WorldWideScience

Sample records for on-site computer system

  1. Savannah River Site computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    1991-03-29

    A computing architecture is a framework for making decisions about the implementation of computer technology and the supporting infrastructure. Because of the size, diversity, and amount of resources dedicated to computing at the Savannah River Site (SRS), there must be an overall strategic plan that can be followed by the thousands of site personnel who make decisions daily that directly affect the SRS computing environment and impact the site's production and business systems. This plan must address the following requirements: There must be SRS-wide standards for procurement or development of computing systems (hardware and software). The site computing organizations must develop systems that end users find easy to use. Systems must be put in place to support the primary function of site information workers. The developers of computer systems must be given tools that automate and speed up the development of information systems and applications based on computer technology. This document describes a proposal for a site-wide computing architecture that addresses the above requirements. In summary, this architecture is standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship.

  3. The feasibility of mobile computing for on-site inspection.

    Energy Technology Data Exchange (ETDEWEB)

    Horak, Karl Emanuel; DeLand, Sharon Marie; Blair, Dianna Sue

    2014-09-01

    With over 5 billion cellphones in a world of 7 billion inhabitants, mobile phones are the most quickly adopted consumer technology in the history of the world. Miniaturized, power-efficient sensors, especially video-capable cameras, are becoming extremely widespread, particularly when one factors in wearable technology like the Pebble smartwatch, GoPro video systems, Google Glass, and lifeloggers. Tablet computers are becoming more common, lighter weight, and power-efficient. In this report the authors explore recent developments in mobile computing and their potential application to on-site inspection for arms control verification and treaty compliance determination. We examine how such technology can effectively be applied to current and potential future inspection regimes. Use cases are given for both host-escort and inspection teams. The results of field trials and their implications for on-site inspections are discussed.

  4. Automating ATLAS Computing Operations using the Site Status Board

    CERN Document Server

    Andreeva, J; The ATLAS collaboration; Campana, S; Di Girolamo, A; Espinal Curull, X; Gayazov, S; Magradze, E; Nowotka, MM; Rinaldi, L; Saiz, P; Schovancova, J; Stewart, GA; Wright, M

    2012-01-01

    The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment uses the SSB intensively for distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities in case of potential problems. ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, the usability of a site from the perspective of ATLAS is calculated. The presentation will describe how SSB is integrated into the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in SSB. It will demonstrate the positive impact of the use of SS...
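
    To make the mechanism concrete, the following minimal sketch shows how a usability score could be derived from a history of pass/fail monitoring metrics and used to exclude sites automatically. The metric names, window and threshold are illustrative assumptions, not the actual ATLAS SSB implementation.

```python
# Hypothetical sketch of SSB-style site exclusion; metric names,
# thresholds, and the history layout are assumptions.

def site_usability(history, window=24):
    """Fraction of the last `window` samples in which the site passed
    all monitored checks (transfer efficiency, job success, ...)."""
    recent = history[-window:]
    passed = sum(1 for sample in recent if all(sample.values()))
    return passed / len(recent) if recent else 0.0

def sites_to_exclude(all_histories, threshold=0.8):
    """Return sites whose recent usability falls below the threshold."""
    return [site for site, hist in all_histories.items()
            if site_usability(hist) < threshold]

# Example: each sample maps metric name -> pass/fail for one time bin.
histories = {
    "SITE_A": [{"transfers_ok": True, "jobs_ok": True}] * 24,
    "SITE_B": [{"transfers_ok": False, "jobs_ok": True}] * 24,
}
print(sites_to_exclude(histories))  # ['SITE_B']
```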

  5. Fixed-site physical protection system modeling

    International Nuclear Information System (INIS)

    Chapman, L.D.

    1975-01-01

    An evaluation of a fixed-site safeguard security system must consider the interrelationships of barriers, alarms, on-site and off-site guards, and their effectiveness against a forcible adversary attack whose intention is to commit an act of sabotage or theft. A computer model has been developed at Sandia Laboratories for the evaluation of alternative fixed-site security systems. Trade-offs involving on-site and off-site response forces and response times, perimeter alarm systems, barrier configurations, and varying levels of threat can be analyzed. The computer model provides a framework for performing inexpensive experiments on fixed-site security systems for testing alternative decisions, and for determining the relative cost effectiveness associated with these decision policies.
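
    The trade-off described above can be illustrated with a minimal Monte Carlo sketch: the defense succeeds when the guard response arrives before the adversary clears the barriers remaining after detection. The exponential delay distributions and parameter values are illustrative assumptions, not the Sandia model itself.

```python
# Toy Monte Carlo of barrier delay versus guard response time;
# all distributions and numbers are illustrative assumptions.
import random

def interruption_probability(barrier_delays, alarm_point, response_mean,
                             trials=100_000):
    """Estimate P(guards arrive before the adversary finishes).

    barrier_delays: mean traversal time (minutes) per barrier, in order.
    alarm_point:    index of the barrier whose alarm starts the response.
    response_mean:  mean guard response time (minutes).
    """
    wins = 0
    for _ in range(trials):
        delays = [random.expovariate(1.0 / d) for d in barrier_delays]
        adversary_remaining = sum(delays[alarm_point:])
        response = random.expovariate(1.0 / response_mean)
        if response < adversary_remaining:
            wins += 1
    return wins / trials

# Earlier detection (alarm at the outer barrier) can beat a faster force:
print(interruption_probability([5, 10, 8], alarm_point=0, response_mean=12))
print(interruption_probability([5, 10, 8], alarm_point=2, response_mean=6))
```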

  6. ANL statement of site strategy for computing workstations

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R. (ed.); Boxberger, L.M.; Amiot, L.W.; Bretscher, M.E.; Engert, D.E.; Moszur, F.M.; Mueller, C.J.; O' Brien, D.E.; Schlesselman, C.G.; Troyer, L.J.

    1991-11-01

    This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85) and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstation acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the Laboratory. The major system components of this hierarchical strategy are: supercomputers, parallel computers, centralized general-purpose computers, distributed multipurpose minicomputers, and computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  7. Cloud computing for protein-ligand binding site comparison.

    Science.gov (United States)

    Hung, Che-Lun; Hua, Guan-Jie

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross-reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high-availability, high-performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery.
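
    A minimal sketch of the MapReduce decomposition described above: the map step scores one binding-site pair, and the reduce step keeps pairs similar enough to be interesting. The compare_sites function is a toy stand-in for an SMAP invocation, and the record layout is an assumption.

```python
# MapReduce-style pairwise comparison sketch; compare_sites stands in
# for SMAP's 3D structural alignment and is only a toy similarity.
from itertools import combinations

def compare_sites(a, b):
    # Toy Jaccard similarity so the sketch runs end to end.
    return len(set(a) & set(b)) / len(set(a) | set(b))

def mapper(pair):
    """Map step: score one binding-site pair (SMAP would run here)."""
    site_a, site_b = pair
    return (site_a, site_b), compare_sites(site_a, site_b)

def reducer(scored_pairs, cutoff=0.5):
    """Reduce step: keep pairs similar enough to suggest shared ligands."""
    return {pair: s for pair, s in scored_pairs if s >= cutoff}

sites = {"1abc_A": "HDSKR", "2xyz_B": "HDSKQ", "3pqr_C": "WLMNP"}
scored = [mapper(p) for p in combinations(sites.values(), 2)]
print(reducer(scored))
```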

  8. FEATURES OF SYSTEM COMPUTER SUPPORT TRAINING ON THE SCHOOL’S SITE

    Directory of Open Access Journals (Sweden)

    Petro H. Shevchuk

    2010-08-01

    Full Text Available The article considers the problems of computerizing the teaching process at public educational establishments by means of global network facilities. It analyses the advantages and disadvantages of electronic support of teaching through the Internet, using the example of the dedicated system on the Miropylska gymnasium web-site. It describes how a teacher and students interact with the cyberspace to publish teaching material on the Internet. The article also includes general recommendations on the employment of similar systems and describes the principal directions of their further development.

  9. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
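
    The core of such on-demand provisioning is a scaling decision driven by the batch queue. A minimal sketch follows, with a hypothetical scheduler interface rather than the actual OpenStack or 1&1 APIs used at IEKP.

```python
# Hedged sketch: watch idle batch jobs and decide how many remote
# VMs (cloud or HPC slots) to start or stop. All names are assumptions.
def scaling_decision(idle_jobs, running_vms, jobs_per_vm=8,
                     min_vms=0, max_vms=100):
    """Return how many VMs to start (>0) or stop (<0)."""
    wanted = -(-idle_jobs // jobs_per_vm)        # ceiling division
    target = max(min_vms, min(max_vms, wanted))
    return target - running_vms

print(scaling_decision(idle_jobs=120, running_vms=5))   # start 10 more
print(scaling_decision(idle_jobs=0, running_vms=5))     # stop all 5
```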

  10. On-Site Renewable Energy and Green Buildings: A System-Level Analysis.

    Science.gov (United States)

    Al-Ghamdi, Sami G; Bilec, Melissa M

    2016-05-03

    Adopting a green building rating system (GBRS) that strongly considers use of renewable energy can have important environmental consequences, particularly in developing countries. In this paper, we studied on-site renewable energy and GBRSs at the system level to explore potential benefits and challenges. While we have focused on GBRSs, the findings can offer additional insight for renewable incentives across sectors. An energy model was built for 25 sites to compute the potential solar and wind power production on-site, available within the building footprint and regional climate. A life-cycle approach and cost analysis were then completed to analyze the environmental and economic impacts. Environmental impacts of renewable energy varied dramatically between sites; in some cases, the environmental benefits were limited despite the significant economic burden of those on-site renewable systems, and vice versa. Our recommendation for GBRSs, and broader policies and regulations, is to require buildings with higher environmental impacts to achieve higher levels of energy performance and on-site renewable energy utilization, instead of fixed percentages.
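
    A back-of-the-envelope sketch of the kind of footprint-limited estimate such an energy model produces for solar; the packing ratio, panel rating and capacity factor below are illustrative assumptions, whereas the study used detailed regional climate data for each of its 25 sites.

```python
# Footprint-limited annual PV estimate; every constant is an assumption.
def annual_pv_output_kwh(roof_area_m2, packing=0.7,
                         panel_kw_per_m2=0.2, capacity_factor=0.17):
    """Annual on-site PV production limited by the building footprint.

    panel_kw_per_m2: rated DC capacity per square metre (~200 W).
    capacity_factor: site/climate dependent; a typical mid-latitude value.
    """
    rated_kw = roof_area_m2 * packing * panel_kw_per_m2
    return rated_kw * capacity_factor * 8760   # hours per year

print(f"{annual_pv_output_kwh(1000):,.0f} kWh/yr")  # ~208,000 kWh/yr
```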

  11. Effectiveness evaluation of alternative fixed-site safeguard security systems

    International Nuclear Information System (INIS)

    Chapman, L.D.

    1976-01-01

    An evaluation of a fixed-site physical protection system must consider the interrelationships of barriers, alarms, on-site and off-site guards, and their effectiveness against a forcible adversary attack intent on creating an act of sabotage or theft. A computer model, the Forcible Entry Safeguard Effectiveness Model (FESEM), was developed for the evaluation of alternative fixed-site protection systems. It was written in the GASP IV simulation language. A hypothetical fixed-site protection system is defined and relative evaluations from a cost-effectiveness point of view are presented in order to demonstrate how the model can be used. Trade-offs involving on-site and off-site response forces and response times, perimeter alarm systems, barrier configurations, and varying levels of threat are analyzed. The computer model provides a framework for performing inexpensive experiments on fixed-site security systems, for testing alternative decisions, and for determining the relative cost effectiveness associated with these decision policies.

  12. On-Site Inspection RadioIsotopic Spectroscopy (Osiris) System Development

    Energy Technology Data Exchange (ETDEWEB)

    Caffrey, Gus J. [Idaho National Laboratory, Idaho Falls, ID (United States); Egger, Ann E. [Idaho National Laboratory, Idaho Falls, ID (United States); Krebs, Kenneth M. [Idaho National Laboratory, Idaho Falls, ID (United States); Milbrath, B. D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jordan, D. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Warren, G. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wilmer, N. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-01

    We have designed and tested hardware and software for the acquisition and analysis of high-resolution gamma-ray spectra during on-site inspections under the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The On-Site Inspection RadioIsotopic Spectroscopy—Osiris—software filters the spectral data to display only radioisotopic information relevant to CTBT on-site inspections, e.g., 132I. A set of over 100 fission-product spectra was employed for Osiris testing. These spectra were measured, where possible, or generated by modeling. The synthetic test spectral compositions include non-nuclear-explosion scenarios, e.g., a severe nuclear reactor accident, and nuclear-explosion scenarios such as a vented underground nuclear test. Comparing its computer-based analyses to expert visual analyses of the test spectra, Osiris correctly identifies CTBT-relevant fission product isotopes at the 95% level or better. The Osiris gamma-ray spectrometer is a mechanically-cooled, battery-powered ORTEC Transpec-100, chosen to avoid the need for liquid nitrogen during on-site inspections. The spectrometer was used successfully during the recent 2014 CTBT Integrated Field Exercise in Jordan. The spectrometer is controlled and the spectral data analyzed by a Panasonic Toughbook notebook computer. To date, software development has been the main focus of the Osiris project. In FY2016-17, we plan to modify the Osiris hardware, integrate the Osiris software and hardware, and conduct rigorous field tests to ensure that the Osiris system will function correctly during CTBT on-site inspections. The planned development will raise Osiris to technology readiness level TRL-8, transfer the Osiris technology to a commercial manufacturer, and demonstrate Osiris to potential CTBT on-site inspectors.
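
    The filtering idea can be sketched as matching detected peak energies against a library of CTBT-relevant fission products. The gamma-line energies below are standard nuclear data; the peak list and matching logic are assumptions, not the actual Osiris algorithm.

```python
# Keep only peaks that match CTBT-relevant fission products.
CTBT_RELEVANT = {          # isotope -> principal gamma line (keV)
    "I-132": 667.7,
    "I-131": 364.5,
    "Ba-140/La-140": 1596.2,
    "Zr-95": 756.7,
}

def filter_peaks(peak_energies_kev, tolerance=1.5):
    """Map detected peaks to CTBT-relevant isotopes within a keV window."""
    hits = {}
    for peak in peak_energies_kev:
        for isotope, line in CTBT_RELEVANT.items():
            if abs(peak - line) <= tolerance:
                hits.setdefault(isotope, []).append(peak)
    return hits

print(filter_peaks([364.8, 667.5, 1460.8]))  # K-40 at 1460.8 is ignored
```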

  13. On-Site Inspection RadioIsotopic Spectroscopy (Osiris) System Development

    International Nuclear Information System (INIS)

    Caffrey, Gus J.; Egger, Ann E.; Krebs, Kenneth M.; Milbrath, B. D.; Jordan, D. V.; Warren, G. A.; Wilmer, N. G.

    2015-01-01

    We have designed and tested hardware and software for the acquisition and analysis of high-resolution gamma-ray spectra during on-site inspections under the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The On-Site Inspection RadioIsotopic Spectroscopy-Osiris-software filters the spectral data to display only radioisotopic information relevant to CTBT on-site inspections, e.g., 132I. A set of over 100 fission-product spectra was employed for Osiris testing. These spectra were measured, where possible, or generated by modeling. The synthetic test spectral compositions include non-nuclear-explosion scenarios, e.g., a severe nuclear reactor accident, and nuclear-explosion scenarios such as a vented underground nuclear test. Comparing its computer-based analyses to expert visual analyses of the test spectra, Osiris correctly identifies CTBT-relevant fission product isotopes at the 95% level or better. The Osiris gamma-ray spectrometer is a mechanically-cooled, battery-powered ORTEC Transpec-100, chosen to avoid the need for liquid nitrogen during on-site inspections. The spectrometer was used successfully during the recent 2014 CTBT Integrated Field Exercise in Jordan. The spectrometer is controlled and the spectral data analyzed by a Panasonic Toughbook notebook computer. To date, software development has been the main focus of the Osiris project. In FY2016-17, we plan to modify the Osiris hardware, integrate the Osiris software and hardware, and conduct rigorous field tests to ensure that the Osiris system will function correctly during CTBT on-site inspections. The planned development will raise Osiris to technology readiness level TRL-8, transfer the Osiris technology to a commercial manufacturer, and demonstrate Osiris to potential CTBT on-site inspectors.

  14. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Full Text Available Computer systems are a critical component of human society in the 21st century. The economy, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, the vulnerability of computer systems and their exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  15. Development of a computer code system for selecting off-site protective action in radiological accidents based on the multiobjective optimization method

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu; Oyama, Kazuo

    1989-09-01

    This report presents a new method to support the selection of off-site protective actions in nuclear reactor accidents, and provides a user's manual for a computer code system, PRASMA, developed using the method. The PRASMA code system gives several candidate sets of protective action zones for evacuation, sheltering and no action, based on the multiobjective optimization method, which requires objective functions and decision variables. We have assigned the population risks of fatality and injury, and cost, as the objective functions, and the distances from the nuclear power plant that delimit the above three protective action zones as the decision variables. (author)
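
    A minimal sketch of the multiobjective selection: enumerate candidate zone distances and keep the Pareto-optimal ones under the three objectives. The risk and cost functions below are toy stand-ins for the code system's dose and cost models.

```python
# Pareto selection over (evacuation, sheltering) radii; the objective
# functions are illustrative assumptions, not PRASMA's models.
from itertools import product

def pareto_front(candidates, objectives):
    """Keep candidates not dominated in every objective by another."""
    scored = [(c, tuple(f(c) for f in objectives)) for c in candidates]
    front = []
    for c, s in scored:
        dominated = any(all(t[i] <= s[i] for i in range(len(s))) and t != s
                        for _, t in scored)
        if not dominated:
            front.append((c, s))
    return front

# Decision variables: evacuation radius <= sheltering radius (km).
candidates = [(e, s) for e, s in product(range(0, 11, 2), repeat=2) if e <= s]
objectives = [
    lambda c: 100 / (1 + c[0]),            # fatality risk falls with evacuation
    lambda c: 50 / (1 + c[1]),             # injury risk falls with sheltering
    lambda c: 10 * c[0]**2 + 2 * c[1]**2,  # cost grows with zone size
]
for zones, scores in pareto_front(candidates, objectives):
    print(zones, [round(x, 1) for x in scores])
```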

  16. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on their skill in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  17. Secure computing on reconfigurable systems

    OpenAIRE

    Fernandes Chaves, R.J.

    2007-01-01

    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. The SCM provides a protected and reliable computational environment, where data security and protection against malicious attacks on the system are assured. The SCM is strongly based on encryption algorithms and on attestation of the executed functions. The use of the SCM on reconfigurable devices has the advantage of being highly adaptable to the application and the user requirements, while providing high performa...

  18. Support system for ATLAS distributed computing operations

    CERN Document Server

    Kishimoto, Tomoe; The ATLAS collaboration

    2018-01-01

    The ATLAS distributed computing system has allowed the experiment to successfully meet the challenges of LHC Run 2. In order for distributed computing to operate smoothly and efficiently, several support teams are organized in the ATLAS experiment. The ADCoS (ATLAS Distributed Computing Operation Shifts) is a dedicated group of shifters who follow and report failing jobs, failing data transfers between sites, degradation of ATLAS central computing services, and more. The DAST (Distributed Analysis Support Team) provides user support to resolve issues related to running distributed analysis on the grid. The CRC (Computing Run Coordinator) maintains a global view of the day-to-day operations. In this presentation, the status and operational experience of the support system for ATLAS distributed computing in LHC Run 2 will be reported. This report also includes operations experience from the grid site point of view, and an analysis of the errors that create the biggest waste of wallclock time. The report of oper...

  19. Defense strategies for cloud computing multi-site server infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL]; Ma, Chris Y. T. [Hang Seng Management College, Hong Kong]; He, Fei [Texas A&M University, Kingsville, TX, USA]

    2018-01-01

    We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure, given by the number of operational servers connected to the network, for sum-form, product-form and composite utility functions.
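
    A toy sketch of the game structure: the provider chooses which components to reinforce, and expected capacity counts the servers whose site and the network both survive. The survival probabilities and costs are illustrative assumptions, not the paper's derivation.

```python
# Sum-form utility sketch: expected capacity minus reinforcement cost.
from itertools import product

def survival(reinforced, attacked, p_weak=0.3, p_strong=0.8):
    """Probability a component survives; higher if reinforced."""
    if not attacked:
        return 1.0
    return p_strong if reinforced else p_weak

def expected_capacity(defense, attack, servers=(100, 60)):
    """Servers still connected: a site counts only if it AND the network survive."""
    net_up = survival(defense["net"], attack["net"])
    return sum(n * survival(defense[s], attack[s]) * net_up
               for s, n in zip(("site1", "site2"), servers))

choices = [dict(zip(("site1", "site2", "net"), c))
           for c in product([False, True], repeat=3)]
cost = lambda d: 20 * sum(d.values())          # fixed cost per reinforcement

# Provider best response to an attack on the wide-area network link:
attack = {"site1": False, "site2": False, "net": True}
best = max(choices, key=lambda d: expected_capacity(d, attack) - cost(d))
print(best, expected_capacity(best, attack) - cost(best))
```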

  20. The use of computer decision-making support systems to justify address rehabilitation of the Semipalatinsk test site area

    OpenAIRE

    Viktoria V. Zaets; Alexey V. Panov

    2011-01-01

    The paper describes the development of a range of optimal protective measures for remediation of the territory of the Semipalatinsk Test Site. The computer system for decision-making support, ReSCA, was employed for the estimations. Costs and radiological effectiveness of countermeasures were evaluated.

  1. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs
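
    The idea behind CANOPY can be sketched as a mapping from lattice sites to owning nodes that user code never sees. The block decomposition below is an illustrative assumption, not the actual CANOPY interface.

```python
# Map a 4D lattice site to the node owning its block; the decomposition
# scheme is an assumption for illustration only.
def node_of_site(site, lattice_dims, node_grid):
    """Return the node index owning a 4D lattice site."""
    coords = []
    for x, extent, nodes in zip(site, lattice_dims, node_grid):
        block = extent // nodes          # sites per node along this axis
        coords.append(x // block)
    node = 0
    for c, n in zip(coords, node_grid):  # linearize the 4D node coordinate
        node = node * n + c
    return node

# 8^4 lattice on a 2x2x2x2 node grid: site (5, 1, 6, 3) -> one of 16 nodes.
print(node_of_site((5, 1, 6, 3), (8, 8, 8, 8), (2, 2, 2, 2)))  # 10
```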

  2. The use of computer decision-making support systems to justify address rehabilitation of the Semipalatinsk test site area

    Directory of Open Access Journals (Sweden)

    Viktoria V. Zaets

    2011-05-01

    Full Text Available The paper describes the development of a range of optimal protective measures for remediation of the territory of the Semipalatinsk Test Site. The computer system for decision-making support, ReSCA, was employed for the estimations. Costs and radiological effectiveness of countermeasures were evaluated.

  3. Computer-based tools for decision support at the Hanford Site

    International Nuclear Information System (INIS)

    Doctor, P.G.; Mahaffey, J.A.; Cowley, P.J.; Freshley, M.D.; Hassig, N.L.; Brothers, J.W.; Glantz, C.S.; Strachan, D.M.

    1992-11-01

    To help integrate activities in the environmental restoration and waste management mission of the Hanford Site, the Hanford Integrated Planning Project (HIPP) was established and funded by the US Department of Energy. The project is divided into three key program elements, the first focusing on an explicit, defensible and comprehensive method for evaluating technical options. Based on the premise that computer technology can be used to support the decision-making process and facilitate integration among programs and activities, the Decision Support Tools Task was charged with assessing the status of computer technology for those purposes at the Site. The task addressed two types of tools: tools needed to provide technical information, and management support tools. Technical tools include performance and risk assessment models, information management systems, data, and the computer infrastructure that supports models, data, and information management systems. Management decision support tools are used to synthesize information at a high level to assist with making decisions. The major conclusions resulting from the assessment are that there is much technical information available, but it is not reaching the decision-makers in a form that can be used. Many existing tools provide components that are needed to integrate site activities; however, some components are missing and, more importantly, the "glue" or connections to tie the components together to answer decision-makers' questions is largely absent. Top priority should be given to decision support tools that support activities given in the TPA. Other decision tools are needed to facilitate and support the environmental restoration and waste management mission.

  4. Computer-based tools for decision support at the Hanford Site

    Energy Technology Data Exchange (ETDEWEB)

    Doctor, P.G.; Mahaffey, J.A.; Cowley, P.J.; Freshley, M.D.; Hassig, N.L.; Brothers, J.W.; Glantz, C.S.; Strachan, D.M.

    1992-11-01

    To help integrate activities in the environmental restoration and waste management mission of the Hanford Site, the Hanford Integrated Planning Project (HIPP) was established and funded by the US Department of Energy. The project is divided into three key program elements, the first focusing on an explicit, defensible and comprehensive method for evaluating technical options. Based on the premise that computer technology can be used to support the decision-making process and facilitate integration among programs and activities, the Decision Support Tools Task was charged with assessing the status of computer technology for those purposes at the Site. The task addressed two types of tools: tools needed to provide technical information, and management support tools. Technical tools include performance and risk assessment models, information management systems, data, and the computer infrastructure that supports models, data, and information management systems. Management decision support tools are used to synthesize information at a high level to assist with making decisions. The major conclusions resulting from the assessment are that there is much technical information available, but it is not reaching the decision-makers in a form that can be used. Many existing tools provide components that are needed to integrate site activities; however, some components are missing and, more importantly, the "glue" or connections to tie the components together to answer decision-makers' questions is largely absent. Top priority should be given to decision support tools that support activities given in the TPA. Other decision tools are needed to facilitate and support the environmental restoration and waste management mission.

  6. Computer-integrated electric-arc melting process control system

    OpenAIRE

    Дёмин, Дмитрий Александрович

    2014-01-01

    Developing common principles for equipping melting-process automation systems with hardware, and creating on this basis rational variants of computer-integrated electric-arc melting control systems, is a relevant task, since it allows a comprehensive approach to the issue of modernizing the melting sites of workshops. This approach allows the computer-integrated electric-arc furnace control system to be formed as part of a queuing system "electric-arc furnace - foundry conveyor" and to consider, when taking ...

  7. 14 CFR 415.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    Title 14 (Aeronautics and Space), Vol. 4, revised as of 2010-01-01: Launch Vehicle From a Non-Federal Launch Site, § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  8. The Atmospheric Release Advisory Capability Site Workstation System

    International Nuclear Information System (INIS)

    Foster, K.T.; Sumikawa, D.A.; Foster, C.S.; Baskett, R.L.

    1993-01-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response service that assesses the consequences that may result from an atmospheric release of toxic material. ARAC was developed by the Lawrence Livermore National Laboratory (LLNL) for the Departments of Energy (DOE) and Defense (DOD) and responds principally to radiological accidents. ARAC provides radiological health and safety guidance to decision makers in the form of computer-generated estimates of the effects of an actual, or potential, release of radioactive material into the atmosphere. Upon receipt of the release scenario, the ARAC assessment staff extracts meteorological, topographic, and geographic data from resident world-wide databases for use in complex, three-dimensional transport and diffusion models. These dispersion models generate air concentration (or dose) and ground deposition contour plots showing estimates of the contamination patterns produced as the toxic material is carried by the prevailing winds. To facilitate the ARAC response to a release from specific DOE and DOD sites and to provide these sites with a local emergency response tool, a remote Site Workstation System (SWS) is being placed at various ARAC-supported facilities across the country. This SWS replaces the existing antiquated ARAC Site System now installed at many of these sites. The new system gives users access to complex atmospheric dispersion models that may be run either by the ARAC staff at LLNL or (in a later phase of the system) by site personnel using the computational resources of the SWS. Supporting this primary function are a variety of SWS-resident supplemental capabilities that include meteorological data acquisition, manipulation of release-specific databases, computer-based communications, and the use of a simpler Gaussian trajectory puff model that is based on the Environmental Protection Agency's INPUFF code.
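
    The simpler puff model mentioned last can be sketched as a single Gaussian puff evaluation. The power-law dispersion coefficients below are crude assumptions; the actual SWS capability is based on the EPA INPUFF code.

```python
# Single Gaussian puff concentration; sigmas via assumed power laws,
# ground reflection omitted for brevity. Not the INPUFF implementation.
import math

def puff_concentration(q, x, y, z, h, travel_dist):
    """Concentration (g/m^3) from one puff of mass q (g).

    (x, y, z): receptor offsets (m) from the puff centre; h: release height.
    Sigmas grow with travel distance via an assumed power law.
    """
    sigma_xy = 0.08 * travel_dist ** 0.9    # horizontal spread (m)
    sigma_z = 0.06 * travel_dist ** 0.85    # vertical spread (m)
    norm = q / ((2 * math.pi) ** 1.5 * sigma_xy ** 2 * sigma_z)
    horiz = math.exp(-(x**2 + y**2) / (2 * sigma_xy**2))
    vert = math.exp(-((z - h) ** 2) / (2 * sigma_z**2))
    return norm * horiz * vert

print(puff_concentration(q=1000, x=50, y=0, z=0, h=30, travel_dist=2000))
```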

  9. Accuracy of biopsy needle navigation using the Medarpa system - computed tomography reality superimposed on the site of intervention

    International Nuclear Information System (INIS)

    Khan, M. Fawad; Maataoui, Adel; Gurung, Jessen; Schiemann, Mirko; Vogl, Thomas J.; Dogan, Selami; Ackermann, Hanns; Wesarg, Stefan; Sakas, Georgios

    2005-01-01

    The aim of this work was to determine the accuracy of a new navigational system, Medarpa, with a transparent display superimposing computed tomography (CT) reality on the site of intervention. Medarpa uses an optical and an electromagnetic tracking system which allows tracking of instruments, the radiologist and the transparent display. The display superimposes a CT view of a phantom chest on a phantom chest model, in real time. In group A, needle positioning was performed using the Medarpa system. Three targets (diameter 1.5 mm) located inside the phantom were punctured. In group B, the same targets were used to perform standard CT-guided puncturing using the single-slice technique. The same needles were used in both groups (15 G, 15 cm). A total of 42 punctures were performed in each group. Post puncture, CT scans were made to verify needle tip positions. The mean deviation from the needle tip to the targets was 6.65±1.61 mm for group A (range 3.54-9.51 mm) and 7.05±1.33 mm for group B (range 4.10-9.45 mm). No significant difference was found between group A and group B for any target (p>0.05). No significant difference was found between the targets of the same group (p>0.05). The accuracy in needle puncturing using the augmented reality system, Medarpa, matches the accuracy achieved by CT-guided puncturing technique. (orig.)

  10. Developing computer systems to support emergency operations: Standardization efforts by the Department of Energy and implementation at the DOE Savannah River Site

    International Nuclear Information System (INIS)

    DeBusk, R.E.; Fulton, G.J.; O'Dell, J.J.

    1990-01-01

    This paper describes the development of standards for emergency operations computer systems for the US Department of Energy (DOE). The proposed DOE computer standards prescribe the necessary power and simplicity to meet the expanding needs of emergency managers. Standards include networked UNIX workstations based on the client-server model and software that presents information graphically using icons and windowing technology. DOE standards are based on those of the computer industry, although DOE is implementing the latest technology to ensure a solid base for future growth. A case study of how these proposed standards are being implemented is also presented. The Savannah River Site (SRS), a DOE facility near Aiken, South Carolina, is automating a manual information system proven over years of development. This system is generalized as a model that can apply to most, if not all, Emergency Operations Centers. The model can provide timely and validated information to emergency managers. By automating this proven system, the system is made easier to use. As experience in the case study demonstrates, computers are only an effective information tool when used as part of a proven process.

  11. SICOM: On-site inspection systems

    International Nuclear Information System (INIS)

    Serna, J.J.; Quecedo, M.; Fernandez, J.R.

    2002-01-01

    As irradiation conditions become more demanding for the fuel than in the past, there is a need for surveillance programs to gather in-reactor operating experience. The data obtained in these programs can be used to assess the performance of current fuel designs and of the improvements incorporated into the fuel assembly design, the performance of advanced cladding alloys, etc. In this regard, valuable data are obtained from on-site fuel inspections. These on-site data comprise fuel assembly dimensional data, such as length and distortion (tilt, twist and bow), and fuel rod data, such as length and oxide thickness. These data have to be reliable and accurate to be useful, thus demanding high-precision inspection equipment. However, the inspection equipment also has to be robust and flexible enough to operate in the plant spent fuel pool and, sometimes, without interfering with the work carried out during a plant outage. To meet these requirements, ENUSA and TECNATOM have during the past years developed two on-site inspection systems. While the first system can perform most of the typical measurements in a stand-alone manner, without interfering with the critical path of the reload, the second one reduces the inspection time but requires using the plant capabilities. The paper describes both on-site fuel inspection systems, their characteristics and main features. (author)

  12. A climatological model for risk computations incorporating site-specific dry deposition influences

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.

    1991-07-01

    A gradient-flux dry deposition module was developed for use in a climatological atmospheric transport model, the Multimedia Environmental Pollutant Assessment System (MEPAS). The atmospheric pathway model computes long-term average contaminant air concentration and surface deposition patterns surrounding a potential release site, incorporating location-specific dry deposition influences. Gradient-flux formulations are used to incorporate site and regional data in the dry deposition module for this atmospheric sector-average climatological model. Application of these formulations provides an effective means of accounting for local surface roughness in deposition computations. Linkage to a risk computation module resulted in a need for separate regional and specific surface deposition computations. 13 refs., 4 figs., 2 tabs
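
    A sketch of how local surface roughness enters a gradient-flux deposition computation, using a neutral-stability resistance analogy with assumed constants; this is not the actual MEPAS module.

```python
# Deposition velocity from the resistance analogy; constants assumed.
import math

VON_KARMAN = 0.4

def deposition_velocity(u_ref, z_ref, z0, r_surface=100.0):
    """Dry deposition velocity (m/s).

    u_ref: wind speed (m/s) at height z_ref (m); z0: roughness length (m);
    r_surface: lumped quasi-laminar + canopy resistance (s/m), assumed.
    """
    u_star = VON_KARMAN * u_ref / math.log(z_ref / z0)     # friction velocity
    r_aero = math.log(z_ref / z0) / (VON_KARMAN * u_star)  # aerodynamic resistance
    return 1.0 / (r_aero + r_surface)

# Rougher surfaces (forest, z0 ~ 1 m) deposit faster than smooth ones:
print(deposition_velocity(5.0, 10.0, 0.01))   # grassland
print(deposition_velocity(5.0, 10.0, 1.0))    # forest
```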

  13. Analyzing Dental Implant Sites From Cone Beam Computed Tomography Scans on a Tablet Computer: A Comparative Study Between iPad and 3 Display Systems.

    Science.gov (United States)

    Carrasco, Alejandro; Jalali, Elnaz; Dhingra, Ajay; Lurie, Alan; Yadav, Sumit; Tadinada, Aditya

    2017-06-01

    The aim of this study was to compare a medical-grade PACS (picture archiving and communication system) monitor, a consumer-grade monitor, a laptop computer, and a tablet computer for linear measurements of height and width for specific implant sites in the posterior maxilla and mandible, along with visualization of the associated anatomical structures. Cone beam computed tomography (CBCT) scans were evaluated. The images were reviewed on a PACS LCD monitor, a consumer-grade LCD monitor using CB-Works software, a 13″ MacBook Pro, and an iPad 4 using OsiriX DICOM reader software. The operators had to identify anatomical structures on each display using a 2-point scale. The user experience between PACS and iPad was also evaluated by means of a questionnaire. The measurements were very similar for each device. P-values were all greater than 0.05, indicating no significant difference between the monitors for each measurement. The intraoperator reliability was very high. The user experience was similar in each category, with the most significant difference regarding portability, where the PACS display received the lowest score and the iPad received the highest score. The iPad with retina display was comparable with the medical-grade monitor, producing similar measurements and image visualization, and thus providing an inexpensive, portable, and reliable screen for analyzing CBCT images in the operating room during implant surgery.

  14. Cyber Security on Nuclear Power Plant's Computer Systems

    International Nuclear Information System (INIS)

    Shin, Ick Hyun

    2010-01-01

    Computer systems are used in many different fields of industry, and most of us take great advantage of them. Because of the effectiveness and great performance of computer systems, we are becoming ever more dependent on them. But the more we depend on a computer system, the greater the risk we face when that system is unavailable, inaccessible or uncontrollable. SCADA (Supervisory Control And Data Acquisition) systems are broadly used for critical infrastructure such as transportation, electricity and water management, and if a SCADA system is vulnerable to cyber attack, the result can be a national disaster. Especially if a nuclear power plant's main control systems were attacked by cyber terrorists, the consequences could be enormous: the release of radioactive material could be the terrorists' main goal, achieved without the use of physical force. In this paper, different types of cyber attacks are described, and a possible structure of an NPP's computer network system is presented. The paper also describes possible ways of destroying the NPP's computer system, along with some suggestions for protection against cyber attacks.

  15. Computer-Based Testing: Test Site Security.

    Science.gov (United States)

    Rosen, Gerald A.

    Computer-based testing places great burdens on all involved parties to ensure test security. A task analysis of test site security might identify the areas of protecting the test, protecting the data, and protecting the environment as essential issues in test security. Protecting the test involves transmission of the examinations, identifying the…

  17. An Information Technology Framework for the Development of an Embedded Computer System for the Remote and Non-Destructive Study of Sensitive Archaeology Sites

    Directory of Open Access Journals (Sweden)

    Iliya Georgiev

    2017-04-01

    Full Text Available The paper proposes an information technology framework for the development of an embedded remote system for non-destructive observation and study of sensitive archaeological sites. The overall concept and motivation are described. The general hardware layout and software configuration are presented. The paper concentrates on the implementation of the following information technology components: (a) a geographically unique identification scheme supporting a global key space for a key-value store; (b) a common method for octree modeling of spatial geometrical models of the archaeological artifacts, and abstract object representation in the global key space; (c) a broadcast of the archaeological information as an Extensible Markup Language (XML) stream over the Web for worldwide availability; and (d) a set of testing methods increasing the fault tolerance of the system. This framework can serve as a foundation for the development of a complete system for remote archaeological exploration of enclosed archaeological sites like buried churches, tombs, and caves. An archaeological site is opened once upon discovery; the embedded computer system, mounted on a robotic platform equipped with sensors, cameras, and actuators, is installed inside; and the intact site is sealed again. Archaeological research is conducted on a multimedia data stream which is sent remotely from the system and conforms to the necessary standards for digital archaeology.
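
    Components (a) and (b) can be sketched together: an octree path addresses a point inside the site, and prefixing a site identifier makes the key globally unique in the key-value store. The key layout below is an illustrative assumption, not the paper's exact scheme.

```python
# Globally unique key = site identifier + octree path; layout assumed.
def octree_path(x, y, z, depth):
    """Octant digits (0-7) from root to leaf for a point in the unit cube."""
    digits = []
    for _ in range(depth):
        octant = (int(x * 2) << 2) | (int(y * 2) << 1) | int(z * 2)
        digits.append(octant)
        x, y, z = (x * 2) % 1, (y * 2) % 1, (z * 2) % 1
    return digits

def global_key(site_id, point, depth=6):
    """Key-value store key, unique worldwide as long as site_id is
    (e.g., issued from a registry)."""
    path = octree_path(*point, depth)
    return f"{site_id}/" + "".join(str(d) for d in path)

store = {}  # stands in for the key-value store
store[global_key("BG-TOMB-0042", (0.2, 0.7, 0.1))] = {"type": "fresco"}
print(list(store))  # ['BG-TOMB-0042/206710']
```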

  18. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics was developed for a network connecting a parallel computing server and a client terminal. Using the system, a user at a client terminal can visualize the results of a CFD (Computational Fluid Dynamics) simulation while the computation is actually running on the server. Using a GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization in real time during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer, and compresses the data. The amount of data sent from the parallel computer to the client is therefore much smaller than without compression, so the user sees images appear quickly and smoothly. Parallelization of image data generation is based on the Owner Computation Rule. The GUI on the client is built on a Java applet. Real-time visualization is thus possible on a client PC whenever a Web browser is installed on it. (author)
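
    The server-side pipeline can be sketched as a render-compress-frame loop. The toy solver, grayscale renderer and length-prefixed framing below are assumptions (the original system used a Java applet client).

```python
# Stream compressed frames, one per solver step; framing scheme assumed.
import zlib, struct

def frame_stream(states, render):
    """Yield length-prefixed, compressed frames, one per solver step."""
    for state in states:
        pixels = render(state)                # rendered on the server side
        payload = zlib.compress(pixels)
        yield struct.pack(">I", len(payload)) + payload

# Toy stand-ins: a 'solver' producing scalar fields and a grayscale renderer.
states = ([i * x % 256 for x in range(64 * 64)] for i in range(1, 4))
render = bytes
for frame in frame_stream(states, render):
    print(f"frame: {len(frame)} bytes on the wire")
```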

  19. Role of computers in CANDU safety systems

    International Nuclear Information System (INIS)

    Hepburn, G.A.; Gilbert, R.S.; Ichiyen, N.M.

    1985-01-01

    Small digital computers are playing an expanding role in the safety systems of CANDU nuclear generating stations, both as active components in the trip logic, and as monitoring and testing systems. The paper describes three recent applications: (i) A programmable controller was retro-fitted to Bruce "A" Nuclear Generating Station to handle trip setpoint modification as a function of booster rod insertion. (ii) A centralized monitoring computer to monitor both shutdown systems and the Emergency Coolant Injection system is currently being retro-fitted to Bruce "A". (iii) The implementation of process trips on the CANDU 600 design using microcomputers. While not truly a retrofit, this feature was added very late in the design cycle to increase the margin against spurious trips, and has now seen about 4 unit-years of service at three separate sites. Committed future applications of computers in special safety systems are also described. (author)

  20. Heat recovery subsystem and overall system integration of fuel cell on-site integrated energy systems

    Science.gov (United States)

    Mougin, L. J.

    1983-01-01

    The best HVAC (heating, ventilating and air conditioning) subsystem to interface with the Engelhard fuel cell system for application in commercial buildings was determined. To accomplish this objective, the effects of several system and site specific parameters on the economic feasibility of fuel cell/HVAC systems were investigated. An energy flow diagram of a fuel cell/HVAC system is shown. The fuel cell system provides electricity for an electric water chiller and for domestic electric needs. Supplemental electricity is purchased from the utility if needed. An excess of electricity generated by the fuel cell system can be sold to the utility. The fuel cell system also provides thermal energy which can be used for absorption cooling, space heating and domestic hot water. Thermal storage can be incorporated into the system. Thermal energy is also provided by an auxiliary boiler if needed to supplement the fuel cell system output. Fuel cell/HVAC systems were analyzed with the TRACE computer program.
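
    The described energy flows amount to an hourly balance in which the fuel cell serves the electric and thermal loads first, the utility and an auxiliary boiler cover any deficits, and excess electricity is sold back. A toy sketch with illustrative numbers, not TRACE results:

```python
# Hourly energy balance for a fuel cell/HVAC system; values illustrative.
def hourly_balance(fc_elec_kw, fc_heat_kw, elec_load_kw, heat_load_kw):
    bought = max(0.0, elec_load_kw - fc_elec_kw)   # purchased from utility
    sold = max(0.0, fc_elec_kw - elec_load_kw)     # excess sold to utility
    boiler = max(0.0, heat_load_kw - fc_heat_kw)   # auxiliary boiler makeup
    return {"bought": bought, "sold": sold, "boiler": boiler}

print(hourly_balance(fc_elec_kw=200, fc_heat_kw=250,
                     elec_load_kw=180, heat_load_kw=300))
# {'bought': 0.0, 'sold': 20.0, 'boiler': 50.0}
```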

  1. Computer System Resource Requirements of Novice Programming Students.

    Science.gov (United States)

    Nutt, Gary J.

    The characteristics of jobs that constitute the mix for lower division FORTRAN classes in a university were investigated. Samples of these programs were also benchmarked on a larger central site computer and two minicomputer systems. It was concluded that a carefully chosen minicomputer system could offer service at least the equivalent of the…

  2. Merchandising-computer trials 1995 - studies on six merchandising systems; Apteringsdatortest 1995 - studier av sex apteringssystem

    Energy Technology Data Exchange (ETDEWEB)

    Sondell, J.; Essen, I. von

    1996-11-01

    Skogforsk has conducted a series of studies on the second generation of merchandising computers (bucking-to-value computers) available on the Swedish market. The harvesters were moved to the same study site in southern Sweden. Each machine processed about 50 Norway spruce trees, and the price lists and study conditions were kept the same for all six systems. The layout of the systems is described and the ergonomics briefly evaluated. The accuracy of the length and diameter measurements, and how well the systems exploited the value of the wood, were studied. The results will be used both by the manufacturers for future development and by the forest companies as a basis for investment decisions. 11 refs, 4 figs, 5 tabs

  3. FFTF fission gas monitor computer system

    International Nuclear Information System (INIS)

    Hubbard, J.A.

    1987-01-01

    The Fast Flux Test Facility (FFTF) is a liquid-metal-cooled test reactor located on the Hanford site. A dual computer system has been developed to monitor the reactor cover gas to detect and characterize any fuel or test pin fission gas releases. The system acquires gamma spectra data, identifies isotopes, calculates specific isotope and overall cover gas activity, presents control room alarms and displays, and records and prints data and analysis reports. The fission gas monitor system makes extensive use of commercially available hardware and software, providing a reliable and easily maintained system. The design provides extensive automation of previous manual operations, reducing the need for operator training and minimizing the potential for operator error. The dual nature of the system allows one monitor to be taken out of service for periodic tests or maintenance without interrupting the overall system functions. A built-in calibrated gamma source can be controlled by the computer, allowing the system to provide rapid system self tests and operational performance reports

  4. Development of plant status display system for on-site educational training system

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Fujimoto, Junzo; Okamoto, Hisatake; Tsunoda, Ryohei; Watanabe, Takao; Masuko, Jiro.

    1986-01-01

    The purpose of this system is to make it easier to understand the facilities and dynamics of nuclear power plants. This report describes current trends in, and the future position of, educational training systems, and also describes the experiments performed. The main results are as follows. 1. The present status and future tendency of educational training systems for nuclear power plant operators. CAI (Computer Assisted Instruction) systems have the following characteristics. (1) It is easy to introduce plant-specific characteristics into the educational training. (2) It is easy to provide detailed training that complements the full-scale simulator. 2. Plant status display system for the on-site educational training system. The fundamental functions of the system are as follows. (1) It has 2 CRT displays and voice output devices. (2) It has an easy-to-operate man-machine interface. (3) It has a function for evaluating the training results. 3. The effectiveness of this system. An effectiveness evaluation test has been carried out using the system. (1) The system proved to be essentially effective, and some improvements for future utilization were pointed out. (2) Changes of the CRT displays should be faster, and the system should have an explanation function for when plant transients are displayed. (author)

  5. Organization of the secure distributed computing based on multi-agent system

    Science.gov (United States)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, developing methods for distributed computing receives much attention, and one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as computing nodes. The proposed multi-agent control system for distributed computing makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve a large task by creating a distributed computation. Agents on a computer network can: configure a distributed computing system; distribute the computational load among the computers operated by agents; and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting computers to the new computing system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects cases of falsification of the results in a distributed system, which could otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.
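
    The falsification check can be sketched as redundant assignment: the central agent gives each task to two nodes and accepts a result only when the replicas agree. The task and node structures below are illustrative assumptions.

```python
# Redundant-assignment verification sketch; structures are assumptions.
import random

def run_with_verification(tasks, nodes, compute):
    verified, suspect = {}, []
    for task in tasks:
        a, b = random.sample(nodes, 2)        # two independent replicas
        ra, rb = compute(a, task), compute(b, task)
        if ra == rb:
            verified[task] = ra
        else:
            suspect.append((task, a, b))      # re-run or flag the nodes
    return verified, suspect

nodes = ["pc1", "pc2", "pc3", "bad-pc"]
def compute(node, task):                      # 'bad-pc' falsifies results
    return task * task if node != "bad-pc" else 0

ok, bad = run_with_verification(range(6), nodes, compute)
print(sorted(ok.items()), bad)
```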

  6. Interactive computer enhanced remote viewing system

    International Nuclear Information System (INIS)

    Smith, D.A.; Tourtellott, J.A.

    1994-01-01

    The Interactive, Computer Enhanced, Remote Viewing System (ICERVS) is a volumetric data system designed to help the Department of Energy (DOE) improve remote operations in hazardous sites by providing reliable and accurate maps of task spaces where robots will clean up nuclear wastes. The ICERVS mission is to acquire, store, integrate, and manage all the sensor data for a site and to provide the necessary tools to facilitate its visualization and interpretation. Empirical sensor data enters through the Common Interface for Sensors and, after initial processing, is stored in the Volumetric Database. The data can be analyzed and displayed via a Graphic User Interface with a variety of visualization tools. Other tools permit the construction of geometric objects, such as wire frame models, to represent objects which the operator may recognize in the live TV image. A computer image can be generated that matches the viewpoint of the live TV camera at the remote site, facilitating access to site data. Lastly, the data can be gathered, processed, and transmitted in acceptable form to a robotic controller. Descriptions are given of all these components. The final phase of the ICERVS project, which has just begun, will produce a full-scale system and demonstrate it at a DOE site to be selected. A task added to this phase will adapt the ICERVS to meet the needs of the dismantlement and decommissioning (D&D) work at the Oak Ridge National Laboratory (ORNL).

  7. Effects Of Social Networking Sites (SNSs) On Hyper Media Computer Mediated Environments (HCMEs)

    OpenAIRE

    Yoon C. Cho

    2011-01-01

    Social Networking Sites (SNSs) are known as tools to interact and build relationships between users/customers in Hyper Media Computer Mediated Environments (HCMEs). This study explored how social networking sites play a significant role in communication between users. While numerous researchers have examined the effectiveness of social networking websites, few studies have investigated which factors affect customers' attitudes and behavior toward social networking sites. In this paper, the authors inv...

  8. The commissioning of CMS sites: Improving the site reliability

    International Nuclear Information System (INIS)

    Belforte, S; Fisk, I; Flix, J; Hernandez, J M; Klem, J; Letts, J; Magini, N; Saiz, P; Sciaba, A

    2010-01-01

    The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use its network to transfer data, the functionality of all the site services relevant for CMS, and the capability to sustain the various CMS computing workflows at the required scale. This contribution describes in detail the procedure to rate CMS sites depending on their performance, including the complete automation of the program, the description of monitoring tools, and its impact on improving the overall reliability of the Grid from the point of view of the CMS computing system.
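
    A minimal sketch of the kind of rating such a procedure can produce, assuming a made-up 80% pass-rate threshold and a five-day window rather than CMS's actual commissioning policy:

    # Minimal sketch of rating sites from functional test results.
    from collections import defaultdict

    def site_availability(test_log):
        """test_log: list of (site, test_name, passed) tuples for one day."""
        passed = defaultdict(int)
        total = defaultdict(int)
        for site, _test, ok in test_log:
            total[site] += 1
            passed[site] += ok
        return {site: passed[site] / total[site] for site in total}

    def commissioned_sites(daily_ratings, threshold=0.80, days_required=5):
        """A site is 'ready' if it stays above threshold for N consecutive days."""
        ready = []
        for site, history in daily_ratings.items():
            if len(history) >= days_required and \
               all(r >= threshold for r in history[-days_required:]):
                ready.append(site)
        return ready

    log = [("T2_ES_CIEMAT", "transfer", True), ("T2_ES_CIEMAT", "jobrobot", True),
           ("T2_XX_TEST", "transfer", False), ("T2_XX_TEST", "jobrobot", True)]
    print(site_availability(log))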

  9. Methods and systems for identifying ligand-protein binding sites

    KAUST Repository

    Gao, Xin

    2016-05-06

    The invention provides a novel integrated structure and system-based approach for drug target prediction that enables the large-scale discovery of new targets for existing drugs. Novel computer-readable storage media and computer systems are also provided. Methods and systems of the invention use novel sequence order-independent structure alignment, hierarchical clustering, and probabilistic sequence similarity techniques to construct a probabilistic pocket ensemble (PPE) that captures even promiscuous structural features of different binding sites for a drug on known targets. The drug's PPE is combined with an approximation of the drug delivery profile to facilitate large-scale prediction of novel drug-protein interactions with several applications to biological research and drug development.

  10. QUEST Hanford Site Computer Users - What do they do?

    Energy Technology Data Exchange (ETDEWEB)

    WITHERSPOON, T.T.

    2000-03-02

    The Fluor Hanford Chief Information Office requested that a computer-user survey be conducted to determine users' dependence on the computer and its importance to their ability to accomplish their work. Daily use trends and future needs of Hanford Site personal computer (PC) users were also to be defined. A primary objective was to use the data to determine how budgets should be focused toward providing those services that are truly needed by the users.

  11. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  12. Impact of new computing systems on finite element computations

    International Nuclear Information System (INIS)

    Noor, A.K.; Fulton, R.E.; Storaasi, O.O.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified

  13. Cyber Security on Nuclear Power Plant's Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Ick Hyun [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2010-10-15

    Computer systems are used in many different fields of industry, and most of us take great advantage of them. Because of their effectiveness and performance, we have become highly dependent on computers; but the more dependent we are, the greater the risk we face when a computer system is unavailable, inaccessible, or uncontrollable. SCADA (Supervisory Control And Data Acquisition) systems are broadly used for critical infrastructure such as transportation, electricity, and water management, and if a SCADA system is vulnerable to cyber attack, the result can be a national disaster. The consequences could be especially severe if a nuclear power plant's main control systems were attacked by cyber terrorists, since leaking radioactive material could serve a terrorist's purpose without the use of physical force. In this paper, different types of cyber attacks are described, and a possible structure of an NPP's computer network system is presented. The paper also discusses possible ways the NPP's computer system could be compromised, along with some suggestions for protection against cyber attacks.

  14. Interactive computer-enhanced remote viewing system

    International Nuclear Information System (INIS)

    Tourtellott, J.A.; Wagner, J.F.

    1995-01-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This need for a task space model is most pronounced in the remediation of obsolete production facilities and underground storage tanks. Production facilities at many sites contain compact process machinery and systems that were used to produce weapons grade material. For many such systems, a complex maze of pipes (with potentially dangerous contents) must be removed, and this represents a significant D&D challenge. In an analogous way, the underground storage tanks at sites such as Hanford represent a challenge because of their limited entry and the tumbled profusion of in-tank hardware. In response to this need, the Interactive Computer-Enhanced Remote Viewing System (ICERVS) is being designed as a software system to: (1) Provide a reliable geometric description of a robotic task space, and (2) Enable robotic remediation to be conducted more effectively and more economically than with available techniques. A system such as ICERVS is needed because of the problems discussed below.

  15. Overview of the DIII-D program computer systems

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1997-11-01

    Computer systems pervade every aspect of the DIII-D National Fusion Research program. This includes real-time systems acquiring experimental data from data acquisition hardware; CPU server systems performing short term and long term data analysis; desktop activities such as word processing, spreadsheets, and scientific paper publication; and systems providing mechanisms for remote collaboration. The DIII-D network ties all of these systems together and connects to the ESNET wide area network. This paper gives an overview of these systems, including their purposes and functionality and how they connect to other systems. Computer systems include seven different types of UNIX systems (HP-UX, REALIX, SunOS, Solaris, Digital UNIX, Ultrix, and IRIX), OpenVMS systems (both VAX and Alpha), Macintosh, Windows 95, and more recently Windows NT systems. Most of the network internally is ethernet, with some use of FDDI. A T3 link connects to ESNET and thus to the Internet. Recent upgrades to the network have notably improved its efficiency, but the demand for bandwidth is ever increasing. By means of software and mechanisms still in development, computer systems at remote sites are playing an increasing role both in accessing and analyzing data and even in participating in certain controlling aspects of the experiment. The advent of audio/video over the Internet is now presenting a new means for remote sites to participate in the DIII-D program.

  16. [Three dimensional CT reconstruction system on a personal computer].

    Science.gov (United States)

    Watanabe, E; Ide, T; Teramoto, A; Mayanagi, Y

    1991-03-01

    A new computer system to produce three-dimensional surface images from CT scans has been developed. Although many similar systems have already been developed and reported, they are too expensive to set up in routine clinical services because most are based on high-power minicomputer systems. Holding that a practical 3D-CT system should be usable in daily clinical activities on only a personal computer, we have transplanted the 3D program onto a personal computer running MS-DOS (16-bit, 12 MHz). We added to the program a routine which simulates surgical dissection on the surface image. The time required to produce the surface image ranges from 40 to 90 seconds. To facilitate the simulation, we connected the 3D system with the neuronavigator. The navigator gives the position of the surgical simulation when the surgeon places the navigator tip on the patient's head, thus simulating the surgical excision before the real dissection.
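
    A present-day analogue of the surface-from-CT step, sketched with scikit-image's marching cubes routine on a synthetic volume; the 300 HU bone threshold and the library choice are assumptions for illustration, not the paper's MS-DOS implementation.

    # Minimal sketch of extracting a surface mesh from a CT volume with the
    # marching cubes algorithm; the threshold is an assumed bone level.
    import numpy as np
    from skimage import measure

    # Synthetic stand-in for a CT volume: a sphere of "bone" density (HU).
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = np.where(x**2 + y**2 + z**2 < 24**2, 1000.0, -1000.0)

    # Extract the isosurface as a triangle mesh (vertices + faces).
    verts, faces, normals, values = measure.marching_cubes(volume, level=300.0)
    print(f"{len(verts)} vertices, {len(faces)} triangles")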

  17. An E-learning System based on Affective Computing

    Science.gov (United States)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning has become very popular as a learning system. However, current e-learning systems cannot instruct students effectively because they do not consider the student's emotional state in the context of instruction. The emergence of the theory of "affective computing" can address this problem: it allows the computer's intelligence to be more than purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on affective computing. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with a teaching style chosen in consideration of the student's personality traits. A "man-to-man" learning environment is built to simulate traditional classroom pedagogy in the system.
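
    A minimal sketch of a dimensional (valence/arousal) emotion model driving a tutoring decision; the quadrant labels and tutoring actions are illustrative assumptions, not the paper's actual rules.

    # Minimal sketch: map valence/arousal scores in [-1, 1] to a coarse
    # emotional state, then to an adaptive tutoring action.
    def emotion_state(valence, arousal):
        if valence >= 0:
            return "engaged" if arousal >= 0 else "content"
        return "frustrated" if arousal >= 0 else "bored"

    def tutor_action(state):
        return {
            "engaged": "continue at current pace",
            "content": "raise difficulty slightly",
            "frustrated": "offer a hint and encouragement",
            "bored": "switch task or add a challenge",
        }[state]

    print(tutor_action(emotion_state(valence=-0.6, arousal=0.7)))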

  18. ON-BOARD COMPUTER SYSTEM FOR KITSAT-1 AND 2

    Directory of Open Access Journals (Sweden)

    H. S. Kim

    1996-06-01

    Full Text Available KITSAT-1 and 2 are microsatellites weighing 50 kg, and all on-board data are processed by the on-board computer system. Hence, these on-board computers must be highly reliable and be designed within tight power consumption, mass, and size constraints. The on-board computer (OBC) systems for KITSAT-1 and 2 are therefore designed with simple, flexible hardware for reliability, with software taking more responsibility than hardware. The KITSAT-1 and 2 on-board computer systems consist of the OBC186 as the primary OBC and the OBC80 as its backup. OBC186 runs the spacecraft operating system (SCOS), which has real-time multi-tasking capability. Since launch, OBC186 and OBC80 have been operating successfully to this day. In this paper, we describe the development of the OBC186 hardware and software and analyze its in-orbit operational performance.

  19. Operational facility-integrated computer system for safeguards

    International Nuclear Information System (INIS)

    Armento, W.J.; Brooksbank, R.E.; Krichinsky, A.M.

    1980-01-01

    A computer system for safeguards in an active, remotely operated nuclear fuel processing pilot plant has been developed. This system maintains (1) comprehensive records of special nuclear materials, (2) automatically updated book inventory files, (3) material transfer catalogs, (4) timely inventory estimations, (5) sample transactions, (6) automatic, on-line volume balances and alarming, and (7) terminal access and applications software monitoring and logging. Future development will include near-real-time SNM mass balancing as both a static, in-tank summation and a dynamic, in-line determination. It is planned to incorporate aspects of site security and physical protection into the computer monitoring.
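
    A minimal sketch of the on-line volume balance and alarming idea, assuming an invented transfer log and a 1% imbalance limit; the real system's tank models and limits are not described in the record.

    # Minimal sketch of an on-line volume balance with alarming.
    ALARM_FRACTION = 0.01  # assumed 1% imbalance alarm limit

    def volume_balance(transfers, measured_level):
        """Compare book inventory (sum of logged transfers) to a measured level."""
        book = sum(v for _source, v in transfers)
        imbalance = measured_level - book
        alarm = abs(imbalance) > ALARM_FRACTION * max(book, 1e-9)
        return book, imbalance, alarm

    transfers = [("feed_tank", +120.0), ("process_line", -35.5), ("sample", -0.4)]
    book, imbalance, alarm = volume_balance(transfers, measured_level=83.0)
    print(f"book={book:.1f} L, imbalance={imbalance:+.1f} L, alarm={alarm}")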

  20. Computer-Aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1996-09-27

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. The document outlines the negotiated requirements as agreed to by GTE Northwest during technical contract discussions. It specifies a commercial off-the-shelf computer dispatching system providing both text and graphic display information while interfacing with the diverse alarm reporting systems within the Hanford Site. The system provides expansion capability to integrate Hanford Fire and the Occurrence Notification Center, as well as back-up capability for the Plutonium Processing Facility (PFP).

  1. Radiological findings of the chondroblastomas on the atypical sites of the skeleton system

    International Nuclear Information System (INIS)

    Zhang He; Yao Weiwu; Yang Shixun; Li Minghua; Cheng Yingsheng; Zhang Huizhen

    2007-01-01

    Objective: To review the radiological findings of chondroblastomas at atypical sites of the skeletal system. Methods: We collected the complete imaging data of 13 patients with pathologically confirmed chondroblastomas at atypical sites of the skeletal system from the department of orthopedics of Shanghai No. 6 Hospital since 1991. Of these patients, 11 were male and 2 were female. Ages ranged from 10 to 50 years, with an average of 26.2 years. A retrospective analysis of radiological signs from the different diagnostic imaging modalities was made. Results: X-ray examination was performed on all cases. On the plain X-ray films, all lesions were lytic: radiolucent lesions were seen in 10 cases and mixed density in 3 cases, and 10 cases showed an expansile contour. Eleven cases underwent computed tomography (CT) examination. On CT, visible calcification was present in 8 cases, a sclerotic margin in 10 cases, and internal septation in 4 cases; soft tissue masses could be seen in 3 cases. Magnetic resonance imaging (MRI) was performed on 5 cases. The lesions showed hypo- to intermediate-intensity signal on T1-weighted images (T1WI) and heterogeneous hyperintense signal on T2-weighted images (T2WI). Fluid-fluid and solid-fluid levels were seen in 3 cases. On one post-contrast examination, moderate enhancement was seen in the solid portion of the tumor, with obvious enhancement of the septation within the lesion. Conclusion: The radiological findings of chondroblastomas at atypical sites of the skeletal system are not in themselves diagnostic; however, they can display some signs particular to chondroid tumors, such as calcification and septation. Effectively applying the different imaging modalities can help in making a correct diagnosis before the operation. (authors)

  2. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is a newly emerging technology that has been used in other industries with great success. Despite its attractive features, cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed cloud system applies cloud computing technology to the EHR system to present a comprehensive, integrated EHR environment.

  3. The use of computer-assisted interactive videodisc training in reactor operations at the Savannah River site

    International Nuclear Information System (INIS)

    Shiplett, D.W.

    1990-01-01

    This presentation discussed the use of computer-aided training at the Savannah River Site using a computer-assisted interactive videodisc system. The system was used in situations where training was required frequently, where there were large numbers of people to be trained, and where work schedules were rigid. It was used to support classroom training by emphasizing major points, displaying graphics of flowpaths, and presenting simulations and video of actual equipment.

  4. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    International Nuclear Information System (INIS)

    Andreeva, J; Campos, M Devesas; Cros, J Tarragon; Gaidioz, B; Karavakis, E; Kokoszkiewicz, L; Lanciotti, E; Maier, G; Ollivier, W; Nowotka, M; Rocha, R; Sadykov, T; Saiz, P; Sargsyan, L; Sidorova, I; Tuckett, D

    2011-01-01

    LHC experiments are currently taking collision data. The distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of the middleware, and also the chances of failures or inefficiencies in the involved components. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services, as well as monitoring the LHC computing activities themselves, are among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities, including the follow-up of jobs, transfers, and site and service availability. This presentation describes the Experiment Dashboard applications used by the LHC experiments and the experience gained during the first months of data taking.

  5. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; Buncic, P; De, K; Oleynik, D; Petrosyan, A; Jha, S; Mount, R; Porter, R J; Read, K F; Wells, J C; Vaniachine, A

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center 'Kurchatov Institute' together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the

  6. A Nuclear Safety System based on Industrial Computer

    International Nuclear Information System (INIS)

    Kim, Ji Hyeon; Oh, Do Young; Lee, Nam Hoon; Kim, Chang Ho; Kim, Jae Hack

    2011-01-01

    The Plant Protection System (PPS), a nuclear safety Instrumentation and Control (I and C) system for Nuclear Power Plants (NPPs), generates a reactor trip on abnormal reactor conditions. The Core Protection Calculator System (CPCS) is a safety system that generates and transmits the channel trip signal to the PPS on an abnormal condition. Currently, these systems are designed on Programmable Logic Controller (PLC) based platforms, and it is necessary to consider a new system platform to adopt a simpler system configuration and an improved software development process. The CPCS was the first implementation using a microcomputer in a nuclear power plant safety protection system, in 1980; it has been deployed in Ulchin units 3,4,5,6 and Younggwang units 3,4,5,6. The CPCS software was developed on the Concurrent Micro5 minicomputer using assembly language and embedded into the Concurrent 3205 computer. Following the microcomputer-based CPCS, the PLC-based Common-Q platform has been used for the ShinKori/ShinWolsong units 1,2 PPS and CPCS, and the POSAFE-Q PLC platform is used for the ShinUlchin units 1,2 PPS and CPCS. In developing the next-generation safety system platform, several factors (e.g., hardware/software reliability, flexibility, licensability and industrial support) can be considered. This paper suggests an Industrial Computer (IC) based protection system that can be developed with improved flexibility without losing system reliability. The IC-based system has the advantage of a simple system configuration with optimized processor boards, because of improved processor performance and unlimited interoperability between the target system and development systems that use commercial CASE tools. This paper presents the background to selecting the IC-based system with a case study design of the CPCS. Eventually, this kind of platform can be used for nuclear power plant safety systems like the PPS, CPCS, Qualified Indication and Alarm System-PAMI (QIAS-P), and Engineering Safety

  7. A Nuclear Safety System based on Industrial Computer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Hyeon; Oh, Do Young; Lee, Nam Hoon; Kim, Chang Ho; Kim, Jae Hack [Korea Electric Power Corporation Engineering and Construction, Daejeon (Korea, Republic of)

    2011-05-15

    The Plant Protection System (PPS), a nuclear safety Instrumentation and Control (I and C) system for Nuclear Power Plants (NPPs), generates a reactor trip on abnormal reactor conditions. The Core Protection Calculator System (CPCS) is a safety system that generates and transmits the channel trip signal to the PPS on an abnormal condition. Currently, these systems are designed on Programmable Logic Controller (PLC) based platforms, and it is necessary to consider a new system platform to adopt a simpler system configuration and an improved software development process. The CPCS was the first implementation using a microcomputer in a nuclear power plant safety protection system, in 1980; it has been deployed in Ulchin units 3,4,5,6 and Younggwang units 3,4,5,6. The CPCS software was developed on the Concurrent Micro5 minicomputer using assembly language and embedded into the Concurrent 3205 computer. Following the microcomputer-based CPCS, the PLC-based Common-Q platform has been used for the ShinKori/ShinWolsong units 1,2 PPS and CPCS, and the POSAFE-Q PLC platform is used for the ShinUlchin units 1,2 PPS and CPCS. In developing the next-generation safety system platform, several factors (e.g., hardware/software reliability, flexibility, licensability and industrial support) can be considered. This paper suggests an Industrial Computer (IC) based protection system that can be developed with improved flexibility without losing system reliability. The IC-based system has the advantage of a simple system configuration with optimized processor boards, because of improved processor performance and unlimited interoperability between the target system and development systems that use commercial CASE tools. This paper presents the background to selecting the IC-based system with a case study design of the CPCS. Eventually, this kind of platform can be used for nuclear power plant safety systems like the PPS, CPCS, Qualified Indication and Alarm System-PAMI (QIAS-P), and Engineering Safety

  8. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
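
    The two balancing styles investigated above can be contrasted in a short sketch that uses Python's multiprocessing as a stand-in for the PVM workers of the original study; the block costs and chunk sizes are invented for illustration.

    # Minimal sketch contrasting static partitioning with task-queue balancing.
    from multiprocessing import Pool

    def solve_block(block_id):
        """Stand-in for relaxing one grid block of the flow solver; costs vary."""
        return block_id, sum(i * i for i in range(10_000 * (block_id + 1)))

    if __name__ == "__main__":
        blocks = range(8)
        with Pool(processes=4) as pool:
            # Static balancing: work is split into fixed chunks up front.
            static = pool.map(solve_block, blocks, chunksize=2)
            # Task-queue balancing: idle workers pull the next block one at a
            # time, which absorbs the uneven block costs above.
            dynamic = list(pool.imap_unordered(solve_block, blocks, chunksize=1))
        print(len(static), len(dynamic))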

  9. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  10. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  11. Unusual sites of metastatic recurrence of osteosarcoma detected on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography

    International Nuclear Information System (INIS)

    Kabnurkar, Rasika; Agrawal, Archi; Rekhi, Bharat; Purandare, Nilendu; Shah, Sneha; Rangarajan, Venkatesh

    2015-01-01

    Osteosarcoma (OS) is the most common nonhematolymphoid primary bone malignancy, characterized by osteoid or new bone formation. The lungs and bones are the most common sites of metastases. We report a case in which unusual sites of soft tissue recurrence of OS were detected on a restaging fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography scan performed after a disease-free interval of 6 years.

  12. Second Asia-Pacific Conference on the Computer Aided System Engineering

    CERN Document Server

    Chaczko, Zenon; Jacak, Witold; Łuba, Tadeusz; Computational Intelligence and Efficiency in Engineering Systems

    2015-01-01

    This carefully edited and reviewed volume addresses the increasingly popular demand for seeking more clarity in the data that we are immersed in. It offers excellent examples of the intelligent ubiquitous computation, as well as recent advances in systems engineering and informatics. The content represents state-of-the-art foundations for researchers in the domain of modern computation, computer science, system engineering and networking, with many examples that are set in industrial application context. The book includes the carefully selected best contributions to APCASE 2014, the 2nd Asia-Pacific Conference on  Computer Aided System Engineering, held February 10-12, 2014 in South Kuta, Bali, Indonesia. The book consists of four main parts that cover data-oriented engineering science research in a wide range of applications: computational models and knowledge discovery; communications networks and cloud computing; computer-based systems; and data-oriented and software-intensive systems.

  13. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters

    Directory of Open Access Journals (Sweden)

    Abreu Rui MV

    2010-10-01

    Full Text Available Abstract. Background: Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters, and no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable, non-dedicated computer clusters. Implementation: MOLA automates several tasks, including ligand preparation, distribution of parallel AutoDock4/Vina jobs, and result analysis. When the virtual screening project finishes, an open-office spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. Conclusion: MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any available platform-independent computer can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a
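
    A minimal sketch of the master-node bookkeeping such a tool automates: spreading ligands across cluster nodes and ranking the finished poses by binding energy. The dock() stub and file names are placeholders standing in for the real AutoDock4/Vina runs, not MOLA's actual implementation.

    # Minimal sketch: round-robin job distribution plus energy-ranked results.
    from itertools import cycle

    def dock(ligand):
        """Placeholder scoring; a real node would run AutoDock4 or Vina here."""
        return {"ligand": ligand,
                "energy_kcal": -6.0 - (sum(map(ord, ligand)) % 40) / 10}

    def screen(ligands, nodes):
        # Round-robin assignment of ligands to cluster nodes.
        assignment = {node: [] for node in nodes}
        for ligand, node in zip(ligands, cycle(nodes)):
            assignment[node].append(ligand)
        # Collect and rank all results by binding energy (most negative first).
        results = [dock(l) for batch in assignment.values() for l in batch]
        return sorted(results, key=lambda r: r["energy_kcal"])

    ranked = screen([f"ligand_{i:04d}.pdbqt" for i in range(12)],
                    ["node1", "node2", "node3"])
    print(ranked[0])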

  14. Wired well sites: future of data acquisition based on open architecture

    Energy Technology Data Exchange (ETDEWEB)

    Marsters, S.

    1999-10-01

    Until recently, there have been two primary ways to access process information such as flow volumes, pressure, temperature, surface cards, on/off status, fluid levels, torque/rpm, and location from remote sites. One way was to drive or helicopter to the site and retrieve the data. The other was SCADA, a data acquisition system composed of a remote terminal unit (RTU) to measure data at the site, a communication system to relay the data from the site to a central location, and a host computer system at the central site to receive and display the data brought in by the communication system. This paper is devoted to the examination of several of the communication systems, such as cellular digital packet data modems, low earth orbiting satellite systems, and geosynchronous satellite systems, and of devices that permit communication with remote host computer systems, such as processor assisted connectors (PAC) and the Intersat-developed global data network (GDN). With these proprietary devices in place, companies have the freedom to select virtually any type of communications system and combine it with virtually any type of output device to serve as the host computer. The GDN can automatically recognize the type of data transmitted, the source, the destination, and whether the information is scheduled or unscheduled, and raise an alarm in the case of unscheduled messages to initiate an immediate response. A second level of Intersat service (IP ANYWHERE) includes the capability for customers to check a well's status at any time they wish, to alter flow volume, or to turn a valve on or off. The potential for IP ANYWHERE is estimated to be excellent, especially in difficult-to-access remote locations. 2 photos.

  15. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  16. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  17. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.
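
    A minimal sketch of the image-caching idea, assuming an LRU eviction policy (the record does not specify one) and invented tile names; real tiles would come from decoded single-molecule images.

    # Minimal sketch of an LRU cache for rendered image tiles.
    from collections import OrderedDict

    class TileCache:
        def __init__(self, capacity=64):
            self.capacity = capacity
            self._tiles = OrderedDict()

        def get(self, key, loader):
            """Return a cached tile, loading and evicting LRU entries as needed."""
            if key in self._tiles:
                self._tiles.move_to_end(key)      # mark as most recently used
                return self._tiles[key]
            tile = loader(key)                    # expensive decode/render step
            self._tiles[key] = tile
            if len(self._tiles) > self.capacity:
                self._tiles.popitem(last=False)   # evict least recently used
            return tile

    cache = TileCache(capacity=2)
    load = lambda key: f"<pixels of {key}>"
    for key in ["mol_001", "mol_002", "mol_001", "mol_003"]:
        cache.get(key, load)
    print(list(cache._tiles))                     # mol_001 kept, mol_002 evicted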

  18. Experiment Dashboard - a generic, scalable solution for monitoring of the LHC computing activities, distributed sites and services

    International Nuclear Information System (INIS)

    Andreeva, J; Cinquilli, M; Dieguez, D; Dzhunov, I; Karavakis, E; Karhula, P; Kenyon, M; Kokoszkiewicz, L; Nowotka, M; Ro, G; Saiz, P; Tuckett, D; Sargsyan, L; Schovancova, J

    2012-01-01

    The Experiment Dashboard system provides common solutions for monitoring job processing, data transfers and site/service usability. Over the last seven years, it proved to play a crucial role in the monitoring of the LHC computing activities, distributed sites and services. It has been one of the key elements during the commissioning of the distributed computing systems of the LHC experiments. The first years of data taking represented a serious test for Experiment Dashboard in terms of functionality, scalability and performance. And given that the usage of the Experiment Dashboard applications has been steadily increasing over time, it can be asserted that all the objectives were fully accomplished.

  19. Development of a Computer Writing System Based on EOG.

    Science.gov (United States)

    López, Alberto; Ferrero, Francisco; Yangüela, David; Álvarez, Constantina; Postolache, Octavian

    2017-06-26

    The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders.
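
    A minimal sketch of the signal-classification subsystem, assuming simple threshold rules on horizontal and vertical EOG channels; the thresholds and the command mapping are invented for illustration, not the paper's classifier.

    # Minimal sketch: classify EOG deflections into cursor commands.
    import numpy as np

    THRESH_UV = 150.0  # assumed deflection threshold in microvolts

    def classify_window(horizontal, vertical):
        """Classify one signal window into a writing command."""
        h, v = np.mean(horizontal), np.mean(vertical)
        if v > THRESH_UV and abs(h) < THRESH_UV:
            return "select"            # e.g., a deliberate upward spike/blink
        if h > THRESH_UV:
            return "move_right"
        if h < -THRESH_UV:
            return "move_left"
        return "idle"

    rng = np.random.default_rng(0)
    window = rng.normal(0, 20, 50)                   # baseline noise
    print(classify_window(window + 200, window))     # -> move_right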

  20. Development of a Computer Writing System Based on EOG

    Directory of Open Access Journals (Sweden)

    Alberto López

    2017-06-01

    Full Text Available The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders.

  1. 9th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzyński, Marek; Woźniak, Michał; Żołnierek, Andrzej

    2016-01-01

    The computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 79 carefully selected articles contributed by experts of pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Features, learning, and classifiers; Biometrics; Data Stream Classification and Big Data Analytics; Image processing and computer vision; Medical applications; Applications; RGB-D perception: recent developments and applications. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.

  2. Procedures for Computing Site Seismicity

    Science.gov (United States)

    1994-02-01

    References cited include the Fourth World Conference on Earthquake Engineering, Santiago, Chile, 1969, and Schnabel, P.B., J. Lysmer, and H.B. Seed (1972), SHAKE, a computer program for... The fault system is composed of the Elsinore and Whittier fault zones, the Agua Caliente fault, and the Earthquake Valley fault. Five recent earthquakes of

  3. Computer-Mediated Communications Systems: Will They Catch On?

    Science.gov (United States)

    Cook, Dave; Ridley, Michael

    1990-01-01

    Describes the use of CoSy, a computer conferencing system, by academic librarians at McMaster University in Ontario. Computer-mediated communications systems (CMCS) are discussed, the use of the system for electronic mail and computer conferencing is described, the perceived usefulness of CMCS is examined, and a sidebar explains details of the…

  4. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  5. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  6. A dynamical-systems approach for computing ice-affected streamflow

    Science.gov (United States)

    Holtschlag, David J.

    1996-01-01

    A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.
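
    A minimal sketch of estimating a site-specific ice-effect coefficient from routinely measured variables by least squares; the model form, coefficient, and data are invented for illustration and are not the study's equations.

    # Minimal sketch: fit an ice-effect coefficient relating discharge reduction
    # to air temperature, then predict ice-affected streamflow.
    import numpy as np

    # Routinely measured inputs: air temperature (deg C) and open-water discharge.
    air_temp = np.array([-12.0, -8.0, -3.0, -1.0, 2.0])
    q_open = np.array([42.0, 45.0, 47.0, 50.0, 52.0])     # m^3/s
    q_observed = np.array([28.1, 33.6, 41.2, 46.4, 51.8]) # m^3/s

    # Assumed model: Q_ice = Q_open * (1 - b0 * max(0, -T)); ice reduces conveyance,
    # so Q_open - Q_obs = b0 * Q_open * max(0, -T) is linear in b0.
    X = (q_open * np.maximum(0.0, -air_temp)).reshape(-1, 1)
    y = q_open - q_observed
    b0, *_ = np.linalg.lstsq(X, y, rcond=None)            # least-squares estimate

    q_predicted = q_open - b0[0] * X.ravel()
    rmse = np.sqrt(np.mean((q_predicted - q_observed) ** 2))
    print(f"b0 = {b0[0]:.4f}, rmse = {rmse:.2f} m^3/s")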

  7. Computer-based route-definition system for peripheral bronchoscopy.

    Science.gov (United States)

    Graham, Michael W; Gibbs, Jason D; Higgins, William E

    2012-04-01

    Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.
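
    A minimal sketch of the route-definition step viewed as graph search: breadth-first search recovers the complete branch sequence from the trachea to a target branch. The toy tree below is an invented stand-in for an airway tree segmented from an MDCT scan.

    # Minimal sketch: BFS route through an airway-tree graph.
    from collections import deque

    airway_tree = {                      # parent branch -> child branches
        "trachea": ["left_main", "right_main"],
        "right_main": ["RUL", "bronchus_intermedius"],
        "bronchus_intermedius": ["RML", "RLL"],
        "RLL": ["RB9", "RB10"],
        "left_main": ["LUL", "LLL"],
    }

    def route_to(target, root="trachea"):
        """Return the complete branch sequence from the root to the target."""
        parent = {root: None}
        queue = deque([root])
        while queue:
            node = queue.popleft()
            if node == target:
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for child in airway_tree.get(node, []):
                if child not in parent:
                    parent[child] = node
                    queue.append(child)
        return None

    print(route_to("RB10"))  # trachea -> right_main -> bronchus_intermedius -> RLL -> RB10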

  8. Computers and School Nurses in a Financially Stressed School System: The Case of St. Louis

    Science.gov (United States)

    Cummings, Scott

    2013-01-01

    This article describes the incorporation of computer technology into the professional lives of school nurses. St. Louis, Missouri, a major urban school system, is the site of the study. The research describes several major impacts computer technology has on the professional responsibilities of school nurses. Computer technology not only affects…

  9. Conceptual design of a cover system for the degmay uranium tailings site

    Energy Technology Data Exchange (ETDEWEB)

    Vatsidin, Saidov; David, S. Kessel; Kim, Chang Lak [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2016-06-15

    The Republic of Tajikistan has ten former uranium mining sites. The total volume of all tailings is approximately 55 million tonnes, and the covered area is more than 200 hectares. The safe management of legacy uranium mining and tailing sites has become an issue of concern. Depending on the performance requirements and site-specific conditions (location in an arid, semiarid or humid region), a cover system for uranium tailings sites can be constructed from several material layers using both natural and man-made materials. The purpose of this study is to find a feasible, cost-effective cover system design for the Degmay uranium tailings site that could provide a long period (100 years) of protection. The HELP computer code was used in the evaluation of potential Degmay cover system designs. As a result of this study, a cover system with a 70 cm thick percolation layer, a 30 cm thick drainage layer, a geomembrane liner, and a 60 cm thick barrier soil layer is recommended because it minimizes cover thickness and would be the most cost-effective design.
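
    A minimal sketch of the annual water budget a HELP-style evaluation screens for a percolation/drainage/liner cover; all fractions and the precipitation value are assumed for illustration and greatly simplify HELP's actual layer-by-layer simulation.

    # Minimal sketch: split annual precipitation into runoff, evapotranspiration,
    # lateral drainage, and percolation through the barrier layers.
    def annual_budget(precip_mm, runoff_frac=0.15, et_frac=0.55,
                      drainage_frac=0.25, liner_leak_frac=0.02):
        """Return an approximate water budget (mm/yr) for a layered cover."""
        runoff = runoff_frac * precip_mm
        et = et_frac * precip_mm
        infiltration = precip_mm - runoff - et
        lateral_drainage = drainage_frac * infiltration
        reaching_liner = infiltration - lateral_drainage
        percolation = liner_leak_frac * reaching_liner   # geomembrane defect leakage
        return {"runoff": runoff, "et": et,
                "lateral_drainage": lateral_drainage, "percolation": percolation}

    print(annual_budget(precip_mm=350.0))  # semiarid-site example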

  10. 8th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzynski, Marek; Wozniak, Michał; Zolnierek, Andrzej

    2013-01-01

    The computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 86 carefully selected articles contributed by experts of pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Biometrics; Data Stream Classification and Big Data Analytics; Features, learning, and classifiers; Image processing and computer vision; Medical applications; Miscellaneous applications; Pattern recognition and image processing in robotics; Speech and word recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.

  11. Interactive computer-enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)

    1995-10-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  12. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation incorporating these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used on the telemedicine site makes file encryption and login control effective, so that patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation and our telemedicine network system, can increase diagnostic speed and diagnostic accuracy while improving the security of medical information.

  13. Computer aid in rescue organisation on site in case of catastrophic situation on nuclear plant

    International Nuclear Information System (INIS)

    Teissier, M.

    1992-01-01

    The rescue organisation in case of a catastrophic situation is based on known principles: creation of medical buffer structures between the hazard spot, where injured people are collected, and rear hospitals, and triage of victims by urgency. We propose computer aid in order to evaluate the time needed to prepare and evacuate all the victims from the site, given the inventory of available means, waiting periods and lengths of intervention, and the types and number of victims. It thus becomes possible to optimize the rescue organisation, qualitatively and quantitatively, and to improve the efficiency of rescue operations. (author)

  14. Computer-based planning of optimal donor sites for autologous osseous grafts

    Science.gov (United States)

    Krol, Zdzislaw; Chlebiej, Michal; Zerfass, Peter; Zeilhofer, Hans-Florian U.; Sader, Robert; Mikolajczak, Pawel; Keeve, Erwin

    2002-05-01

    Bone graft surgery is often necessary for reconstruction of craniofacial defects after trauma, tumor, infection or congenital malformation. In this operative technique the removed or missing bone segment is filled with a bone graft. The mainstay of craniofacial reconstruction rests with the replacement of the defective bone by autogenous bone grafts. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention is required. The major problem is to determine as accurately as possible the donor site from which the graft should be dissected and to define the shape of the desired transplant. A computer-aided method for semi-automatic selection of optimal donor sites for autografts in craniofacial reconstructive surgery has been developed. The non-automatic step of graft design and constraint setting is followed by a fully automatic procedure to find the best fitting position. In extension to preceding work, a new optimization approach based on the Levenberg-Marquardt method has been implemented and embedded into our computer-based surgical planning system. Once the pre-processing step has been performed, this new technique enables selection of the optimal donor site in less than one minute. The method has been applied during the surgical planning step in more than 20 cases. The postoperative observations have shown that functional results, such as speech and chewing ability as well as restoration of bony continuity, were clearly better compared to conventionally planned operations. Moreover, in most cases the duration of the surgical interventions was distinctly reduced.
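    The abstract names the Levenberg-Marquardt method as the optimizer for finding the best-fitting graft position. A minimal sketch of that kind of fit, assuming the task reduces to rigidly aligning sampled donor-surface points to a target graft shape (the paper's actual cost function and constraints are not given above), using SciPy's least_squares with method="lm":

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, donor_pts, target_pts):
    """Residual vector for a rigid transform: 3 rotation-vector parameters
    followed by 3 translation parameters."""
    rot = Rotation.from_rotvec(params[:3])
    moved = rot.apply(donor_pts) + params[3:]
    return (moved - target_pts).ravel()

# Hypothetical point samples: a donor-site patch and the desired graft pose.
rng = np.random.default_rng(0)
donor = rng.uniform(-1.0, 1.0, (50, 3))
true_rot = Rotation.from_rotvec([0.10, -0.20, 0.05])
target = true_rot.apply(donor) + np.array([2.0, 0.5, -1.0])

fit = least_squares(residuals, x0=np.zeros(6), args=(donor, target), method="lm")
print("rotation vector:", np.round(fit.x[:3], 3))   # ~ [0.10, -0.20, 0.05]
print("translation:   ", np.round(fit.x[3:], 3))    # ~ [2.0, 0.5, -1.0]
```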

  15. Computational design of trimeric influenza-neutralizing proteins targeting the hemagglutinin receptor binding site

    Energy Technology Data Exchange (ETDEWEB)

    Strauch, Eva-Maria; Bernard, Steffen M.; La, David; Bohn, Alan J.; Lee, Peter S.; Anderson, Caitlin E.; Nieusma, Travis; Holstein, Carly A.; Garcia, Natalie K.; Hooper, Kathryn A.; Ravichandran, Rashmi; Nelson, Jorgen W.; Sheffler, William; Bloom, Jesse D.; Lee, Kelly K.; Ward, Andrew B.; Yager, Paul; Fuller, Deborah H.; Wilson, Ian A.; Baker, David (UWASH); (Scripps); (FHCRC)

    2017-06-12

    Many viral surface glycoproteins and cell surface receptors are homo-oligomers [1-4], and thus can potentially be targeted by geometrically matched homo-oligomers that engage all subunits simultaneously to attain high avidity and/or lock subunits together. The adaptive immune system cannot generally employ this strategy, since individual antibody binding sites are not arranged with the appropriate geometry to simultaneously engage multiple sites in a single target homo-oligomer. We describe a general strategy for the computational design of homo-oligomeric protein assemblies with binding functionality precisely matched to homo-oligomeric target sites [5-8]. In the first step, a small protein is designed that binds a single site on the target. In the second step, the designed protein is assembled into a homo-oligomer such that the designed binding sites are aligned with the target sites. We use this approach to design high-avidity trimeric proteins that bind influenza A hemagglutinin (HA) at its conserved receptor binding site. The designed trimers can both capture and detect HA in a paper-based diagnostic format, neutralize influenza in cell culture, and completely protect mice when given as a single dose 24 h before or after challenge with influenza.

  16. Interactive computer-enhanced remote viewing system

    International Nuclear Information System (INIS)

    Tourtellott, J.A.; Wagner, J.F.

    1995-01-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system that provides a reliable geometric description of a robotic task space and enables robotic remediation to be conducted more effectively and more economically.

  17. Design considerations for on-site spent-fuel transfer systems

    International Nuclear Information System (INIS)

    Jones, R.H.; Jones, C.R.

    1989-06-01

    Studies on spent fuel shipping logistics and operation make it clear that the use of large casks, i.e., 100--125 tons, is superior to smaller casks of similar construction. This superiority manifests itself in transportation and shipping economics and safety, as well as in reduced personnel exposure during the processing of the casks. An on-site system for the transfer of spent fuel from the storage pool to a large shipping or storage cask, as well as the transfer of spent fuel directly from a storage cask to a shipping cask, could bring the benefits of large casks to those restricted reactors. Sensing the need to look more closely at this opportunity, EPRI contracted with S. Levy, Incorporated of Campbell, CA to develop a set of design considerations for such transfer systems. Rather than embark on another design study, EPRI decided to first identify the system considerations that must be factored into any design. The format for this effort presents both the Consideration and the Rationale for the consideration. The resulting work identified thirty-six General Considerations and two Special Considerations. The Considerations are in the form of mandatory requirements and desirable but nonmandatory requirements. Additionally, a brief economic study was performed to gauge the cost considerations of on-site transfers. The study results suggest a relatively narrow set of scenarios where on-site transfers are economically superior to alternatives. These scenarios generally involve the use of concrete casks as on-site storage devices.

  18. Review of the reliability of Bruce 'B' RRS dual computer system

    International Nuclear Information System (INIS)

    Arsenault, J.E.; Manship, R.A.; Levan, D.G.

    1995-07-01

    The review presents an analysis of the Bruce 'B' Reactor Regulating System (RRS) Digital Control Computer (DCC) system, based on system documentation, significant event reports (SERs), question sets, and a site visit. The intent is to evaluate the reliability of the RRS DCC and to identify the possible scenarios that could lead to a serious process failure. The evaluation is based on three relatively independent analyses, which are integrated and presented in the form of Conclusions and Recommendations.

  19. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, processing of big SRS data is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to develop applications in both academia and industry, and it holds considerable promise for biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then ran comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  20. Overview of the ATLAS distributed computing system

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total, more than 300 PB of data are distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever-growing LHC luminosity in future runs, new developments are underway to use opportunistic resources such as HPCs even more efficiently and to utilize new technologies. This presentation will review and explain the outline and the performance of the ATLAS distributed computing system and give an outlook on new workflow and data management ideas for the beginning of LHC Run 3.

  1. Computer Operating System Maintenance.

    Science.gov (United States)

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  2. FFTF integrated leak rate computer system

    International Nuclear Information System (INIS)

    Hubbard, J.A.

    1987-01-01

    The Fast Flux Test Facility (FFTF) is a liquid-metal-cooled test reactor located on the Hanford site. The FFTF is the only reactor of this type designed and operated to meet the licensing requirements of the Nuclear Regulatory Commission. Unique characteristics of the FFTF that present special challenges related to leak rate testing include thin-wall containment vessel construction, cover gas systems that penetrate containment, and a low-pressure design basis accident. The successful completion of the third FFTF integrated leak rate test 5 days ahead of schedule and 10% under budget was a major achievement for the Westinghouse Hanford Company. The success of this operational safety test was due in large part to a special local area network (LAN) of three IBM PC/XT computers, which monitored the sensor data, calculated the containment vessel leak rate, and displayed test results. The equipment configuration allowed continuous monitoring of the progress of the test independent of the data acquisition and analysis functions, and it also provided overall improved system reliability by permitting immediate switching to backup computers in the event of equipment failure.

  3. System and method for controlling power consumption in a computer system based on user satisfaction

    Science.gov (United States)

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
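    The selection step of the claimed method can be pictured as a lookup over learned (user, application, frequency) -> satisfaction data. A toy sketch under that assumption; the table contents, threshold, and function names are invented for illustration and are not from the patent.

```python
# Minimal sketch of the selection step, assuming satisfaction scores in [0, 1]
# have already been learned per (user, application, frequency).
SATISFACTION = {
    ("alice", "browser"): {0.8e9: 0.55, 1.6e9: 0.90, 2.4e9: 0.93},
    ("alice", "video"):   {0.8e9: 0.30, 1.6e9: 0.70, 2.4e9: 0.95},
}

def pick_frequency(user, app, threshold=0.85):
    """Lowest stored frequency whose learned satisfaction meets the threshold;
    falls back to the highest frequency if none does."""
    table = SATISFACTION[(user, app)]
    ok = [f for f, s in sorted(table.items()) if s >= threshold]
    return ok[0] if ok else max(table)

print(pick_frequency("alice", "browser") / 1e9, "GHz")  # 1.6 GHz is enough
print(pick_frequency("alice", "video") / 1e9, "GHz")    # needs 2.4 GHz
```

    Choosing the lowest satisfying frequency is what saves power; the learned per-user, per-application table is what distinguishes this from a fixed governor.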

  4. CASDAC system: On-Site Multiplexer user's guide

    International Nuclear Information System (INIS)

    Yamamoto, Yoichi; Koyama, Kinji

    1993-03-01

    The CASDAC (Containment and Surveillance Data Authenticated Communication) system has been developed by JAERI for nuclear safeguards and the physical protection of nuclear material. It is a remote monitoring system for continual verification of the security and safeguards status of nuclear material. The CASDAC system consists of two subsystems: a Grand Command Center (GCC) subsystem and a facility subsystem. This report describes the outline and usage of the On-Site Multiplexer (OSM), which controls all other equipment in a facility subsystem and communicates with the GCC. This work has been carried out in the framework of the Japan Support Programme for Agency Safeguards (JASPAS) as project JA-1. (author)

  5. Patch-up system on GEC 4080 computer

    International Nuclear Information System (INIS)

    Bryden, A.D.

    1978-02-01

    The patch-up system for the rescue of events on bubble chamber film was described in Rutherford Laboratory Report RHEL-R-190 (1970). The present report highlights the changes that have had to be made to the system in the transfer from the IBM 360/195 to the GEC 4080 computer and should be used in conjunction with the earlier report. (U.K.)

  6. Assessment of On-site sanitation system on local groundwater regime in an alluvial aquifer

    Science.gov (United States)

    Quamar, Rafat; Jangam, C.; Veligeti, J.; Chintalapudi, P.; Janipella, R.

    2017-12-01

    The present study is an attempt to assess the impact of on-site sanitation systems on the groundwater sources in their vicinity. The study was undertaken in the Agra city of the Yamuna sub-basin. Three sampling sites, namely Pandav Nagar, Ayodhya Kunj and Laxmi Nagar, were selected. The groundwater samples were analyzed for major cations, anions and faecal coliform. The critical parameters chloride, nitrate and faecal coliform were considered to assess the impact of the on-site sanitation systems. The analytical results showed that, except for chloride, most of the samples in the first two sites exceeded the Bureau of Indian Standards limits for drinking water for the other analyzed parameters, i.e., nitrate and faecal coliform. In Laxmi Nagar, except for faecal coliform, all the samples were below the BIS limits. In all three sites, faecal coliform was found in the majority of the samples. The present study indicates that contamination of groundwater in an alluvial setting is less than in hard-rock settings where on-site sanitation systems have been implemented.

  7. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that no physical computer can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  8. CERN’s Computing rules updated to include policy for control systems

    CERN Multimedia

    IT Department

    2008-01-01

    The use of CERN’s computing facilities is governed by rules defined in Operational Circular No. 5 and its subsidiary rules of use. These rules are available from the web site http://cern.ch/ComputingRules. Please note that the subsidiary rules for Internet/Network use have been updated to include a requirement that control systems comply with the CNIC (Computing and Network Infrastructure for Control) Security Policy. The security policy for control systems, which was approved earlier this year, can be accessed at https://edms.cern.ch/document/584092.
    IT Department

  9. Phonon transmission and thermal conductance in one-dimensional system with on-site potential disorder

    International Nuclear Information System (INIS)

    Ma Songshan; Xu Hui; Deng Honggui; Yang Bingchu

    2011-01-01

    The role of on-site potential disorder in the phonon transmission and thermal conductance of a one-dimensional system is investigated. We found that on-site potential disorder can lead to the localization of phonons and has a great effect on the phonon transmission and thermal conductance of the system. As the on-site potential disorder W increases, the transmission coefficients decrease and approach zero at the band edges. Correspondingly, the thermal conductance decreases drastically, and the curves for thermal conductance exhibit a series of steps and plateaus. Meanwhile, when the on-site potential disorder W is strong enough, the thermal conductance decreases dramatically with increasing system size N. We also found that reducing the thermal conductance by increasing the on-site potential disorder strength is much more efficient than increasing the on-site potential's amplitude. - Highlights: → We studied the effect of on-site potential disorder on thermal transport. → Increasing disorder decreases thermal transport. → Increasing system size also decreases its thermal conductance. → Increasing disorder is more efficient than increasing the potential amplitude in reducing thermal conductance.
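    The transmission coefficient of such a disordered chain is conventionally computed with transfer matrices. A small sketch under assumed unit masses and spring constants (the record does not give the model parameters): random on-site terms of strength W are embedded between perfect leads, and T(ω) is obtained by matching plane waves across the disordered segment.

```python
import numpy as np

def transmission(eps, omega):
    """Transmission coefficient of a 1-D harmonic chain segment with on-site
    terms eps[n], embedded in perfect leads (unit mass and spring constant).
    Bulk dispersion of the leads: omega**2 = 2 - 2*cos(q)."""
    w2 = omega**2
    q = np.arccos(1.0 - w2 / 2.0)            # lead wavenumber, needs 0 < w2 < 4
    P = np.eye(2, dtype=complex)
    for e in eps:                             # u_{n+1} = (2 + e - w2) u_n - u_{n-1}
        P = np.array([[2.0 + e - w2, -1.0], [1.0, 0.0]]) @ P
    N = len(eps)
    # Left lead: u_n = e^{iqn} + r e^{-iqn}; right lead: u_n = t e^{iqn}.
    # Propagate (u_1, u_0) through P to (u_{N+1}, u_N) and solve for r, t.
    vp = np.array([np.exp(1j * q), 1.0])      # incident wave at (u_1, u_0)
    vm = np.array([np.exp(-1j * q), 1.0])     # reflected wave
    vt = np.array([np.exp(1j * q * (N + 1)), np.exp(1j * q * N)])
    A = np.column_stack((P @ vm, -vt))
    r, t = np.linalg.solve(A, -(P @ vp))
    return abs(t) ** 2

rng = np.random.default_rng(1)
for W in (0.0, 0.5, 2.0):                     # disorder strength; W=0 gives T=1
    eps = rng.uniform(-W / 2, W / 2, size=200)
    print(f"W={W}: T={transmission(eps, omega=1.0):.4f}")
```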

  10. Proposal for a Similar Question Search System on a Q&A Site

    Directory of Open Access Journals (Sweden)

    Katsutoshi Kanamori

    2014-06-01

    There is a service to help Internet users obtain answers to specific questions when they visit a Q&A site. A Q&A site is very useful for the Internet user, but posted questions are often not answered immediately. This delay occurs because in most cases another site user must answer the question manually. In this study, we propose a system that can present a question similar to a question posted by a user. An advantage of this system is that the user can refer to the answer to the similar question. This research measures the similarity of candidate questions based on words and dependency parsing. In an experiment, we examined the effectiveness of the proposed system on questions actually posted on the Q&A site. The results indicate that the system can show the questioner the answer to a similar question, although the system still has a number of aspects that should be improved.
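    The lexical half of such a similarity measure can be as simple as word-set overlap; the dependency-parsing half would need a parser, so the sketch below covers only the word part and the candidate ranking, with invented example questions.

```python
def jaccard(a, b):
    """Word-overlap similarity between two questions: the lexical half of the
    word + dependency-parse measure described above."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def most_similar(new_question, answered):
    """Return the already-answered question most similar to the new one."""
    return max(answered, key=lambda q: jaccard(new_question, q))

faq = ["how do i reset my router password",
       "why does my laptop battery drain so fast",
       "how can i recover a deleted file on windows"]
print(most_similar("forgot the password of my router, how to reset it?", faq))
```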

  11. Drainage Systems Effect on Surgical Site Infection in Children with Perforated Appendicitis

    Directory of Open Access Journals (Sweden)

    Seref Kilic

    2016-09-01

    Aim: The effect of replacing an open drainage system with a closed drainage system on surgical site infection (SSI) in children operated on for perforated appendicitis was evaluated. Material and Method: Hospital files and computer records of perforated appendicitis cases operated on in 2004-2010 were evaluated retrospectively. Open drainage systems were used in 70 cases (group I) and closed systems were used in the others (group II). Results: Eleven of the SSI cases had superficial infection and 3 had organ/space infection. The SSI rate was 15.7% for group I and 7.5% for group II. The length of antibiotic treatment was 7.5 ± 3.4 days for group I and 6.4 ± 2.2 days for group II; the difference between groups was not statistically significant. Hospitalization length was 8.2 ± 3.1 days for group I and 6.8 ± 1.9 days for group II, a statistically significant difference. Discussion: SSI is an important problem that increases morbidity and treatment costs through longer hospitalization and antibiotic treatment. The open drainage system used in operations on patients with perforated appendicitis leads to an increased frequency of SSI compared to the closed drainage system. Thus, closed drainage systems should be preferred when drainage is necessary in operations for perforated appendicitis in children.
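    For intuition, the reported group-I rate corresponds to 11 SSI cases out of 70; assuming a group-II size of 40 (not stated above), 3 cases give the reported 7.5%. A quick sketch of how such a two-group comparison might be tested:

```python
from scipy.stats import chi2_contingency

# Counts consistent with the reported rates: 11/70 = 15.7 % in group I and,
# assuming a group-II size of 40 (an illustration, not stated in the record),
# 3/40 = 7.5 % in group II.
table = [[11, 70 - 11],   # group I: SSI, no SSI
         [3, 40 - 3]]     # group II: SSI, no SSI
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # with samples this small, p is only indicative
```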

  12. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning across computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute an x86-64 machine code, and recommends th...

  13. Searching your site's management information systems

    Energy Technology Data Exchange (ETDEWEB)

    Marquez, W.; Rollin, C. [S.M. Stoller Corp., Boulder, CO (United States)]

    1994-12-31

    The Department of Energy's guidelines for the Baseline Environmental Management Report (BEMR) encourage the use of existing data when compiling information. Specific systems mentioned include the Progress Tracking System, the Mixed-Waste Inventory Report, the Waste Management Information System, DOE 4700.1-related systems, Programmatic Environmental Impact Statement (PEIS) data, and existing Work Breakdown Structures. In addition to these DOE-Headquarters tracking and reporting systems, there are a number of site systems that will be relied upon to produce the BEMR, including: (1) site management control and cost tracking systems; (2) commitment/issues tracking systems; (3) program-specific internal tracking systems; (4) site material/equipment inventory systems. New requirements have often prompted the creation of new, customized tracking systems. This is a very time- and money-consuming process. As the BEMR Management Plan emphasizes, an effort should be made to use the information in existing tracking systems. Because of the wealth of information currently available from in-place systems, development of a new tracking system should be a last resort.

  14. CAISSE (Computer Aided Information System on Solar Energy) technical manual

    Energy Technology Data Exchange (ETDEWEB)

    Cantelon, P E; Beinhauer, F W

    1979-01-01

    The Computer Aided Information System on Solar Energy (CAISSE) was developed to provide the general public with information on solar energy and its potential uses and costs for domestic consumption. CAISSE is an interactive computing system which illustrates solar heating concepts through the use of 35 mm slides, text displays on a screen and a printed report. The user communicates with the computer by responding to questions about his home and heating requirements through a touch-sensitive screen. The CAISSE system contains a solar heating simulation model which calculates the heating load capable of being supplied by a solar heating system and uses this information to illustrate installation costs, fuel savings, and a 20-year life-cycle analysis of costs and benefits. The system contains several sets of radiation and weather data for Canada and the USA. The selection of one of four collector models is based upon the requirements input during the computer session. Optimistic and pessimistic fuel cost forecasts are made for oil, natural gas, electricity, and propane, and the forecasted fuel cost is made the basis of the life-cycle cost evaluation for the solar heating application chosen. This manual is organized so that each section describes one major aspect of the use of solar energy systems to provide energy for domestic consumption. The sources of data and technical information and the method of incorporating them into the CAISSE display system are described in the same order as the computer processing. Each section concludes with a list of future developments that could be included to make CAISSE outputs more regionally specific and more useful to designers. 19 refs., 1 tab.
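    The 20-year life-cycle arithmetic CAISSE performs can be sketched as discounted fuel savings under an escalating fuel price, minus the installation cost. All numbers below are illustrative assumptions, not CAISSE data.

```python
def life_cycle_savings(annual_fuel_cost, solar_fraction, escalation,
                       discount, install_cost, years=20):
    """Net present value of a solar heating system over its life: discounted
    fuel savings (fuel prices escalating each year) minus installation cost."""
    npv = -install_cost
    for year in range(1, years + 1):
        saved = annual_fuel_cost * solar_fraction * (1 + escalation) ** year
        npv += saved / (1 + discount) ** year
    return npv

# Two fuel-price escalation scenarios, echoing the optimistic and pessimistic
# forecasts mentioned above (illustrative rates and costs).
for label, esc in (("low escalation ", 0.04), ("high escalation", 0.10)):
    print(label, round(life_cycle_savings(1200, 0.6, esc, 0.08, 8000), 2))
```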

  15. Development of a computer writing system based on EOG

    OpenAIRE

    López, A.; Ferrero, F.; Yangüela, D.; Álvarez, C.; Postolache, O.

    2017-01-01

    The development of a novel computer writing system based on eye movements is introduced herein. A system of this kind requires three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical i...

  16. A study on the nuclear computer code maintenance and management system

    International Nuclear Information System (INIS)

    Kim, Yeon Seung; Huh, Young Hwan; Lee, Jong Bok; Choi, Young Gil; Suh, Soong Hyok; Kang, Byong Heon; Kim, Hee Kyung; Kim, Ko Ryeo; Park, Soo Jin

    1990-12-01

    According to current software development and quality assurance trends, it is necessary to develop a computer code management system for nuclear programs. For this reason, the project started in 1987. The main objectives of the project are to establish a nuclear computer code management system, to secure software reliability, and to develop nuclear computer code packages. The work performed this year comprised operating and maintaining the computer code information system for KAERI computer codes, developing the application tool AUTO-i for computing the first and second moments of inertia of polygons and circles, and researching nuclear computer code conversion between different machines. To better support the availability and reliability of the nuclear codes, assistance from the users of the codes is required. Lastly, for easy reference, we present a list of code names and information on the codes that were introduced or developed during this year. (Author)
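    The task assigned to AUTO-i, first and second moments of inertia of a polygon, has a standard closed form via shoelace-style edge sums; AUTO-i's internals are not described above, so the following is simply that textbook computation.

```python
def polygon_moments(pts):
    """Area, first moments (Sx, Sy) and second moments (Ixx, Iyy) of a simple
    polygon about the coordinate axes, via the standard shoelace sums.
    pts: list of (x, y) vertices in counter-clockwise order."""
    A = Sx = Sy = Ixx = Iyy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        c = x0 * y1 - x1 * y0                      # edge cross term
        A += c / 2.0
        Sx += (y0 + y1) * c / 6.0                  # integral of y dA
        Sy += (x0 + x1) * c / 6.0                  # integral of x dA
        Ixx += (y0**2 + y0 * y1 + y1**2) * c / 12.0
        Iyy += (x0**2 + x0 * x1 + x1**2) * c / 12.0
    return A, Sx, Sy, Ixx, Iyy

# Unit square: A=1, Sx=Sy=0.5, Ixx=Iyy=1/3.
print(polygon_moments([(0, 0), (1, 0), (1, 1), (0, 1)]))
```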

  17. Satellite Power System (SPS) mapping of exclusion areas for rectenna sites

    Science.gov (United States)

    Blackburn, J. B., Jr.; Bavinger, B. A.

    1978-01-01

    The areas of the United States that were not available as potential sites for the receiving antennas (rectennas) that are an integral part of the Satellite Power System concept are presented. Thirty-six variables with the potential to exclude a rectenna were mapped and coded in a computer. Some of these variables definitely exclude a rectenna from locating within their area of spatial influence, while other variables only potentially exclude it. The maps of these variables were assembled from existing data and were mapped on a grid system.

  18. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS institutes and national communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data analysis tasks by geographically close or local scientific groups, and which usually comprise a range of architectures without Grid middleware. Therefore a substantial part of the ATLAS monitoring tools, which make use of Grid middleware, cannot be used for a large fraction of Tier3 sites. The presentation will describe the T3mon project, which aims to develop a software suite for monitoring the Tier3 sites, both from the perspective of the local site administrator and that of the ATLAS VO, thereby enabling a global view of the contribution from Tier3 sites to the ATLAS computing activities. Special attention in p...

  19. Disposal Site Information Management System

    International Nuclear Information System (INIS)

    Larson, R.A.; Jouse, C.A.; Esparza, V.

    1986-01-01

    An information management system for low-level waste shipped for disposal has been developed for the Nuclear Regulatory Commission (NRC). The Disposal Site Information Management System (DSIMS) was developed to provide a user-friendly computerized system, accessible through NRC on a nationwide network, for persons needing information to facilitate management decisions. The system has been developed on NOMAD VP/CSS, and the data obtained from the operators of commercial disposal sites are transferred to DSIMS semiannually. Capabilities are provided in DSIMS to allow the user to select and sort data for use in analyzing and reporting low-level waste. The system also provides means for describing sources and quantities of low-level waste exceeding the limits of NRC 10 CFR Part 61 Class C. Information contained in DSIMS is intended to aid in future waste projections and economic analysis for new disposal sites.

  20. Design characteristics of EU-APR1400 on-site power system

    International Nuclear Information System (INIS)

    Kim, D.H.; Kim, Y.S.; Kim, Y.S.

    2014-01-01

    In the global nuclear market, US and European design requirements have largely been used to develop the designs of nuclear power plants (NPPs). The APR1400 design was developed on the basis of the US regulatory guides and the EPRI utility requirements document (URD). In order to enlarge the export market for the APR1400, KHNP (Korea Hydro & Nuclear Power Co., Ltd) has developed the EU-APR1400 design, which complies with the European nuclear design requirements. In this paper, the design characteristics of the EU-APR1400 on-site power system, developed according to the European design requirements for electrical power systems, are described. The main European design requirements for the electrical power system involve a 50 Hz rated frequency, 400/110 kV grid voltages, the application of diversity and redundancy, and so on. The EU-APR1400 on-site power system has been developed on the basis of these requirements. The representative design features include the redundancy, diversity and independence design, the emergency power supply design, the design for providing electrical power to the dedicated severe accident systems, and the design for European grid requirements. (author)

  1. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison is made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model is described which successfully fits a wide range of assay data and which can be run on a minicomputer. This sophisticated model also provides estimates of binding-site concentrations and the values of the respective equilibrium constants; the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ)
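    A multiple binding-site model of this kind is typically a sum of law-of-mass-action terms, one per site, fitted by nonlinear least squares. A sketch with a two-site model and synthetic data; the paper's exact formulation is not given above, so parameter names and values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def bound(free, bmax1, k1, bmax2, k2):
    """Two-binding-site mass-action model: total bound ligand as a function of
    free ligand, with site capacities bmax_i and dissociation constants k_i."""
    return bmax1 * free / (k1 + free) + bmax2 * free / (k2 + free)

# Synthetic assay data from assumed true parameters, with a little noise.
rng = np.random.default_rng(0)
free = np.logspace(-2, 2, 25)
true = (1.0, 0.1, 0.5, 5.0)
y = bound(free, *true) + rng.normal(0, 0.01, free.size)

params, _ = curve_fit(bound, free, y, p0=(0.5, 1.0, 0.5, 1.0),
                      bounds=(0, np.inf))
print(np.round(params, 3))   # should be close to (1.0, 0.1, 0.5, 5.0)
```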

  2. VIBRO-DIAGNOSTIC SYSTEM ON BASIS OF PERSONAL COMPUTER

    Directory of Open Access Journals (Sweden)

    V. V. Bokut

    2007-01-01

    A system for vibration diagnostics based on a mobile computer and a two-channel microprocessor measuring device has been developed. Use of the fast Hartley-Fourier transform makes it possible to increase the frequency resolution up to 25,000 spectral lines, which makes the system suitable for a wide range of applications.
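    The discrete Hartley transform behind such a fast Hartley-Fourier analysis can be obtained from an ordinary FFT, since its cas kernel (cos + sin) is the real minus the imaginary part of the Fourier kernel. A minimal sketch:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT: H[k] = Re(F[k]) - Im(F[k]).
    A fast Hartley transform computes the same result directly in real
    arithmetic, which is its practical appeal on small processors."""
    F = np.fft.fft(x)
    return F.real - F.imag

# The DHT is its own inverse up to a factor of N.
x = np.random.default_rng(0).normal(size=8)
print(np.allclose(dht(dht(x)) / len(x), x))   # True
```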

  3. Effects of computing time delay on real-time control systems

    Science.gov (United States)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
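    The "delay problem" can be reproduced in a few lines: compute the control output from the current sample but apply it some number of sampling periods later. The sketch below does this for a generic first-order plant under PID control; the plant, gains, and delay values are illustrative assumptions, not the paper's robot-trajectory setup.

```python
import numpy as np

def simulate(delay_steps, n=200, dt=0.01):
    """Step response of a first-order plant x' = -x + u under PID control,
    where the control computed at step k is applied delay_steps samples
    later (a crude stand-in for computing time delay)."""
    kp, ki, kd = 8.0, 4.0, 0.1
    x, integ, prev_err = 0.0, 0.0, 1.0
    pending = [0.0] * delay_steps              # queue of computed outputs
    errs = []
    for _ in range(n):
        err = 1.0 - x                          # unit setpoint
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        pending.append(u)
        u_applied = pending.pop(0)             # acts delay_steps samples late
        x += dt * (-x + u_applied)             # forward-Euler plant update
        errs.append(abs(err))
    return np.mean(errs)

for d in (0, 5, 20):
    print(f"delay={d} samples, mean |error| = {simulate(d):.4f}")
```

    Increasing the delay visibly degrades tracking, which is the qualitative effect the paper quantifies for its two controllers.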

  4. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    This paper reports on DOE Order 5637.1, Classified Computer Security, which requires regular reviews of the computer security activities for an ADP system and for a site. Based on experience gained in the Los Alamos computer security program through interactions with DOE facilities, the authors have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining the goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or a specific ADP system.

  5. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by the HammerCloud, to automatic exclusion from production or analysis activities.

  6. A dynamic system for ATLAS software installation on OSG grid sites

    International Nuclear Information System (INIS)

    Zhao, X; Maeno, T; Wenaus, T; Leuhring, F; Youssef, S; Brunelle, J; De Salvo, A; Thompson, A S

    2010-01-01

    A dynamic and reliable system for installing the ATLAS software releases on Grid sites is crucial to guarantee the timely and smooth start of ATLAS production and reduce its failure rate. In this paper, we discuss the issues encountered in the previous software installation system, and introduce the new approach, which is built upon the new development in the areas of the ATLAS workload management system (PanDA), and software package management system (pacman). It is also designed to integrate with the EGEE ATLAS software installation framework. In the new system, ATLAS software releases are packaged as pacball, a uniquely identifiable and reproducible self-installing data file. The distribution of pacballs to remote sites is managed by ATLAS data management system (DQ2) and PanDA server. The installation on remote sites is automatically triggered by the PanDA pilot jobs. The installation job payload connects to a central ATLAS software installation portal, making the information of installation status easily accessible across OSG and EGEE Grids. The issues encountered in running the new system in production, and our future plan for improvement, will also be discussed.

  7. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems, detailing the practical challenges encountered and the solutions adopted.

  8. Total-System Performance Assessment for the Yucca Mountain Site

    International Nuclear Information System (INIS)

    Wilson, M.L.

    2001-01-01

    Yucca Mountain, Nevada, is under consideration as a potential site for a repository for high-level radioactive waste. Total-system performance-assessment simulations are performed to evaluate the safety of the site. Features, events, and processes have been systematically evaluated to determine which ones are significant to the safety assessment. Computer models of the disposal system have been developed within a probabilistic framework, including both engineered and natural components. Selected results are presented for three different total-system simulations, and the behavior of the disposal system is discussed. The results show that risk is dominated by igneous activity at early times, because the robust waste-package design prevents significant nominal (non-disruptive) releases for tens of thousands of years or longer. The uncertainty in the nominal performance is dominated by uncertainties related to waste-package corrosion at early times and by uncertainties in the natural system, most significantly infiltration, at late times.

  9. Parallel Computer System for 3D Visualization Stereo on GPU

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on the authors' modified ray tracing method for fast search of ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions in the GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The average acceleration achieved in the multithreaded implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. A study of the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) grid on computation speed shows the importance of their correct selection. The experimental estimates obtained can be significantly improved by newer GPUs with a larger number of processing cores and multiprocessors, as well as an optimized configuration of the computational CUDA grid.
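    The core of such a synthesis is tracing one ray per pixel from each of two horizontally offset cameras; on a GPU each pixel maps to a CUDA thread, but the geometry is identical. A CPU sketch with an assumed one-sphere scene, producing a left/right depth-image pair:

```python
import numpy as np

def render(eye_x, width=64, height=48):
    """Depth image of a unit sphere at the origin, one ray per pixel, from a
    pinhole camera at (eye_x, 0, -3) looking down +z. A stereo pair is two
    renders with the camera shifted along x; a GPU version assigns one
    thread per pixel instead of using numpy broadcasting."""
    px, py = np.meshgrid(np.linspace(-1, 1, width),
                         np.linspace(-0.75, 0.75, height))
    d = np.stack([px, py, np.ones_like(px)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    o = np.array([eye_x, 0.0, -3.0])
    b = d @ o                                  # per-ray o.d (d is normalized)
    c = o @ o - 1.0                            # unit sphere at the origin
    disc = b * b - c                           # ray-sphere discriminant
    t = -b - np.sqrt(np.where(disc > 0, disc, np.nan))
    return np.where(disc > 0, t, np.inf)       # hit distance, inf = miss

left, right = render(-0.05), render(+0.05)     # interocular offset
print("depth difference at centre pixel:", left[24, 32] - right[24, 32])
```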

  10. SAMO [Sistema de Apoyo Mechanizado a la Operacion]: An operational aids computer system

    International Nuclear Information System (INIS)

    Stormer, T.D.; Laflor, E.V.

    1989-01-01

    SAMO (Sistema de Apoyo Mechanizado a la Operacion) is a sensor-driven, computer-based, graphic display system designed by Westinghouse to aid the A. N. Asco operations staff during all modes of plant operations, including emergencies. The SAMO system is being implemented in the A. N. Asco plant in two phases that coincide with consecutive refueling outages for each of two nuclear units at the Asco site. Phase 1 of the SAMO system implements the following functions: (1) emergency operational aids, (2) postaccident monitoring, (3) plant graphics display, (4) high-speed transient analysis recording, (5) historical data collection, storage, and retrieval, (6) sequence of events, and (7) posttrip review. During phase 2 of the SAMO project, the current plant computer will be removed and the functions now performed by the plant computer will be performed by the SAMO system. In addition, the following functions will be implemented: (1) normal and simple transients operational aid, (2) plant information graphics; and (3) real-time radiological off-site dose calculation

  11. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    DOE Order 5637.1, ''Classified Computer Security,'' requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, we have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system. 1 tab

  12. An Annotated and Cross-Referenced Bibliography on Computer Security and Access Control in Computer Systems.

    Science.gov (United States)

    Bergart, Jeffrey G.; And Others

    This paper represents a careful study of published works on computer security and access control in computer systems. The study includes a selective annotated bibliography of some eighty-five important published results in the field and, based on these papers, analyzes the state of the art. In annotating these works, the authors try to be…

  13. WIPP conceptual design report. Addendum M. Computer system and data processing requirements for Waste Isolation Pilot Plant (WIPP)

    International Nuclear Information System (INIS)

    Young, R.

    1977-06-01

    Data-processing requirements for the Waste Isolation Pilot Plant (WIPP) dictate a computing system that can provide a wide spectrum of data-processing needs on a 24-hour-day basis over an indeterminate time. A computer system is defined as a computer or computers complete with all peripheral equipment and extensive software and communications capabilities, including an operating system, compilers, assemblers, loaders, etc., all applicable to real-world problems. The computing system must be extremely reliable and easily expandable in both hardware and software to provide for future capabilities with a minimum impact on the existing applications software and operating system. The computer manufacturer or WIPP operating contractor must provide continuous on-site computer maintenance (maintain an adequate inventory of spare components and parts to guarantee a minimum mean-time-to-repair of any portion of the computer system). The computer operating system or monitor must process a wide mix of application programs and languages, yet be readily changeable to obtain maximum computer usage. The WIPP computing system must handle three general types of data processing requirements: batch, interactive, and real-time. These are discussed. Data bases, data collection systems, scientific and business systems, building and facilities, remote terminals and locations, and cables are also discussed

  14. Man-machine interfaces analysis system based on computer simulation

    International Nuclear Information System (INIS)

    Chen Xiaoming; Gao Zuying; Zhou Zhiwei; Zhao Bingquan

    2004-01-01

    The paper describes a software assessment system, Dynamic Interaction Analysis Support (DIAS), based on computer simulation technology for the man-machine interfaces (MMI) of a control room. It employs a computer to simulate operating procedures on the man-machine interfaces of a control room, provides quantified assessment, and at the same time analyzes the operational error rate of operators by means of human error rate prediction techniques. Problems with the placement of man-machine interfaces in a control room and with the arrangement of instruments can be detected from the simulation results. The DIAS system can provide good technical support for the design and improvement of the man-machine interfaces of the main control room of a nuclear power plant.

  15. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  16. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  17. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  18. An intelligent and secure system for predicting and preventing Zika virus outbreak using Fog computing

    Science.gov (United States)

    Sareen, Sanjay; Gupta, Sunil Kumar; Sood, Sandeep K.

    2017-10-01

    Zika virus is a mosquito-borne disease that spreads very quickly in different parts of the world. In this article, we propose a system to prevent and control the spread of Zika virus disease using an integration of fog computing, cloud computing, mobile phones and Internet of things (IoT)-based sensor devices. Fog computing is used as an intermediary layer between the cloud and end users to reduce the latency time and extra communication cost that are usually high in cloud-based systems. A fuzzy k-nearest neighbour classifier is used to diagnose possibly infected users, and the Google Maps web service is used to provide global positioning system (GPS)-based risk assessment to prevent an outbreak. Each Zika virus (ZikaV)-infected user, mosquito-dense site and breeding site is represented on the Google map, which helps the government healthcare authorities to control such risk-prone areas effectively and efficiently. The proposed system is deployed on the Amazon EC2 cloud to evaluate its performance and accuracy using a data set for 2 million users. Our system provides a high accuracy of 94.5% for the initial diagnosis of different users according to their symptoms and appropriate GPS-based risk assessment.
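    The fuzzy k-nearest-neighbour step can be sketched as a distance-weighted average of neighbour memberships (the classic Keller formulation); the symptom features and training data below are invented for illustration and are not from the article.

```python
import numpy as np

def fuzzy_knn(query, X, memberships, k=3, m=2.0):
    """Fuzzy k-NN class memberships: the query's membership in each class is a
    distance-weighted average of its k nearest labelled neighbours'
    memberships, with fuzzifier m controlling the weighting."""
    d = np.linalg.norm(X - query, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-9) ** (2.0 / (m - 1.0))
    return (w[:, None] * memberships[nn]).sum(axis=0) / w.sum()

# Hypothetical symptom vectors (fever, rash, joint pain) scaled to [0, 1],
# with crisp training memberships in classes (infected, not infected).
X = np.array([[0.9, 0.8, 0.7], [0.8, 0.9, 0.6],
              [0.1, 0.0, 0.2], [0.2, 0.1, 0.0]])
M = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
print(fuzzy_knn(np.array([0.7, 0.6, 0.5]), X, M))  # leans towards "infected"
```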

  19. Upgrade of the computer-based information systems on USNRC simulators

    International Nuclear Information System (INIS)

    Griffin, J.I.

    1998-01-01

    In late 1995, the U.S. Nuclear Regulatory Commission (USNRC) began a project to upgrade the computer-based information systems on its BWR/6 and B&W Simulators. The existing display generation hardware was very old and in need of replacement due to difficulty in obtaining spare parts and technical support. In addition, the display systems currently in use each require a SEL 32/55 computer system, which is also obsolete, running the Real Time Monitor (RTM) operating system. An upgrade of the display hardware and display generation systems not only solves the problem of obsolescence of that equipment but also allows removal of the 32/55 systems, which are used only to support the existing display generation systems. Shortly after purchase of the replacement equipment, it was learned that the vendor was no longer going to support the methodology. Instead of implementing an unsupported concept, it was decided to implement the display systems upgrades using the Picasso-3 UIMS (User Interface Management System) and the purchased hardware. This paper describes the upgraded display systems for the BWR/6 and B&W Simulators, including the design concept, display development, hardware requirements, the simulator interface software, and problems encountered. (author)

  20. An expert fitness diagnosis system based on elastic cloud computing.

    Science.gov (United States)

    Tseng, Kevin C; Wu, Chia-Chuan

    2014-01-01

    This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level based on supervised machine learning techniques. The system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to capture tightly the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service.
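    The elastic allocation described, an exponential moving average of past observations combined with a Poisson load assumption, can be sketched as follows; the smoothing factor, SLA quantile, per-node capacity, and request history are assumptions for illustration.

```python
from scipy.stats import poisson

def provision(requests, alpha=0.3, sla=0.99, per_node=50):
    """Smooth past request counts with an exponential moving average, then
    provision enough capacity to cover the sla-quantile of a Poisson load
    whose mean is the smoothed estimate."""
    ema = requests[0]
    for r in requests[1:]:
        ema = alpha * r + (1 - alpha) * ema    # smoothed demand estimate
    peak = poisson.ppf(sla, mu=ema)            # e.g. 99th-percentile load
    nodes = int(-(-peak // per_node))          # ceiling division
    return ema, peak, nodes

history = [120, 135, 160, 150, 170, 210, 190]  # requests per interval (assumed)
print(provision(history))
```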

  1. An Expert Fitness Diagnosis System Based on Elastic Cloud Computing

    Directory of Open Access Journals (Sweden)

    Kevin C. Tseng

    2014-01-01

    This paper presents an expert diagnosis system based on cloud computing. It classifies a user’s fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user’s physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to capture tightly the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service.

  2. Emotion-Aware Assistive System for Humanistic Care Based on the Orange Computing Concept

    Directory of Open Access Journals (Sweden)

    Jhing-Fa Wang

    2012-01-01

    Mental care has become crucial with the rapid growth of economy and technology. However, recent movements, such as green technologies, place more emphasis on environmental issues than on mental care. Therefore, this study presents an emerging technology called orange computing for mental care applications. Orange computing refers to health, happiness, and physiopsychological care computing, which focuses on designing algorithms and systems for enhancing body and mind balance. The representative color of orange computing originates from a harmonic fusion of passion, love, happiness, and warmth. A case study on a human-machine interactive and assistive system for emotion care was conducted in this study to demonstrate the concept of orange computing. The system can detect emotional states of users by analyzing their facial expressions, emotional speech, and laughter in a ubiquitous environment. In addition, the system can provide corresponding feedback to users according to the results. Experimental results show that the system can achieve an accurate audiovisual recognition rate of 81.8% on average, thereby demonstrating the feasibility of the system. Compared with traditional questionnaire-based approaches, the proposed system can offer real-time analysis of emotional status more efficiently.

  3. Implementing ‘Site BIM’

    DEFF Research Database (Denmark)

    Davies, Richard; Harty, Chris

    2013-01-01

    Numerous Building Information Modelling (BIM) tools are well established and potentially beneficial in certain uses. However, issues of adoption and implementation persist, particularly for on-site use of BIM tools in the construction phase. We describe an empirical case-study of the implementation...... of an innovative ‘Site BIM’ system on a major hospital construction project. The main contractor on the project developed BIM-enabled tools to allow site workers using mobile tablet personal computers to access design information and to capture work quality and progress data on-site. Accounts show that ‘Site BIM...

  4. Realization of the computation process in the M-6000 computer for physical process automatization systems basing on CAMAC system

    International Nuclear Information System (INIS)

    Antonichev, G.M.; Vesenev, V.A.; Volkov, A.S.; Maslov, V.V.; Shilkin, I.P.; Bespalova, T.V.; Golutvin, I.A.; Nevskaya, N.A.

    1977-01-01

    Software for physical experiments using the CAMAC devices and the M-6000 computer is further developed. The construction principles and operation of the data acquisition system and the system generator are described. Using the generator for the data acquisition system, the experimenter realizes the logic for data exchange between the CAMAC devices and the computer.

  5. International Conference on Emerging Technologies for Information Systems, Computing, and Management

    CERN Document Server

    Ma, Tinghuai

    2013-01-01

    This book aims to examine innovation in the fields of information technology, software engineering, industrial engineering, and management engineering. Topics covered in this publication include: Information System Security, Privacy, Quality Assurance, High-Performance Computing, and Information System Management and Integration. The book presents papers from the Second International Conference on Emerging Technologies for Information Systems, Computing, and Management (ICM2012), which was held on December 1 to 2, 2012 in Hangzhou, China.

  6. The effect of switch control site on computer skills of infants and toddlers.

    Science.gov (United States)

    Glickman, L; Deitz, J; Anson, D; Stewart, K

    1996-01-01

    The purpose of this study was to determine whether switch control site (hand vs. head) affects the age at which children can successfully activate a computer to play a cause-and-effect game. The sample consisted of 72 participants randomly divided into two groups (head switch and hand switch), with stratification for gender and age (9-11 months, 12-14 months, 15-17 months). All participants were typically developing. After a maximum of 5 min of training, each participant was given five opportunities to activate a Jelly Bean switch to play a computer game. Competency was defined as four to five successful switch activations. Most participants in the 9-month to 11-month age group could successfully use a hand switch to activate a computer, and for the 15-month to 17-month age group, 100% of the participants met with success. By contrast, in the head switch condition, approximately one third of the participants in each of the three age ranges were successful in activating the computer to play a cause-and-effect game. The findings from this study provide developmental guidelines for using switches (head vs. hand) to activate computers to play cause-and-effect games and suggest that the clinician may consider introducing basic computer and switch skills to children as young as 9 months of age. However, the clinician is cautioned that the head switch may be more difficult to master than the hand switch and that additional research involving children with motor impairments is needed.

  7. Site systems engineering: Systems engineering management plan

    Energy Technology Data Exchange (ETDEWEB)

    Grygiel, M.L. [Westinghouse Hanford Co., Richland, WA (United States)]

    1996-05-03

    The Site Systems Engineering Management Plan (SEMP) is the Westinghouse Hanford Company (WHC) implementation document for the Hanford Site Systems Engineering Policy (RLPD 430.1) and the Systems Engineering Criteria Document and Implementing Directive (RLID 430.1). These documents define the US Department of Energy (DOE), Richland Operations Office (RL) processes and products to be used at Hanford to implement the systems engineering process at the site level. This SEMP describes the products being provided by the site systems engineering activity in fiscal year (FY) 1996 and the associated schedule. It also includes the procedural approach being taken by the site-level systems engineering activity in the development of these products and the intended uses for the products in the integrated planning process in response to the DOE policy and implementing directives. The scope of the systems engineering process is to define a set of activities and products to be used at the site level during FY 1996 or until the successful Project Hanford Management Contractor (PHMC) is onsite as a result of contract award from Request For Proposal DE-RP06-96RL13200. Following installation of the new contractor, a long-term set of systems engineering procedures and products will be defined for management of the Hanford Project. The extent to which each project applies the systems engineering process and the specific tools used are determined by the project's management.

  8. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  9. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  10. Computer-aided diagnosis workstation and telemedicine network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2009-02-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. Moreover, Japan has a shortage of doctors who can diagnose such medical images. To overcome these problems, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using a helical CT scanner for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network based on a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used on site in telemedicine makes file encryption and login verification effective, so that patients' private information is protected. The screen of the Web medical image conference system can be shared from two or more web conference terminals at the same time, and opinions can be exchanged using a camera and a microphone connected to the workstation. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation and our telemedicine network system, can increase diagnostic speed, diagnostic accuracy and

  11. On-site voltage measurement with capacitive sensors on high voltage systems

    NARCIS (Netherlands)

    Wu, L.; Wouters, P.A.A.F.; Heesch, van E.J.M.; Steennis, E.F.

    2011-01-01

    In Extra/High-Voltage (EHV/HV) power systems, over-voltages occur e.g. due to transients or resonances. At places where no conventional voltage measurement devices can be installed, on-site measurement of these occurrences requires preferably non intrusive sensors, which can be installed with little

  12. Computer Vision Photogrammetry for Underwater Archaeological Site Recording in a Low-Visibility Environment

    Science.gov (United States)

    Van Damme, T.

    2015-04-01

    Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is then assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the 'traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.

  13. Internet resources for dentistry: computer, Internet, reference, and sites for enhancing personal productivity of the dental professional.

    Science.gov (United States)

    Guest, G F

    2000-08-15

    At the onset of the new millennium, the Internet has become the new standard means of distributing information. In the last two to three years there has been an explosion of e-commerce, with hundreds of new web sites being created every minute. For most corporate entities, a web site is as essential as the phone book listing used to be. Twenty years ago, technologists directed how computer-based systems were utilized. Now it is the end users of personal computers who have gained expertise and drive the functionality of software applications. The computer, initially invented for mathematical functions, has transitioned from this role to an integrated communications device that provides the portal to the digital world. The Web needs to be used by healthcare professionals, not only for professional activities, but also for instant access to information and services "just when they need it." This will facilitate the longitudinal use of information as society continues to gain better information access skills. With the demand for current "just in time" information and the standards established by Internet protocols, reference sources of information may be maintained in dynamic fashion. News services have been available through the Internet for several years, but now reference materials such as online journals and digital textbooks have become available and have the potential to change the traditional publishing industry. The pace of change should make us consider Will Rogers' advice, "It isn't good enough to be moving in the right direction. If you are not moving fast enough, you can still get run over!" The intent of this article is to complement previous articles on Internet Resources published in this journal, by presenting information about web sites that present information on computer and Internet technologies, reference materials, news information, and information that lets us improve personal productivity. Neither the author, nor the Journal endorses any of the

  14. Computer-based supervisory control and data acquisition system for the radioactive waste evaporator

    International Nuclear Information System (INIS)

    Pope, N.G.; Schreiber, S.B.; Yarbro, S.L.; Gomez, B.G.; Nekimken, H.L.; Sanchez, D.E.; Bibeau, R.A.; Macdonald, J.M.

    1994-12-01

    The evaporator process at TA-55 reduces the amount of transuranic liquid radioactive waste by separating radioactive salts from relatively low-level radioactive nitric acid solution. A computer-based supervisory control and data acquisition (SCADA) system has been installed on the process that allows the operators to easily interface with process equipment. Individual single-loop controllers in the SCADA system allow more precise process operation with less human intervention. With this system, process data can be archived in computer files for later analysis. Data are distributed throughout the TA-55 site through a local area network so that real-time process conditions can be monitored at multiple locations. The entire system has been built using commercially available hardware and software components.

  15. Developments in the JRodos decision support system for off-site nuclear emergency management and rehabilitation

    Energy Technology Data Exchange (ETDEWEB)

    Landman, Claudia [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany); Pro-Science GmbH, Ettlingen (Germany)]; Raskob, Wolfgang; Trybushnyi, Dmytro [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany)]

    2016-07-01

    JRodos is a non-commercial computer-based decision support system for nuclear accidents. The simulation models for assessing radiological and other consequences, and the system features and components, allow real-time operation for off-site emergency management as well as use as a tool for preparing exercises and pre-planning countermeasures. There is an active user community that influences further developments.

  16. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real-world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds, and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  17. Characterizing chemical systems with on-line computers and graphics

    International Nuclear Information System (INIS)

    Frazer, J.W.; Rigdon, L.P.; Brand, H.R.; Pomernacki, C.L.

    1979-01-01

    Incorporating computers and graphics on-line with chemical experiments and processes opens up new opportunities for the study and control of complex systems. Systems having many variables can be characterized even when the variable interactions are nonlinear and the system cannot a priori be represented by numerical methods and models. That is, large sets of accurate data can be rapidly acquired; modeling and graphic techniques can then be used to obtain a partial interpretation and to design further experimentation. The experimenter can thus comparatively quickly iterate between experimentation and modeling to obtain a final solution. We have designed and characterized a versatile computer-controlled apparatus for chemical research, which incorporates on-line instrumentation and graphics. It can be used to determine the mechanism of enzyme-induced reactions or to optimize analytical methods. The apparatus can also be operated as a pilot plant to design control strategies. On-line graphics were used to display conventional plots used by biochemists and three-dimensional response-surface plots

  18. Software quality assurance on the Yucca Mountain Site Characterization Project

    International Nuclear Information System (INIS)

    Matras, J.R.

    1993-01-01

    The Yucca Mountain Site Characterization Project (YMP) has been involved over the years in the continuing struggle with establishing acceptable Software Quality Assurance (SQA) requirements for the development, modification, and acquisition of computer programs used to support the Mined Geologic Disposal System. These computer programs will be used to produce or manipulate data used directly in site characterization, design, analysis, performance assessment, and operation of repository structures, systems, and components. Scientists and engineers working on the project have claimed that the SQA requirements adopted by the project are too restrictive to allow them to perform their work. This paper will identify the source of the original SQA requirements adopted by the project. It will delineate the approach used by the project to identify concerns voiced by project engineers and scientists regarding the original SQA requirements. It will conclude with a discussion of methods used to address these problems in the rewrite of the original SQA requirements

  19. Users guide for evaluating alternative fixed-site physical protection systems using ''FESEM''

    International Nuclear Information System (INIS)

    Chapman, L.D.; Kinemond, G.A.; Sasser, D.W.

    1977-11-01

    The objective of this manual is to provide a guide for evaluating physical protection systems using the Forcible Entry Safeguards Effectiveness Model (FESEM). It is intended for use by personnel involved in evaluating fixed-site security systems, or managers involved in making decisions related to the modification of existing protection systems or the implementation of new systems. This users' guide has been written for an audience which has some previous computer experience

  20. Science and Technology Resources on the Internet: Computer Security.

    Science.gov (United States)

    Kinkus, Jane F.

    2002-01-01

    Discusses issues related to computer security, including confidentiality, integrity, and authentication or availability; and presents a selected list of Web sites that cover the basic issues of computer security under subject headings that include ethics, privacy, kids, antivirus, policies, cryptography, operating system security, and biometrics.…

  1. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    International Nuclear Information System (INIS)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on the models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  2. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on the models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  3. Monitoring system of multiple fire fighting based on computer vision

    Science.gov (United States)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can locate the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment, and system calibration are described in detail; the design of the relevant hardware and software is also introduced. At the same time, the principle and process of color detection and image processing are given as well. The system ran well in tests, and it has high reliability, low cost, and easy node expansion, which gives it a bright prospect for application and popularization.
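
    As a rough illustration of the colour-detection step described above, the following sketch thresholds flame-like colours in HSV space and returns the centroid a nozzle could be aimed at. The HSV bounds and the morphological clean-up are illustrative assumptions; the paper's actual thresholds and calibration are not given in the record.

```python
# Minimal colour-based fire localization sketch (assumed thresholds).
import cv2
import numpy as np


def find_fire_centroid(frame_bgr: np.ndarray):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Flame-like pixels: red/orange/yellow hues with strong saturation/brightness.
    mask = cv2.inRange(hsv, np.array([0, 120, 180]), np.array([35, 255, 255]))
    # Remove small speckle noise before computing the centroid.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no candidate fire region found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid
```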

  4. Site report for the office information management conference

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.P.

    1989-12-31

    The charter of the End User Services Section is to plan and support office information systems for Savannah River Site organizations and be the first point of contact for users of Information Resource Management Department computer services. This includes personal workstation procurement, electronic mail, computer aided design, operations analysis, and access to information systems both on and off site. The mission also includes the training and support of personnel in the effective use of the new and existing systems.

  5. Site report for the office information management conference

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.P.

    1989-01-01

    The charter of the End User Services Section is to plan and support office information systems for Savannah River Site organizations and be the first point of contact for users of Information Resource Management Department computer services. This includes personal workstation procurement, electronic mail, computer aided design, operations analysis, and access to information systems both on and off site. The mission also includes the training and support of personnel in the effective use of the new and existing systems.

  6. Reactive wavepacket dynamics for four atom systems on scalable parallel computers

    International Nuclear Information System (INIS)

    Goldfield, E.M.

    1994-01-01

    While time-dependent quantum mechanics has been successfully applied to many three-atom systems, it was nevertheless a computational challenge to use wavepacket methods to study four-atom systems, systems with several heavy atoms, and systems with deep potential wells. S.K. Gray and the author are studying the reaction OH + CO ↔ (HOCO) ↔ H + CO2, a difficult reaction by all the above criteria. Memory considerations alone made it impossible to use a single IBM RS/6000 workstation to study a four-degree-of-freedom model of this system. They have developed a scalable parallel wavepacket code for the IBM SP1 and have run it on the SP1 at Argonne and at the Cornell Theory Center. The wavepacket, defined on a four-dimensional grid, is spread out among the processors. Two-dimensional FFTs are used to compute the kinetic energy operator acting on the wavepacket. Accomplishing this task, which is the computationally intensive part of the calculation, requires a global transpose of the data. This transpose is the only serious communication between processors. Since the problem is essentially data-parallel, communication is regular and load-balancing is excellent. But as the problem is moderately fine-grained and messages are long, the ratio of communication to computation is somewhat high, and they typically get about 55% of ideal speed-up
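
    The FFT-based evaluation of the kinetic-energy operator that dominates this computation can be illustrated on a single node. This is a serial toy sketch, not the authors' SP1 code: the grid size, mass, and units (hbar = m = 1) are arbitrary assumptions, and a comment marks where the global transpose would occur in the distributed version.

```python
# Apply the kinetic-energy operator to a 2D grid wavepacket via FFTs.
import numpy as np

n, dx = 64, 0.1
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # angular wavenumber grid
kx, ky = np.meshgrid(k, k, indexing="ij")
t_fac = 0.5 * (kx**2 + ky**2)                 # hbar^2 k^2 / (2m) with hbar=m=1

x = (np.arange(n) - n / 2) * dx
psi = np.exp(-np.add.outer(x**2, x**2))       # Gaussian wavepacket on the grid

# T|psi>: FFT -> multiply by kinetic factor -> inverse FFT.
# On a distributed-memory machine, the multi-dimensional FFT is where
# the global transpose (the only serious communication) takes place.
t_psi = np.fft.ifft2(t_fac * np.fft.fft2(psi))
```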

  7. Total System Performance Assessment for the Site Recommendation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2000-10-02

    As mandated in the Nuclear Waste Policy Act of 1982, the U.S. Department of Energy (DOE) has been investigating a candidate site at Yucca Mountain, Nevada, to determine whether it is suitable for development of the nation's first repository for permanent geologic disposal of spent nuclear fuel (SNF) and high-level radioactive waste (HLW). The Nuclear Waste Policy Amendments Act of 1987 directed that only Yucca Mountain be characterized to evaluate the site's suitability. Three main components of the DOE site characterization program are testing, design, and performance assessment. These program components consist of: (1) investigation of natural features and processes by analyzing data collected from field tests conducted above and below ground and from laboratory tests of rock, gas, and water samples; (2) design of a repository and waste packages tailored to the site features, supported by laboratory testing of candidate materials for waste packages and design-related testing in the underground tunnels where waste would be emplaced; and (3) quantitative estimates of the performance of the total repository system, over a range of possible conditions and for different repository configurations, by means of computer modeling techniques that are based on site and materials testing data and accepted principles of physics and chemistry. To date, DOE has completed and documented four major iterations of total system performance assessment (TSPA) for the Yucca Mountain site: TSPA-91 (Barnard et al. 1992), TSPA-93 (Wilson et al. 1994; CRWMS M and O 1994), TSPA-95 (CRWMS M and O 1995), and the Total System Performance Assessment-Viability Assessment (TSPA-VA) (DOE 1998a, Volume 3). Each successive TSPA iteration has advanced the technical understanding of the performance attributes of the natural features and processes and enhanced engineering designs. The next major iteration of TSPA is to be conducted in support of the next major programmatic milestone for the DOE, namely the

  8. The Impact of Cloud Computing on Information Systems Agility

    Directory of Open Access Journals (Sweden)

    Mohamed Sawas

    2015-09-01

    Full Text Available As businesses encounter frequent harsh economic conditions, concepts such as outsourcing, agile and lean management, change management, and cost reduction are constantly gaining more attention, because these concepts are all aimed at saving on budgets and facing unexpected changes. The latest technologies, like cloud computing, promise to turn IT, which has always been viewed as a cost centre, into a source of saving money and of bringing flexibility and agility to the business. The purpose of this paper is to first compile a set of attributes that govern the agility benefits added to information systems by cloud computing and then develop a survey-based instrument to measure these agility benefits. Our research analysis employs non-probability sampling based on a combination of convenience and judgment. This approach was used to obtain a representative sample of participants from potential companies belonging to various industries such as oil & gas, banking, private, government and semi-governmental organizations. This research will enable decision makers to measure agility enhancements and hence compare the agility of Information Systems before and after deploying cloud computing.

  9. Automatic data acquisition system of environmental radiation monitor with a personal computer

    International Nuclear Information System (INIS)

    Ohkubo, Tohru; Nakamura, Takashi.

    1984-05-01

    An automatic data acquisition system for environmental radiation monitors was developed at low cost by using a PET personal computer. The count pulses from eight monitors installed at four site boundaries were transmitted to a radiation control room by a signal transmission device and analyzed by the computer, via a 12-channel scaler and a PET-CAMAC interface, for graphic display and printing. (author)

  10. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  11. Archaeology Through Computational Linguistics: Inscription Statistics Predict Excavation Sites of Indus Valley Artifacts.

    Science.gov (United States)

    Recchia, Gabriel L; Louwerse, Max M

    2016-11-01

    Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically. Copyright © 2015 Cognitive Science Society, Inc.
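
    The underlying technique — embedding toponym co-occurrence statistics into a two-dimensional space so that relative positions mirror geography — can be sketched with classical multidimensional scaling. The toy co-occurrence counts below are invented; the paper's corpus, similarity measure, and calibration against sites of known provenance are not reproduced here.

```python
# Sketch: recover relative "map" positions from co-occurrence counts.
import numpy as np
from sklearn.manifold import MDS

cooc = np.array([[0, 9, 2],    # symmetric co-occurrence counts for
                 [9, 0, 3],    # three hypothetical site names
                 [2, 3, 0]], dtype=float)
sim = cooc / cooc.max()        # frequently co-mentioned -> more similar
dissim = 1.0 - sim
np.fill_diagonal(dissim, 0.0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
print(coords)  # relative positions, up to rotation/reflection/scale
```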

  12. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL]; Britt, Keith A. [ORNL]; Mohiyaddin, Fahd A. [ORNL]

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  13. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    International Nuclear Information System (INIS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Quast, Günter; Janczyk, Michael; Von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-01-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  14. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  15. Development of a computer-aided digital reactivity computer system for PWRs

    International Nuclear Information System (INIS)

    Chung, S.-K.; Sung, K.-Y.; Kim, D.; Cho, D.-Y.

    1993-01-01

    Reactor physics tests at initial startup and after reloading are performed to verify the nuclear design and to ensure safe operation. Two kinds of reactivity computers, analog and digital, have been widely used in pressurized water reactor (PWR) core physics tests. The test data of both reactivity computers are displayed only on a strip chart recorder, and these data are managed by hand, so the accuracy of the test results depends on operator expertise and experience. This paper describes the development of the computer-aided digital reactivity computer system (DRCS), which is enhanced by system management software and an improved system for the application of the PWR core physics test.
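
    The record does not detail the DRCS algorithm, but digital reactivity computers conventionally evaluate the inverse point-kinetics relation rho(t) = beta + (Lambda/n)·(dn/dt - sum_i lambda_i·C_i) while integrating the delayed-neutron precursor equations from the measured flux. The sketch below is a generic illustration of that relation, not the DRCS implementation: it uses a single delayed group and invented constants and flux data.

```python
# Generic inverse point-kinetics sketch (one delayed group, toy data).
import numpy as np

beta, lam, Lambda, dt = 0.0065, 0.08, 2.0e-5, 0.01
n_hist = 1.0 + 0.001 * np.arange(1000)      # synthetic measured flux ramp
C = beta * n_hist[0] / (lam * Lambda)       # equilibrium precursor level

for i in range(1, len(n_hist)):
    n, dndt = n_hist[i], (n_hist[i] - n_hist[i - 1]) / dt
    rho = beta + Lambda * (dndt - lam * C) / n   # inverse kinetics relation
    C += (beta * n / Lambda - lam * C) * dt      # advance precursor equation

print(f"final reactivity = {rho:.2e} (dk/k)")
```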

  16. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  17. Computer Programs for Obtaining and Analyzing Daily Mean Steamflow Data from the U.S. Geological Survey National Water Information System Web Site

    Science.gov (United States)

    Granato, Gregory E.

    2009-01-01

    The USGS maintains the National Water Information System (NWIS), a distributed network of computers and file servers used to store and retrieve hydrologic data (Mathey, 1998; U.S. Geological Survey, 2008). NWISWeb is an online version of this database that includes water data from more than 24,000 streamflow-gaging stations throughout the United States (U.S. Geological Survey, 2002, 2008). Information from NWISWeb is commonly used to characterize streamflows at gaged sites and to help predict streamflows at ungaged sites. Five computer programs were developed for obtaining and analyzing streamflow from the National Water Information System (NWISWeb). The programs were developed as part of a study by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, to develop a stochastic empirical loading and dilution model. The programs were developed because reliable, efficient, and repeatable methods are needed to access and process streamflow information and data. The first program is designed to facilitate the downloading and reformatting of NWISWeb streamflow data. The second program is designed to facilitate graphical analysis of streamflow data. The third program is designed to facilitate streamflow-record extension and augmentation to help develop long-term statistical estimates for sites with limited data. The fourth program is designed to facilitate statistical analysis of streamflow data. The fifth program is a preprocessor to create batch input files for the U.S. Environmental Protection Agency DFLOW3 program for calculating low-flow statistics. These computer programs were developed to facilitate the analysis of daily mean streamflow data for planning-level water-quality analyses but also are useful for many other applications pertaining to streamflow data and statistics. These programs and the associated documentation are included on the CD-ROM accompanying this report. This report and the appendixes on the
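
    As a present-day illustration of the first program's task, daily mean streamflow can be retrieved from NWISWeb through the public USGS daily-values web service. This is not the report's own program: the station number is an arbitrary example, and the JSON response is unpacked only to the depth shown.

```python
# Retrieve daily mean discharge from the USGS NWIS daily-values service.
import requests

resp = requests.get(
    "https://waterservices.usgs.gov/nwis/dv/",
    params={
        "format": "json",
        "sites": "01646500",        # example gaging-station ID
        "parameterCd": "00060",     # 00060 = discharge, cubic feet per second
        "startDT": "2020-01-01",
        "endDT": "2020-12-31",
    },
    timeout=30,
)
resp.raise_for_status()
series = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
flows = [(v["dateTime"], float(v["value"])) for v in series]
print(f"{len(flows)} daily values; first: {flows[0]}")
```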

  18. Greenhouse gas emissions from on-site wastewater treatment systems

    Science.gov (United States)

    Somlai-Haase, Celia; Knappe, Jan; Gill, Laurence

    2016-04-01

    Nearly one third of the Irish population relies on decentralized domestic wastewater treatment systems which involve the discharge of effluent into the soil via a percolation area (drain field). In such systems, wastewater from single households is initially treated on-site by a septic tank and an additional packaged secondary treatment unit, in which the influent organic matter is converted into carbon dioxide (CO2) and methane (CH4) by microbially mediated processes. The effluent from the tanks is released into the soil for further treatment in the unsaturated zone, where additional CO2 and CH4 are emitted to the atmosphere, as well as nitrous oxide (N2O) from the partial denitrification of nitrate. Hence, considering the large number of on-site systems in Ireland and internationally, these are potentially significant sources of greenhouse gas (GHG) emissions, and yet they have received almost no direct field measurement. Here we present the first attempt to quantify and characterize the production and emission of GHGs from a septic tank system serving a single house in County Westmeath, Ireland. We have sampled the water for dissolved CO2, CH4 and N2O and measured the gas flux from the water surface in the septic tank. We have also carried out long-term flux measurements of CO2 from the drain field, using an automated soil gas flux system (LI-8100A, Li-Cor®), covering a whole year semi-continuously. This has enabled the CO2 emissions from the unsaturated zone to be correlated against different meteorological parameters over an annual cycle. In addition, we have integrated an ultraportable GHG analyser (UGGA, Los Gatos Research Inc.) into the automated soil gas flux system to measure the CH4 flux. Further, manual sampling has also provided a better understanding of N2O emissions from the septic tank system.
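
    For context on how closed-chamber measurements like those described above are typically reduced to a flux, the sketch below fits the initial concentration rise in the chamber and converts the slope with the ideal-gas law, F = (dC/dt)·P·V/(R·T·A). This is standard post-processing, not the instrument vendor's software; the chamber geometry and the synthetic readings are assumptions.

```python
# Closed-chamber soil CO2 flux from the initial concentration slope.
import numpy as np

t = np.arange(0, 120, 10.0)                             # s since chamber closed
c = 400 + 0.05 * t + np.random.normal(0, 0.3, t.size)   # ppm CO2 (synthetic)

slope_ppm_s = np.polyfit(t, c, 1)[0]                    # dC/dt in ppm/s
# Assumed chamber parameters: pressure (Pa), volume (m3), area (m2), temp (K).
P, V, A, T, R = 101325.0, 0.004, 0.032, 293.15, 8.314

flux = slope_ppm_s * 1e-6 * P * V / (R * T * A)         # mol m-2 s-1
print(f"CO2 flux ~ {flux * 1e6:.2f} umol m-2 s-1")
```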

  19. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  20. Advanced topics in security computer system design

    International Nuclear Information System (INIS)

    Stachniak, D.E.; Lamb, W.R.

    1989-01-01

    The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features

  1. Computational prediction of muon stopping sites using ab initio random structure searching (AIRSS)

    Science.gov (United States)

    Liborio, Leandro; Sturniolo, Simone; Jochym, Dominik

    2018-04-01

    The stopping site of the muon in a muon-spin relaxation experiment is in general unknown. There are some techniques that can be used to guess the muon stopping site, but they often rely on approximations and are not generally applicable to all cases. In this work, we propose a purely theoretical method to predict muon stopping sites in crystalline materials from first principles. The method is based on a combination of ab initio calculations, random structure searching, and machine learning, and it has successfully predicted the MuT and MuBC stopping sites of muonium in Si, diamond, and Ge, as well as the muonium stopping site in LiF, without any recourse to experimental results. The method makes use of Soprano, a Python library developed to aid ab initio computational crystallography, that was publicly released and contains all the software tools necessary to reproduce our analysis.

  2. The use of modern on-site bioremediation systems to reduce crude oil contamination on oilfield properties

    International Nuclear Information System (INIS)

    Hildebrandt, W.W.; Wilson, S.B.

    1991-01-01

    Oil-field properties frequently have areas in which the soil has been degraded by crude oil. Soil contaminated in this manner is often considered either a hazardous waste or a designated waste under regulatory guidelines. As a result, there is often concern about the owner's and the financial institution's liabilities whenever oilfield properties are transferred to new operators, abandoned, or converted to other uses such as real estate. There is also concern about the methods and relative costs to remediate soil which has been contaminated with crude oil. Modern, well-designed soil bioremediation systems are cost effective for the treatment of crude oil contamination, and these systems can eliminate an owner's subsequent liabilities. Compared to traditional land-farming practices, a modern on-site bioremediation system (1) requires significantly less surface area, (2) results in lower operating costs, and (3) provides more expeditious results. Compared to excavation and off-site disposal of the contaminated soil, on-site bioremediation will eliminate subsequent liabilities and is typically more cost effective. Case studies indicate that on-site bioremediation systems have been successful at reducing the crude oil contamination in soil to levels which are acceptable to regulatory agencies in less than 10 weeks. Total costs for on-site bioremediation have ranged from $35 to $40 per cubic yard of treated soil, including excavation

  3. Computer-aided system design

    Science.gov (United States)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  4. A novel representation of inter-site tumour heterogeneity from pre-treatment computed tomography textures classifies ovarian cancers by clinical outcome

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, Hebert Alberto; Micco, Maura; Lakhman, Yulia; Meier, Andreas A.; Sosa, Ramon; Hricak, Hedvig; Sala, Evis [Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY (United States)]; Veeraraghavan, Harini; Deasy, Joseph [Memorial Sloan Kettering Cancer Center, Department of Medical Physics, New York, NY (United States)]; Nougaret, Stephanie [Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY (United States); Service de Radiologie, Institut Regional du Cancer de Montpellier, Montpellier (France); INSERM, U1194, Institut de Recherche en Cancerologie de Montpellier (IRCM), Montpellier (France)]; Soslow, Robert A.; Weigelt, Britta [Memorial Sloan Kettering Cancer Center, Department of Pathology, New York, NY (United States)]; Levine, Douglas A. [Memorial Sloan Kettering Cancer Center, Department of Surgery, New York, NY (United States)]; Aghajanian, Carol; Snyder, Alexandra [Memorial Sloan Kettering Cancer Center, Department of Medicine, New York, NY (United States)]

    2017-09-15

    To evaluate the associations between clinical outcomes and radiomics-derived inter-site spatial heterogeneity metrics across multiple metastatic lesions on CT in patients with high-grade serous ovarian cancer (HGSOC). IRB-approved retrospective study of 38 HGSOC patients. All sites of suspected HGSOC involvement on preoperative CT were manually segmented. Gray-level correlation matrix-based textures were computed from each tumour site, and grouped into five clusters using a Gaussian Mixture Model. Pairwise inter-site similarities were computed, generating an inter-site similarity matrix (ISM). Inter-site texture heterogeneity metrics were computed from the ISM and compared to clinical outcomes. Of the 12 inter-site texture heterogeneity metrics evaluated, those capturing the differences in texture similarities across sites were associated with shorter overall survival (inter-site similarity entropy, similarity level cluster shade, and inter-site similarity level cluster prominence; p ≤ 0.05) and incomplete surgical resection (similarity level cluster shade, inter-site similarity level cluster prominence and inter-site cluster variance; p ≤ 0.05). Neither the total number of disease sites per patient nor the overall tumour volume per patient was associated with overall survival. Amplification of 19q12 involving cyclin E1 gene (CCNE1) predominantly occurred in patients with more heterogeneous inter-site textures. Quantitative metrics non-invasively capturing spatial inter-site heterogeneity may predict outcomes in patients with HGSOC. (orig.)
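
    One of the named metrics, inter-site similarity entropy, can be illustrated schematically: build a pairwise inter-site similarity matrix (ISM) from per-lesion texture vectors and take the Shannon entropy of the normalised similarities. The feature values, the distance-to-similarity mapping, and the normalisation below are illustrative assumptions rather than the paper's exact definitions.

```python
# Schematic inter-site similarity entropy from per-lesion texture vectors.
import numpy as np

rng = np.random.default_rng(0)
textures = rng.random((5, 8))        # 5 tumour sites x 8 texture features

# Pairwise similarity matrix (ISM) from Euclidean distances (assumed mapping).
d = np.linalg.norm(textures[:, None, :] - textures[None, :, :], axis=-1)
ism = 1.0 / (1.0 + d)

# Shannon entropy of the normalised off-diagonal similarities:
# higher entropy = more heterogeneous inter-site texture relationships.
vals = ism[np.triu_indices_from(ism, k=1)]
p = vals / vals.sum()
entropy = -np.sum(p * np.log2(p))
print(f"inter-site similarity entropy = {entropy:.3f} bits")
```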

  5. Applications of small computers for systems control on the Tandem Mirror Experiment-Upgrade

    International Nuclear Information System (INIS)

    Bork, R.G.; Kane, R.J.; Moore, T.L.

    1983-01-01

    Desktop computers operating into a CAMAC-based interface are used to control and monitor the operation of the various subsystems on the Tandem Mirror Experiment-Upgrade (TMX-U) at Lawrence Livermore National Laboratory (LLNL). These systems include: shot sequencer/master timing, neutral beam control (four consoles), magnet power system control, ion-cyclotron resonant heating (ICRH) control, thermocouple monitoring, getter system control, gas fueling system control, and electron-cyclotron resonant heating (ECRH) monitoring. Two additional computers are used to control the TMX-U neutral beam test stand and provide computer-aided repair/test and development of CAMAC modules. These machines are usually programmed in BASIC, but some codes have been interpreted into assembly language to increase speed. Details of the computer interfaces and system complexity are described as well as the evolution of the systems to their present states

  6. The application of geological computer modelling systems to the characterisation and assessment of radioactive waste repositories

    International Nuclear Information System (INIS)

    White, M.J.; Del Olmo, C.

    1996-01-01

    The deep disposal of radioactive waste requires the collection and analysis of large amounts of geological data. These data give information on the geological and hydrogeological setting of repositories and research sites, including the geological structure and the nature of the groundwater. The collection of these data is required in order to develop an understanding of the geology and the geological evolution of sites and to provide quantitative information for performance assessments. An integrated approach to the interpretation and provision of these data is proposed in this paper, via the use of computer systems, here termed geological modelling systems. Geological modelling systems are families of software programmes which allow the incorporation of site investigation data into integrated 3D models of sub-surface geology

  7. Methods for computing water-quality loads at sites in the U.S. Geological Survey National Water Quality Network

    Science.gov (United States)

    Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.

    2017-10-24

    The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
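
    For orientation, a constituent load is simply concentration times streamflow integrated over time; the sketch below shows the unit bookkeeping for one day. This is only the definitional step, not the WRTDS method itself, which fits concentration as a function of time, discharge, and season.

```python
def daily_load_kg(concentration_mg_per_l, discharge_m3_per_s):
    """Daily constituent load in kilograms:
    mg/L * m^3/s * 86400 s/day * 1000 L/m^3 * 1e-6 kg/mg = 86.4 * c * Q.
    """
    return 86.4 * concentration_mg_per_l * discharge_m3_per_s

# Example: 2.5 mg/L of nitrate at 150 m^3/s -> 32,400 kg/day.
print(daily_load_kg(2.5, 150.0))
```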

  8. Computer control system of TRISTAN

    International Nuclear Information System (INIS)

    Kurokawa, Shin-ichi; Shinomoto, Manabu; Kurihara, Michio; Sakai, Hiroshi.

    1984-01-01

    For the operation of a large accelerator, it is necessary to connect an enormous quantity of electro-magnets, power sources, vacuum equipment, high-frequency accelerating equipment and so on, and to control them harmoniously. For this purpose, a number of computers are adopted and connected with a network; in this way, a large computer system for laboratory automation, which integrates and controls the whole system, is constructed. As a distributed system of large scale, functions such as electro-magnet control, file processing and operation control are assigned to respective computers, and total control is made feasible by the network connection. At the same time, the CAMAC (computer-aided measurement and control) standard is adopted as the interface with the controlled equipment, to ensure the flexibility and the possibility of expansion of the system. Moreover, the language ''NODAL'', which has network support functions, was developed to make it easy to write software without considering the composition of the complex distributed system. The accelerator in the TRISTAN project is composed of an electron linear accelerator, an accumulation ring of 6 GeV and a main ring of 30 GeV. The two ring accelerators must be operated synchronously as one body, and are controlled with one computer system. The hardware and software are outlined. (Kako, I.)

  9. Retrofitting of NPP Computer systems

    International Nuclear Information System (INIS)

    Pettersen, G.

    1994-01-01

    Retrofitting of nuclear power plant control rooms is a continuing process for most utilities. This involves introducing and/or extending computer-based solutions for surveillance and control, as well as improving the human-computer interface. The paper describes typical requirements when retrofitting NPP process computer systems, and focuses on the activities of the Institutt for energiteknikk, OECD Halden Reactor Project, with respect to such retrofitting, using examples from actual delivery projects. In particular, a project carried out for Forsmarksverket in Sweden comprising an upgrade of the operator system in the control rooms of units 1 and 2 is described. As many of the problems of retrofitting NPP process computer systems are similar to such work in other kinds of process industries, an example from a non-nuclear application area is also given.

  10. POLYAR, a new computer program for prediction of poly(A) sites in human sequences

    Directory of Open Access Journals (Sweden)

    Qamar Raheel

    2010-11-01

    Full Text Available Abstract Background mRNA polyadenylation is an essential step of pre-mRNA processing in eukaryotes. Accurate prediction of the pre-mRNA 3'-end cleavage/polyadenylation sites is important for defining the gene boundaries and understanding gene expression mechanisms. Results 28761 human mapped poly(A) sites have been classified into three classes containing different known forms of polyadenylation signal (PAS) or none of them (PAS-strong, PAS-weak and PAS-less, respectively), and a new computer program POLYAR for the prediction of poly(A) sites of each class was developed. In comparison with polya_svm (to date the most accurate computer program for prediction of poly(A) sites) while searching for PAS-strong poly(A) sites in human sequences, POLYAR had a significantly higher prediction sensitivity (80.8% versus 65.7%) and specificity (66.4% versus 51.7%). However, when a similar sort of search was conducted for PAS-weak and PAS-less poly(A) sites, both programs had a very low prediction accuracy, which indicates that our knowledge about factors involved in the determination of the poly(A) sites is not sufficient to identify such polyadenylation regions. Conclusions We present a new classification of polyadenylation sites into three classes and a novel computer program POLYAR for prediction of poly(A) sites/regions of each of the classes. In tests, POLYAR shows high accuracy of prediction of the PAS-strong poly(A) sites, though this program's efficiency in searching for PAS-weak and PAS-less poly(A) sites is not very high but is comparable to other available programs. These findings suggest that additional characteristics of such poly(A) sites remain to be elucidated. The POLYAR program, with a stand-alone version for downloading, is available at http://cub.comsats.edu.pk/polyapredict.htm.
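
    The sensitivity and specificity figures quoted above are the standard confusion-matrix ratios; a minimal reference sketch follows (not part of POLYAR; the counts are invented to reproduce the quoted percentages).

```python
def sensitivity(tp, fn):
    """Fraction of true poly(A) sites the predictor recovers: TP/(TP+FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-sites correctly rejected: TN/(TN+FP)."""
    return tn / (tn + fp)

# Invented counts for illustration: 808 of 1000 real sites found and
# 664 of 1000 negatives rejected -> 80.8% sensitivity, 66.4% specificity.
print(sensitivity(808, 192), specificity(664, 336))
```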

  11. Information system of forest growth and productivity by site quality type and elements of forest

    Science.gov (United States)

    Khlyustov, V.

    2012-04-01

    simulate forest parameters and their dynamics. The system can substitute for the traditional processing of forest inventory field data, provide users with detailed information on the current state of the forest, and give predictions. Implementation of the proposed system in combination with high-resolution remote sensing can significantly increase the quality of forest inventory while reducing costs. The system is a contribution to site-oriented forest management. The system is registered in the Russian State Register of Computer Programs 12.07.2011, No 2011615418.

  12. LLNL Experimental Test Site (Site 300) Potable Water System Operations Plan

    Energy Technology Data Exchange (ETDEWEB)

    Ocampo, R. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bellah, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-14

    The existing Lawrence Livermore National Laboratory (LLNL) Site 300 drinking water system operation schematic is shown in Figures 1 and 2 below. The sources of water are from two Site 300 wells (Well #18 and Well #20) and San Francisco Public Utilities Commission (SFPUC) Hetch-Hetchy water through the Thomas shaft pumping station. Currently, Well #20 with 300 gallons per minute (gpm) pump capacity is the primary source of well water used during the months of September through July, while Well #18 with 225 gpm pump capacity is the source of well water for the month of August. The well water is chlorinated using sodium hypochlorite to provide required residual chlorine throughout Site 300. Well water chlorination is covered in the Lawrence Livermore National Laboratory Experimental Test Site (Site 300) Chlorination Plan (“the Chlorination Plan”; LLNL-TR-642903; current version dated August 2013). The third source of water is the SFPUC Hetch-Hetchy Water System through the Thomas shaft facility with a 150 gpm pump capacity. At the Thomas shaft station the pumped water is treated through SFPUC-owned and operated ultraviolet (UV) reactor disinfection units on its way to Site 300. The Thomas Shaft Hetch-Hetchy water line is connected to the Site 300 water system through the line common to Well pumps #18 and #20 at valve box #1.

  13. PHENIX On-Line Distributed Computing System Architecture

    International Nuclear Information System (INIS)

    Desmond, Edmond; Haggerty, John; Kehayias, Hyon Joo; Purschke, Martin L.; Witzig, Chris; Kozlowski, Thomas

    1997-01-01

    PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) currently under construction at Brookhaven National Laboratory. The detector consists of 11 sub-detectors, which are further subdivided into 29 units (''granules'') that can be operated independently, including simultaneous data taking with independent data streams and independent triggers. The detector has 250,000 channels and is read out by front end modules, where the data is buffered in a pipeline while awaiting the level-1 trigger decision. Zero suppression and calibration are done after the level-1 accept in custom-built data collection modules (DCMs) with DSPs before the data is sent to an event builder (design throughput of 2 Gb/sec) and higher level triggers. The On-line Computing Systems Group (ONCS) has two responsibilities. Firstly, it is responsible for receiving the data from the event builder, routing it through a network of workstations to consumer processes, and archiving it at a data rate of 20 MB/sec. Secondly, it is responsible for the overall configuration, control and operation of the detector and data acquisition chain, which comprises the software integration for several thousand custom-built hardware modules. The software must furthermore support the independent operation of the above mentioned granules, which includes the coordination of processes that run in 60-100 VME processors and workstations. ONCS has adapted the Shlaer-Mellor Object Oriented Methodology for the design of the top layer software. CORBA is used as the communication layer between the distributed objects, which are implemented as asynchronous finite state machines. We give an overview of the PHENIX online system with the main focus on the system architecture, software components and integration tasks of the On-line Computing group ONCS, and report on the status of the current prototypes.

  14. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by the current Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  15. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and their performance characteristics of such a hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics, using the identified objectives of computing, which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  16. System matrix computation vs storage on GPU: A comparative study in cone beam CT.

    Science.gov (United States)

    Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2018-02-01

    Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphics processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage has shown a performance similar to the on-the-fly approach, while still relying on symmetries. Partial system matrix storage was shown to yield the lowest relative
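
    A back-of-the-envelope calculation shows why full uncompressed storage is problematic at the quoted geometry, even in sparse form; the number of views and the bytes per stored coefficient below are assumptions made only for illustration.

```python
def sparse_system_matrix_bytes(n_views, det_u=512, det_v=448, n=384,
                               bytes_per_entry=8):
    """Rough sparse-storage footprint of a CBCT thin-ray system matrix.

    Each thin ray crosses at most about 3*n voxels of an n^3 grid
    (Siddon-style traversal); each stored intersection needs a value
    plus an index (bytes_per_entry is an assumption).
    """
    n_rays = n_views * det_u * det_v   # one ray per detector pixel per view
    nnz_per_ray = 3 * n                # upper bound on voxel crossings
    return n_rays * nnz_per_ray * bytes_per_entry

# With an assumed 360 views: ~0.76 TB even in sparse form, far beyond GPU RAM.
print(sparse_system_matrix_bytes(360) / 1e12, "TB")
```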

  17. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. CEO Sites Mission Management System (SMMS)

    Science.gov (United States)

    Trenchard, Mike

    2014-01-01

    uses the SMMS for three general functions - database queries of content and status, individual site creation and updates, and mission planning. The CEO administrator of the science site database is able to create or modify the content of sites and activate or deactivate them based on the requirements of the sponsors. The administrator supports and implements ISS mission planning by assembling, reporting, and activating mission-specific site selections for management; deactivating sites as requirements are met; and creating new sites, such as International Charter sites for disasters, as circumstances warrant. In addition to the above CEO internal uses, when site planning for a specific ISS mission is complete and approved, the SMMS can produce and export those essential site database elements for the mission into XML format for use by onboard Earth-location systems, such as Worldmap. The design, development, and implementation of the SMMS resulted in a superior database management system for CEO science sites by focusing on the functions and applications of the database alone instead of integrating the database with the multipurpose configuration of the AMPS. Unlike the AMPS, it can function and be modified within the existing Windows 7 environment. The functions and applications of the SMMS were expanded to accommodate more database elements, report products, and a streamlined interface for data entry and review. A particularly elegant enhancement in data entry was the integration of the Google Earth application for the visual display and definition of site coordinates for site areas defined by multiple coordinates. Transfer between the SMMS and Google Earth is accomplished with a Keyhole Markup Language (KML) expression of geographic data (see figures 3 and 4). Site coordinates may be entered into the SMMS panel directly for display in Google Earth, or the coordinates may be defined on the Google Earth display as a mouse-controlled polygonal definition and

  19. Introduction of library administration system using an office computer in the smallscale library

    International Nuclear Information System (INIS)

    Itabashi, Keizo; Ishikawa, Masashi

    1984-01-01

    The Research Information Center was established in the Fusion Research Center at the Naka site as a new section of the Department of Technical Information of the Japan Atomic Energy Research Institute. A library materials management system utilizing an office computer was introduced to provide good services. The system is a total system centered on counter services, excluding purchasing operations, and the materials served are books, reports, journals and pamphlets. The system has produced good effects in many respects, e.g., a significantly easier inventory of library materials and the complete removal of users' handwriting for borrowing materials, achieved by using an optical character recognition handscanner. These improvements have resulted in a better image of the library. (author)

  20. Asymmetric exclusion processes with site sharing in a one-channel transport system

    International Nuclear Information System (INIS)

    Liu Mingzhe; Hawick, Ken; Marsland, Stephen

    2010-01-01

    This Letter investigates a two-species totally asymmetric simple exclusion process (TASEP) with site sharing in a one-channel transport system. In the model, different species of particles may share the same sites, while particles of the same species may not (hard-core exclusion). The site-sharing mechanism is applied to the bulk as well as the boundaries. Such a sharing mechanism within the framework of the TASEP has been largely ignored so far. The steady-state phase diagrams, currents and bulk densities are obtained using a mean-field approximation and computer simulations. The presence of three stationary phases (low-density, high-density, and maximal current) is identified. A comparison of the stationary current with the Bridge model [M.R. Evans, et al., Phys. Rev. Lett. 74 (1995) 208] has shown that our model can enhance the current. The theoretical calculations are well supported by Monte Carlo simulations.
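
    A minimal Monte Carlo sketch of the site-sharing rule is given below: two species hop to the right on an open one-dimensional lattice, a site may hold one particle of each species but never two of the same species, and alpha/beta control injection and extraction at the boundaries. The update scheme and parameter values are illustrative and are not taken from the Letter.

```python
import random

def tasep_site_sharing(L=100, alpha=0.3, beta=0.5, steps=200_000, seed=1):
    """Two-species TASEP in which different species may share a site.

    lattice[i] is the set of species labels at site i; hard-core exclusion
    applies only within a species. alpha/beta are entry/exit probabilities
    at the open boundaries (taken equal for both species, an assumption).
    """
    random.seed(seed)
    lattice = [set() for _ in range(L)]
    exits = 0
    for _ in range(steps):
        i = random.randrange(-1, L)          # -1 means "attempt injection"
        s = random.choice((0, 1))            # pick a species at random
        if i == -1:                          # entry at the left boundary
            if s not in lattice[0] and random.random() < alpha:
                lattice[0].add(s)
        elif i == L - 1:                     # exit at the right boundary
            if s in lattice[i] and random.random() < beta:
                lattice[i].remove(s)
                exits += 1
        elif s in lattice[i] and s not in lattice[i + 1]:
            lattice[i].remove(s)             # bulk hop: blocked only by a
            lattice[i + 1].add(s)            # particle of the same species
    return exits / steps                     # crude measure of the current

print(tasep_site_sharing())
```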

  1. Dosicard: on-site evaluation of a new individual dosimetry system

    International Nuclear Information System (INIS)

    Delacroix, D.; Guelin, M.; Lyron, C.; Feraud, J.P.

    1995-01-01

    Dosicard is a new individual dosimetry system developed to monitor personnel working in the following fields: civil and military nuclear applications, medical environments and research centres; it can also be used to monitor mobile personnel. The system is based on the use of a credit-card sized electronic badge. The associated computer environment enables management of the acquired dosimetric data. The characteristics of the system are presented in this paper together with an evaluation of the results of six months' use in a nuclear research centre. (author)

  2. Intelligent computational systems for space applications

    Science.gov (United States)

    Lum, Henry; Lau, Sonie

    Intelligent computational systems can be described as an adaptive computational system integrating both traditional computational approaches and artificial intelligence (AI) methodologies to meet the science and engineering data processing requirements imposed by specific mission objectives. These systems will be capable of integrating, interpreting, and understanding sensor input information; correlating that information to the "world model" stored within its data base and understanding the differences, if any; defining, verifying, and validating a command sequence to merge the "external world" with the "internal world model"; and, controlling the vehicle and/or platform to meet the scientific and engineering mission objectives. Performance and simulation data obtained to date indicate that the current flight processors baselined for many missions such as Space Station Freedom do not have the computational power to meet the challenges of advanced automation and robotics systems envisioned for the year 2000 era. Research issues which must be addressed to achieve greater than giga-flop performance for on-board intelligent computational systems have been identified, and a technology development program has been initiated to achieve the desired long-term system performance objectives.

  3. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  4. The Administrative Impact of Computers on the British Columbia Public School System.

    Science.gov (United States)

    Gibbens, Trevor P.

    This case study analyzes and evaluates the administrative computer systems in the British Columbia public school organization in order to investigate the costs and benefits of computers, their impact on managerial work, their influence on centralization in organizations, and the relationship between organizational objectives and the design of…

  5. Computation of the Likelihood of Joint Site Frequency Spectra Using Orthogonal Polynomials

    Directory of Open Access Journals (Sweden)

    Claus Vogl

    2016-02-01

    Full Text Available In population genetics, information about evolutionary forces, e.g., mutation, selection and genetic drift, is often inferred from DNA sequence information. Generally, DNA consists of two long strands of nucleotides or sites that pair via the complementary bases cytosine and guanine (C and G), on the one hand, and adenine and thymine (A and T), on the other. With whole genome sequencing, most genomic information stored in the DNA has become available for multiple individuals of one or more populations, at least in humans and model species, such as fruit flies of the genus Drosophila. In a genome-wide sample of L sites for M (haploid) individuals, the state of each site may be made binary, by binning the complementary bases, e.g., C with G to C/G, and contrasting C/G to A/T, to obtain a “site frequency spectrum” (SFS). Two such samples, of either a single population from different time-points or two related populations from a single time-point, are called joint site frequency spectra (joint SFS). While mathematical models describing the interplay of mutation, drift and selection have been available for more than 80 years, calculation of exact likelihoods from joint SFS is difficult. Sufficient statistics for inference of, e.g., mutation or selection parameters that would make use of all the information in the genomic data are rarely available. Hence, often suites of crude summary statistics are combined in simulation-based computational approaches. In this article, we use a bi-allelic boundary-mutation and drift population genetic model to compute the transition probabilities of joint SFS using orthogonal polynomials. This allows inference of population genetic parameters, such as the mutation rate (scaled by the population size) and the time separating the two samples. We apply this inference method to a population dataset of neutrally-evolving short intronic sites from six DNA sequences of the fruit fly Drosophila melanogaster and the reference
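
    The tallying step that turns binned binary data into a (joint) SFS is straightforward and is sketched below; the orthogonal-polynomial transition-probability machinery of the article is not reproduced here, and the random data is purely illustrative.

```python
import numpy as np

def site_frequency_spectrum(genotypes):
    """genotypes: (L_sites, M_haploids) 0/1 array after binning C/G vs A/T.

    Returns counts of sites whose derived-allele count is 0..M.
    """
    g = np.asarray(genotypes)
    M = g.shape[1]
    return np.bincount(g.sum(axis=1), minlength=M + 1)

def joint_sfs(g1, g2):
    """Joint SFS of two samples over the same L sites: a 2-D histogram
    of per-site allele counts."""
    g1, g2 = np.asarray(g1), np.asarray(g2)
    M1, M2 = g1.shape[1], g2.shape[1]
    h, _, _ = np.histogram2d(g1.sum(axis=1), g2.sum(axis=1),
                             bins=[np.arange(M1 + 2), np.arange(M2 + 2)])
    return h.astype(int)

rng = np.random.default_rng(1)
g1 = rng.integers(0, 2, size=(1000, 6))   # e.g. six haploid sequences
g2 = rng.integers(0, 2, size=(1000, 6))
print(site_frequency_spectrum(g1))
print(joint_sfs(g1, g2).shape)            # (7, 7): counts 0..6 in each sample
```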

  6. The Department of Energy Nevada Test Site Remote Area Monitoring System

    International Nuclear Information System (INIS)

    Sanders, L.D.; Hart, O.F.

    1993-01-01

    The Remote Area Monitoring System was developed by Los Alamos National Laboratory (LANL) for DOE test directors at the Nevada Test Site (NTS) to verify radiological conditions are safe after a nuclear test. In the unlikely event of a venting as a result of a nuclear test, this system provides radiological and meteorological data to Weather Service Nuclear Support Office (WSNSO) computers where mesoscale models are used to predict downwind exposure rates. The system uses a combination of hardwired radiation sensors and satellite based data acquisition units with their own radiation sensors to measure exposure rates in remote areas of the NTS. The satellite based data acquisition units are available as small, Portable Remote Area Monitors (RAMs) for rapid deployment, and larger, Semipermanent RAMs that can have meteorological towers. The satellite based stations measure exposure rates and transmit measurements to the GOES (Geostationary Operational Environmental Satellite) where they are relayed to Direct Readout Ground Stations (DRGS) at the NTS and Los Alamos. Computers process the data and display results in the NTS Operations Coordination Center. Los Alamos computers and NTS computers are linked together through a wide area network, providing remote redundant system capability. Recently, LANL expanded the system to take radiological and meteorological measurements in communities in the western United States. The system was also expanded to acquire data from Remote Automatic Weather Stations (RAWS) that transmit through GOES. The addition of Portable and Semipermanent RAMs to the system has vastly expanded monitoring capabilities at NTS and can be used to take measurements anywhere in this hemisphere.

  7. Development of an international matrix-solver prediction system on a French-Japanese international grid computing environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kushida, Noriyuki; Tatekawa, Takayuki; Teshima, Naoya; Caniou, Yves; Guivarch, Ronan; Dayde, Michel; Ramet, Pierre

    2010-01-01

    The 'Research and Development of International Matrix-Solver Prediction System (REDIMPS)' project aimed at improving the TLSE sparse linear algebra expert website by establishing an international grid computing environment between Japan and France. To help users in identifying the best solver or sparse linear algebra tool for their problems, we have developed an interoperable environment between French and Japanese grid infrastructures (respectively managed by DIET and AEGIS). Two main issues were considered. The first issue is how to submit a job from DIET to AEGIS. The second issue is how to bridge the difference in security between DIET and AEGIS. To overcome these issues, we developed APIs to communicate between the different grid infrastructures by improving the client API of AEGIS. By developing a server daemon program (SeD) of DIET which behaves like an AEGIS user, DIET can call functions in AEGIS: authentication, file transfer, job submission, and so on. To strengthen security, we also developed functionalities to authenticate DIET sites and DIET users in order to access AEGIS computing resources. Through this study, the set of software and computers available within TLSE to find an appropriate solver is enlarged over France (DIET) and Japan (AEGIS). (author)

  8. Torness computer system turns round data

    International Nuclear Information System (INIS)

    Dowler, E.; Hamilton, J.

    1989-01-01

    The Torness nuclear power station has two advanced gas-cooled reactors. A key feature is the distributed computer system which covers both data processing and auto-control. The complete computer system has over 80 processors with 45000 digital and 22000 analogue input signals. The on-line control and monitoring systems include operating systems, plant data acquisition and processing, alarm and event detection, communications software, process management systems and database management software. Some features of the system are described. (UK)

  9. 37Ar monitoring techniques and on-site inspection system

    International Nuclear Information System (INIS)

    Duan Rongliang; Chen Yinliang; Li Wei; Wang Hongxia; Hao Fanhua

    2001-01-01

    37Ar is separated, purified and extracted from an air sample with a low-temperature gas-solid chromatographic purifying method, prepared into a radioactive measurement source, and its radioactivity is measured with a proportional counter. Based on the monitoring result, a judgement can be made as to whether a nuclear explosion event has happened recently in a suspect area. A series of element techniques associated with the monitoring of trace 37Ar have been investigated and developed. Those techniques include leaked-gas sampling, 37Ar separation and purification, 37Ar radioactivity measurement and the on-site inspection of 37Ar. An advanced 37Ar monitoring method has been developed, with which 200 liters of air can be treated in 2 hours, with a sensitivity of 0.01 Bq/L for the 37Ar radioactivity measurement. A practical 37Ar on-site inspection system has been developed. This research work may provide technical and equipment support for verification protection, verification supervision and CTBT verification.

  10. NFDRSPC: The National Fire-Danger Rating System on a Personal Computer

    Science.gov (United States)

    Bryan G. Donaldson; James T. Paul

    1990-01-01

    This user's guide is an introductory manual for using the 1988 version (Burgan 1988) of the National Fire-Danger Rating System on an IBM PC or compatible computer. NFDRSPC is a window-oriented, interactive computer program that processes observed and forecast weather with fuels data to produce NFDRS indices. Other program features include user-designed display...

  11. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) was introduced in the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator.

  12. Core status computing system

    International Nuclear Information System (INIS)

    Yoshida, Hiroyuki.

    1982-01-01

    Purpose: To calculate power distribution, flow rate and the like in the reactor core with high accuracy in a BWR type reactor. Constitution: Total flow rate signals, traverse incore probe (TIP) signals as the neutron detector signals, thermal power signals and pressure signals are inputted into a process computer, where the power distribution and the flow rate distribution in the reactor core are calculated. A function generator connected to the process computer calculates the absolute flow rate passing through optional fuel assemblies using, as variables, flow rate signals from the introduction part for fuel assembly flow rate signals, data signals from the introduction part for the geometrical configuration data at the flow rate measuring site of fuel assemblies, total flow rate signals for the reactor core and the signals from the process computer. Numerical values thus obtained are given to the process computer as correction signals to perform correction for the experimental data. (Moriyama, K.)

  13. Challenge of Replacing Obsolete Equipment and Systems on Brownfield Sites

    International Nuclear Information System (INIS)

    Teasdale, St.

    2009-01-01

    The Nuclear Decommissioning Authority (NDA) is responsible for the decommissioning and clean-up of the UK's civil public sector nuclear sites. One of their top priorities is the retrieval of sludge and fuel from the First Generation Magnox Fuel Storage Pond (FGMSP) at the Sellafield site, which is one of the most complex and compact nuclear sites in the world. The FGMSP plant is currently undergoing a series of major modifications in preparation for the retrievals operations. One of the most challenging modifications undertaken in the facility has been the Control and Surveillance Project, which covered replacement of the existing Environmental Monitoring System; this presented the complex challenge of replacing an existing system whilst maintaining full functionality on a live radiological safety system with a constant radiological hazard. The Control and Surveillance Project involved the design, procurement, installation, changeover and commissioning of a new Radiological Surveillance System (alpha, beta and gamma monitoring) along with Building Evacuation Systems within the FGMSP complex to replace the existing obsolete system. This Project was a key enabler to future FGMSP retrievals and decommissioning activities. The project objective was to create and maintain a safe radiological working environment for over 450 personnel working in the plant up to 2020. The Legacy Ponds at Sellafield represent one of the biggest challenges in the civil nuclear clean-up portfolio in the UK. Retrieval of sludge and fuel from the First Generation Magnox Fuel Storage Pond (FGMSP), and its safe long term storage, is one of the NDA's top priorities. In June 2002 Sellafield Ltd contracted with the ACKtiv Nuclear Joint Venture to progress the risk mitigation, asset restoration and the early enabling works associated with preparation for clean-up. Since then significant progress has been made in preparing the facility, and its support systems, for the clean-up operations. This has been

  14. Droplet-counting Microtitration System for Precise On-site Analysis.

    Science.gov (United States)

    Kawakubo, Susumu; Omori, Taichi; Suzuki, Yasutada; Ueta, Ikuo

    2018-01-01

    A new microtitration system based on the counting of titrant droplets has been developed for precise on-site analysis. The dropping rate was controlled by inserting a capillary tube as a flow resistance in a laboratory-made micropipette. The error of titration was 3% in a simulated titration with 20 droplets. The pre-addition of a titrant was proposed for precise titration within an error of 0.5%. The analytical performances were evaluated for chelate titration, redox titration and acid-base titration.
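
    The reported gain from pre-addition can be rationalised with a simple quantization argument: counting droplets is exact, but the endpoint is only resolved to about one droplet, so the relative error scales as one droplet over the total titrant volume, and pre-adding a known volume enlarges that denominator. The ±1-droplet model below is an assumption made for illustration, not the authors' error analysis.

```python
def relative_titration_error(counted_droplets, pre_added_droplets=0):
    """Worst-case relative error if the endpoint is uncertain by one droplet."""
    return 1.0 / (counted_droplets + pre_added_droplets)

# 20 counted droplets alone: ~5% quantization limit (3% error was reported).
print(relative_titration_error(20))
# Pre-add the equivalent of 180 droplets and count the last 20: 0.5%.
print(relative_titration_error(20, pre_added_droplets=180))
```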

  15. Three computer codes to read, plot, and tabulate operational test-site recorded solar data. [TAPFIL, CHPLOT, and WRTCNL codes

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, S.D.; Sampson, R.J. Jr.; Stonemetz, R.E.; Rouse, S.L.

    1980-07-01

    A computer program, TAPFIL, has been developed by MSFC to read data from an IBM 360 tape for use on the PDP 11/70. The information (insolation, flowrates, temperatures, etc.) from 48 operational solar heating and cooling test sites is stored on the tapes. Two other programs, CHPLOT and WRTCNL, have been developed to plot and tabulate the data. These data will be used in the evaluation of collector efficiency and solar system performance. This report describes the methodology of the programs, their inputs, and their outputs.

  16. Computational Study on Atomic Structures, Electronic Properties, and Chemical Reactions at Surfaces and Interfaces and in Biomaterials

    Science.gov (United States)

    Takano, Yu; Kobayashi, Nobuhiko; Morikawa, Yoshitada

    2018-06-01

    Through computer simulations using atomistic models, it is becoming possible to calculate the atomic structures of localized defects or dopants in semiconductors, chemically active sites in heterogeneous catalysts, nanoscale structures, and active sites in biological systems precisely. Furthermore, it is also possible to clarify physical and chemical properties possessed by these nanoscale structures such as electronic states, electronic and atomic transport properties, optical properties, and chemical reactivity. It is sometimes quite difficult to clarify these nanoscale structure-function relations experimentally and, therefore, accurate computational studies are indispensable in materials science. In this paper, we review recent studies on the relation between local structures and functions for inorganic, organic, and biological systems by using atomistic computer simulations.

  17. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.   §  Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models §  Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  18. Summary of computational support and general documentation for computer code (GENTREE) used in Office of Nuclear Waste Isolation Pilot Salt Site Selection Project

    International Nuclear Information System (INIS)

    Beatty, J.A.; Younker, J.L.; Rousseau, W.F.; Elayat, H.A.

    1983-01-01

    A Decision Tree Computer Model was adapted for the purposes of a Pilot Salt Site Selection Project conducted by the Office of Nuclear Waste Isolation (ONWI). A deterministic computer model was developed to structure the site selection problem with submodels reflecting the five major outcome categories (Cost, Safety, Delay, Environment, Community Impact) to be evaluated in the decision process. Time-saving modifications were made in the tree code as part of the effort. In addition, format changes allowed retention of information items which are valuable in directing future research and in isolating key variabilities in the Site Selection Decision Model. The deterministic code was linked to the modified tree code and the entire program was transferred to the ONWI-VAX computer for future use by the ONWI project.

  19. System transient response to loss of off-site power

    International Nuclear Information System (INIS)

    Sozer, A.

    1990-01-01

    A simultaneous trip of the reactor, main circulation pumps, secondary coolant pumps, and pressurizer pump due to loss of off-site power at the High Flux Isotope Reactor (HFIR) located at the Oak Ridge National Laboratory (ORNL) has been analyzed to estimate the available safety margin. A computer model based on the Modular Modeling System code has been used to calculate the transient response of the system. The reactor depressurizes from 482.7 psia down to about 23 psia in about 50 seconds and remains stable thereafter. The available safety margin has been estimated in terms of the incipient boiling heat flux ratio. The estimate is conservative because less-than-available primary and secondary flows and a higher-than-normal depressurization rate were assumed. The ratio indicates no incipient boiling conditions at the hot spot. No potential damage to the fuel is likely to occur during this transient. 2 refs., 6 figs.

  20. A MULTICORE COMPUTER SYSTEM FOR DESIGN OF STREAM CIPHERS BASED ON RANDOM FEEDBACK

    Directory of Open Access Journals (Sweden)

    Borislav BEDZHEV

    2013-01-01

    Full Text Available Stream ciphers are an important tool for providing information security in present communication and computer networks. For this reason, our paper describes a multicore computer system for the design of stream ciphers based on the so-named random feedback shift registers (RFSRs). The interest in this theme is inspired by the following facts. First, RFSRs are a relatively new type of stream cipher which demonstrates a significant enhancement of crypto-resistance in comparison with classical stream ciphers. Second, the study of the features of RFSRs is at a very early stage. Third, the theory of RFSRs seems to be very hard, which makes it necessary to explore RFSRs mainly by means of computer models. The paper is organized as follows. First, the basics of RFSRs are recalled. After that, our multicore computer system for the design of stream ciphers based on RFSRs is presented. Finally, the advantages and possible areas of application of the computer system are discussed.
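
    For readers unfamiliar with feedback shift registers, the sketch below shows a classical linear feedback shift register (LFSR) keystream generator; in an RFSR the fixed tap set would instead be varied under the control of a random source. The code illustrates only the classical baseline, not the RFSR construction of the paper.

```python
def lfsr_stream(state, taps, width, n_bits):
    """Classical Fibonacci LFSR keystream (illustrative baseline only).

    state: non-zero initial register contents; taps: bit positions XORed
    to form the feedback bit; width: register length in bits.
    """
    out = []
    for _ in range(n_bits):
        out.append(state & 1)                 # emit the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1            # feedback from the tap set
        state = (state >> 1) | (fb << (width - 1))
    return out

# Classic 16-bit example: taps at bits 0, 2, 3 and 5
# (polynomial x^16 + x^14 + x^13 + x^11 + 1, a maximal-length register).
print(lfsr_stream(0xACE1, (0, 2, 3, 5), 16, 16))
```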

  1. Computer-Assisted English Learning System Based on Free Conversation by Topic

    Science.gov (United States)

    Choi, Sung-Kwon; Kwon, Oh-Woog; Kim, Young-Kil

    2017-01-01

    This paper aims to describe a computer-assisted English learning system using chatbots and dialogue systems, which allow free conversation outside the topic without limiting the learner's flow of conversation. The evaluation was conducted by 20 experimenters. The performance of the system based on a free conversation by topic was measured by the…

  2. Hanford Site Emergency Alerting System siren testing report

    International Nuclear Information System (INIS)

    Weidner, L.B.

    1997-01-01

    The purpose of the test was to determine the effective coverage of the proposed upgrades to the existing Hanford Site Emergency Alerting System (HSEAS). The upgrades are to enhance the existing HSEAS along the Columbia River from the Vernita Bridge to the White Bluffs Boat Launch, as well as to install a new alerting system in the 400 Area of the Hanford Site. Five siren sites along the Columbia River and two sites in the 400 Area were tested to determine the site locations that will provide the desired coverage.

  3. Hanford Site Tank Waste Remediation System

    International Nuclear Information System (INIS)

    1993-05-01

    The US Department of Energy's (DOE) Hanford Site in southeastern Washington State has the most diverse and largest amount of highly radioactive waste of any site in the US. High-level radioactive waste has been stored in large underground tanks since 1944. A Tank Waste Remediation System Program has been established within the DOE to safely manage and immobilize these wastes in anticipation of permanent disposal in a geologic repository. The Hanford Site Tank Waste Remediation System Waste Management 1993 Symposium Papers and Viewgraphs covered the following topics: Hanford Site Tank Waste Remediation System Overview; Tank Waste Retrieval Issues and Options for their Resolution; Tank Waste Pretreatment - Issues, Alternatives and Strategies for Resolution; Low-Level Waste Disposal - Grout Issue and Alternative Waste Form Technology; A Strategy for Resolving High-Priority Hanford Site Radioactive Waste Storage Tank Safety Issues; Tank Waste Chemistry - A New Understanding of Waste Aging; Recent Results from Characterization of Ferrocyanide Wastes at the Hanford Site; Resolving the Safety Issue for Radioactive Waste Tanks with High Organic Content; Technology to Support Hanford Site Tank Waste Remediation System Objectives

  4. SITE-2, Power Plant Siting, Cost, Environment, Seismic and Meteorological Effects

    International Nuclear Information System (INIS)

    Frigerio, N.A.; Habegger, L.J.; King, R.F.; Hoover, L.J.; Clark, N.A.; Cobian, J.M.

    1977-01-01

    1 - Description of problem or function: SITE2 is designed to (1) screen candidate energy facility sites or areas within an electric utility region, based on the region's physical and socioeconomic attributes, the planned facility's characteristics, and impact assessments, and (2) evaluate the cumulative regional impacts associated with alternate energy supply options and inter-regional energy import/export practices, specifically, comparison of different energy technologies and their regional distribution in clustered or dispersed patterns. 2 - Method of solution: The SITE2 methodology is based on the quantification of three major site-related vectors. A cost vector is determined which identifies site-specific costs, such as transmission costs, cooling costs as related to water availability, and costs of specific controls needed to protect the surrounding environment. An impact vector is also computed for each potential site, using models of health and environmental impacts incurred in areas adjacent to the site. Finally, a site attribute vector is developed which reflects such characteristics as population, seismic conditions, meteorology, land use, and local ecological systems. This vector can be used to eliminate certain sites because of their inability to satisfy specific constraints. These three vectors can be displayed as density maps and combined in a simple overlay approach, similar to that developed by I. L. McHarg in reference 2, to identify candidate sites. Alternatively, the vector elements can be computationally combined into a weighted sum to obtain quantitative indicators of site suitability.
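
    The "weighted sum" in the last sentence is a plain linear score over normalised attribute values; a minimal sketch follows, with attribute names and weights invented for illustration (in SITE2 the inputs would come from the cost, impact and site attribute vectors).

```python
def site_suitability(attributes, weights):
    """Linear suitability indicator over normalised attributes.

    attributes: dict of name -> value in [0, 1], where 1 is most
    favourable; weights: same keys, summing to 1. Names are invented.
    """
    return sum(weights[k] * attributes[k] for k in weights)

candidate = {"transmission_cost": 0.7, "water_availability": 0.9,
             "seismic_risk": 0.6, "population_proximity": 0.8}
weights = {"transmission_cost": 0.3, "water_availability": 0.3,
           "seismic_risk": 0.2, "population_proximity": 0.2}
print(site_suitability(candidate, weights))  # 0.76
```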

  5. SITE-2, Power Plant Siting, Cost, Environment, Seismic and Meteorological Effects

    Energy Technology Data Exchange (ETDEWEB)

    Frigerio, N A [Environmental Impact Studies, Argonne National Laboratory 9700 South Cass Avenue, Argonne, Illinois 60439 (United States); Habegger, L J; King, R F; Hoover, L J [Energy and Environmental Systems Division, Argonne National Laboratory 9700 South Cass Avenue, Argonne, Illinois 60439 (United States); Clark, N A [Applied Mathematics Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States); Cobian, J M [Northwestern University, Evanston, Illinois 60201 (United States)

    1977-08-01

    1 - Description of problem or function: SITE2 is designed to (1) screen candidate energy facility sites or areas within an electric utility region, based on the region's physical and socioeconomic attributes, the planned facility's characteristics, and impact assessments, and (2) evaluate the cumulative regional impacts associated with alternate energy supply options and inter-regional energy import/export practices, specifically, comparison of different energy technologies and their regional distribution in clustered or dispersed patterns. 2 - Method of solution: The SITE2 methodology is based on the quantification of three major site-related vectors. A cost vector is determined which identifies site-specific costs, such as transmission costs, cooling costs as related to water availability, and costs of specific controls needed to protect the surrounding environment. An impact vector is also computed for each potential site, using models of health and environmental impacts incurred in areas adjacent to the site. Finally, a site attribute vector is developed which reflects such characteristics as population, seismic conditions, meteorology, land use, and local ecological systems. This vector can be used to eliminate certain sites because of their inability to satisfy specific constraints. These three vectors can be displayed as density maps and combined in a simple overlay approach, similar to that developed by I. L. McHarg in reference 2, to identify candidate sites. Alternatively, the vector elements can be computationally combined into a weighted sum to obtain quantitative indicators of site suitability.

  6. New computer systems

    International Nuclear Information System (INIS)

    Faerber, G.

    1975-01-01

    Process computers have already become indispensable technical aids for monitoring and automation tasks in nuclear power stations. Yet there are still some problems connected with their use whose elimination should be the main objective in the development of new computer systems. In the paper, some of these problems are summarized, new tendencies in hardware development are outlined, and finally some new system concepts made possible by the hardware development are explained. (orig./AK) [de]

  7. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on the development of quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in the cloud, much like IBM's Quantum Experience today. Clients will be able to remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to keep the client's information secure. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for individual quantum systems. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step towards constructing a framework of blind quantum computation for the hybrid system, which provides a more feasible way towards scalable blind quantum computation.

  8. Grid site testing for ATLAS with HammerCloud

    International Nuclear Information System (INIS)

    Elmsheuser, J; Hönig, F; Legger, F; LLamas, R Medrano; Sciacca, F G; Ster, D van der

    2014-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling virtual organisations (VO) and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test workflows. These new workflows comprise e.g. tests of the ATLAS nightly build system, ATLAS Monte Carlo production system, XRootD federation (FAX) and new site stress test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  9. Grid Site Testing for ATLAS with HammerCloud

    CERN Document Server

    Elmsheuser, J; The ATLAS collaboration; Legger, F; Medrano LLamas, R; Sciacca, G; van der Ster, D

    2014-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test work-flows. These new work-flows comprise e.g. tests of the ATLAS nightly build system, ATLAS MC production system, XRootD federation FAX and new site stress test work-flows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  10. Grid Site Testing for ATLAS with HammerCloud

    CERN Document Server

    Elmsheuser, J; The ATLAS collaboration; Legger, F; Medrano LLamas, R; Sciacca, G; van der Ster, D

    2013-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test work-flows. These new work-flows comprise e.g. tests of the ATLAS nightly build system, ATLAS MC production system, XRootD federation FAX and new site stress test work-flows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  11. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized assistive technology (AT) software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on the personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment to access many different devices that lack assistive preferences. The solution takes advantage of open source hardware, and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, and, after processing, generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and requires no specialized software installation; the user with a disability can therefore rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
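
    On Linux embedded boards with USB device-mode support, this kind of bridge is commonly realised with the kernel's HID gadget function, which exposes a character device that accepts raw 8-byte keyboard reports. The sketch below assumes such a gadget has already been configured (e.g. via configfs) and appears as /dev/hidg0; the paper's actual implementation details are not specified here.

```python
# Sketch: emit one keystroke as a raw USB HID keyboard report from a
# Linux HID gadget device (path and key code are illustrative).
KEY_A = 0x04  # HID usage ID for the letter 'a'

def send_key(usage_id, dev="/dev/hidg0"):
    # Boot-keyboard report layout: [modifiers, reserved, key1..key6].
    press = bytes([0, 0, usage_id, 0, 0, 0, 0, 0])
    release = bytes(8)
    with open(dev, "wb", buffering=0) as hid:
        hid.write(press)     # key down
        hid.write(release)   # key up

# send_key(KEY_A)  # requires a configured HID gadget device
```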

  12. The Application of Computer-Aided Discovery to Spacecraft Site Selection

    Science.gov (United States)

    Pankratius, V.; Blair, D. M.; Gowanlock, M.; Herring, T.

    2015-12-01

    The selection of landing and exploration sites for interplanetary robotic or human missions is a complex task. Historically it has been labor-intensive, with large groups of scientists manually interpreting a planetary surface across a variety of datasets to identify potential sites based on science and engineering constraints. This search process can be lengthy, and excellent sites may get overlooked when the aggregate value of site selection criteria is non-obvious or non-intuitive. As planetary data collection leads to Big Data repositories and a growing set of selection criteria, scientists will face a combinatorial search space explosion that requires scalable, automated assistance. We are currently exploring more general computer-aided discovery techniques in the context of planetary surface deformation phenomena that can lend themselves to application in the landing site search problem. In particular, we are developing a general software framework that addresses key difficulties: characterizing a given phenomenon or site based on data gathered from multiple instruments (e.g. radar interferometry, gravity, thermal maps, or GPS time series), and examining a variety of possible workflows whose individual configurations are optimized to isolate different features. The framework allows algorithmic pipelines and hypothesized models to be perturbed or permuted automatically within well-defined bounds established by the scientist. For example, even simple choices for outlier and noise handling or data interpolation can drastically affect the detectability of certain features. These techniques aim to automate repetitive tasks that scientists routinely perform in exploratory analysis, and make them more efficient and scalable by executing them in parallel in the cloud. We also explore ways in which machine learning can be combined with human feedback to prune the search space and converge to desirable results. Acknowledgements: We acknowledge support from NASA AIST

  13. Introduction of library administration system using an office computer in the smallscale library

    Energy Technology Data Exchange (ETDEWEB)

    Itabashi, Keizo; Ishikawa, Masashi

    1984-01-01

    A Research Information Center was established in the Fusion Research Center at the Naka site as a new section of the Department of Technical Information of the Japan Atomic Energy Research Institute. A library materials management system utilizing an office computer was introduced to provide good services. The system is a total system centered on counter services (excluding purchasing operations), and the materials served are books, reports, journals and pamphlets. The system has produced good effects in many respects, e.g. a significantly easier inventory of library materials and the complete elimination of users' handwriting when borrowing materials, through the use of an optical character recognition (OCR) hand scanner. These improvements have given the library a better image.

  14. The practical use of computer graphics techniques for site characterization

    International Nuclear Information System (INIS)

    Tencer, B.; Newell, J.C.

    1982-01-01

    In this paper the authors describe the approach utilized by Roy F. Weston, Inc. (WESTON) to analyze and characterize data for a specific site, and the computerized graphical techniques developed to display site characterization data. These techniques reduce massive amounts of tabular data to a limited number of graphics easily understood by both the public and policy-level decision makers. First, they describe the general design of the system; then the application of this system to a low-level radioactive waste site, followed by a description of an application to an uncontrolled hazardous waste site

  15. Computer-based control systems of nuclear power plants

    International Nuclear Information System (INIS)

    Kalashnikov, V.K.; Shugam, R.A.; Ol'shevsky, Yu.N.

    1975-01-01

    Computer-based control systems of nuclear power plants may be classified into those using computers for data acquisition only, those using computers for data acquisition and data processing, and those using computers for process control. In the present paper a brief review is given of the functions the above-mentioned systems perform, their applications in different nuclear power plants, and some of their characteristics. The trend towards hierarchical systems using redundant control computers already becomes clear on consideration of the control systems applied in the Canadian nuclear power plants, which were among the first equipped with process computers. The control system now under development for the large Soviet reactors of the WWER type will also be based on the use of control computers. The part of the system concerned with controlling the reactor assembly is described in detail

  16. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Full Text Available Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds for utilizing a cluster of Intel's many-integrated-core (MIC) processors or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement of general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation can be similar to or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
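
    A toy dispatcher conveys the kind of decision such an elastic architecture would automate: routing each task to the backend whose strengths match its profile. The task attributes and routing rules below are invented for illustration; they are not from the paper.

```python
# Illustrative backend-selection sketch for heterogeneous spatiotemporal tasks.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    data_gb: float            # input volume
    parallel_fraction: float  # share of work that is data-parallel
    energy_sensitive: bool

def choose_backend(task: Task) -> str:
    if task.energy_sensitive and task.parallel_fraction > 0.9:
        return "FPGA"         # similar throughput at better energy efficiency
    if task.parallel_fraction > 0.9:
        return "GPU cluster"  # massively data-parallel kernels
    if task.data_gb > 1000:
        return "Spark"        # big-data workloads dominated by I/O
    return "MIC/Xeon Phi"     # moderately parallel, branch-heavy codes

for t in [Task("interpolation", 20, 0.95, False),
          Task("trajectory join", 5000, 0.5, False),
          Task("streaming filter", 50, 0.95, True)]:
    print(t.name, "->", choose_backend(t))
```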

  17. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds for utilizing a cluster of Intel's many-integrated-core (MIC) processors or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement of general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation can be similar to or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  18. Belle computing system

    International Nuclear Information System (INIS)

    Adachi, Ichiro; Hibino, Taisuke; Hinz, Luc; Itoh, Ryosuke; Katayama, Nobu; Nishida, Shohei; Ronga, Frederic; Tsukamoto, Toshifumi; Yokoyama, Masahiko

    2004-01-01

    We describe the present status of the computing system of the Belle experiment at the KEKB e+e- asymmetric-energy collider. So far, we have logged more than 160 fb-1 of data, corresponding to the world's largest data sample of 170M BB-bar pairs in the Υ(4S) energy region. A large amount of event data has to be processed to produce analysis event samples in a timely fashion. In addition, Monte Carlo events have to be created to control systematic errors accurately. This requires stable and efficient usage of computing resources. Here, we review our computing model and then describe how we efficiently carry out DST/MC production in our system

  19. A data acquisition system based on a personal computer

    International Nuclear Information System (INIS)

    Omata, K.; Fujita, Y.; Yoshikawa, N.; Sekiguchi, M.; Shida, Y.

    1991-07-01

    A versatile and flexible data acquisition system, KODAQ (Kakuken Online Data AcQuisition system), has been developed. The system runs with CAMAC on the most popular Japanese personal computer, the NEC PC9801, which is similar to the IBM PC/AT. The system is designed to make it easy to set up data acquisition for various kinds of nuclear-physics experiments. (author)

  20. Petascale Computational Systems

    OpenAIRE

    Bell, Gordon; Gray, Jim; Szalay, Alex

    2007-01-01

    Computational science is becoming data-intensive. Supercomputers must be balanced systems, not just CPU farms but also petascale I/O and networking arrays. Anyone building CyberInfrastructure should allocate resources to support a balanced Tier-1 through Tier-3 design.

  1. The Northeast Utilities generic plant computer system

    International Nuclear Information System (INIS)

    Spitzner, K.J.

    1980-01-01

    A variety of computer manufacturers' equipment monitors plant systems in Northeast Utilities' (NU) nuclear and fossil power plants. The hardware configuration and the application software in each of these systems are essentially one of a kind. Over the next few years these computer systems will be replaced by the NU Generic System, whose prototype is now under development for Millstone III, an 1150 MWe pressurized water reactor plant being constructed in Waterford, Connecticut. This paper discusses the Millstone III computer system design, concentrating on the special problems inherent in a distributed system configuration such as this. (auth)

  2. Dynamic simulation of hvdc transmission systems on digital computers

    Energy Technology Data Exchange (ETDEWEB)

    Hingorani, N G; Hay, J L; Crosbie, R E

    1966-05-01

    A digital computer technique is based on the fact that the operation of an hvdc converter consists of similar consecutive processes, each having features common to all. Each bridge converter of an hvdc system is represented by a central process, and repetitive use of the latter simulates continuous converter operation. This technique may be employed to obtain the waveforms of transient or steady-state voltages and currents anywhere in the dc system. To illustrate the method, an hvdc link is considered; the link, which connects two independent ac systems, comprises two converters with their control systems and a dc transmission line. As an example, the transient behavior of the system is examined following changes in the current settings of the control system.
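
    In highly simplified form, the repetitive-process idea can be illustrated numerically: below, one 60-degree conduction interval of an ideal six-pulse bridge serves as the "central process", and repeating it reconstructs the continuous DC-side waveform. Commutation overlap and the converter controls, which the actual technique handles, are deliberately omitted, and all values are per-unit placeholders.

```python
# Simplified numeric illustration of the repetitive-process idea for an
# ideal six-pulse bridge (no commutation overlap, no controls).
import numpy as np

V_LL_PEAK = 1.0          # per-unit peak line-to-line voltage
ALPHA = np.radians(15)   # firing-delay angle

def central_process(samples=100):
    # One conduction interval: theta spans a 60-degree window shifted by the
    # firing angle; the output is the instantaneous DC-side voltage.
    theta = np.linspace(-np.pi/6 + ALPHA, np.pi/6 + ALPHA, samples)
    return V_LL_PEAK * np.cos(theta)

# Repetitive use of the central process simulates continuous operation.
vd = np.concatenate([central_process() for _ in range(6)])  # one full cycle
print(f"mean DC voltage: {vd.mean():.3f} pu "
      f"(ideal: {3/np.pi * V_LL_PEAK * np.cos(ALPHA):.3f} pu)")
```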

  3. 2nd International Doctoral Symposium on Applied Computation and Security Systems

    CERN Document Server

    Cortesi, Agostino; Saeed, Khalid; Chaki, Nabendu

    2016-01-01

    The book contains the extended version of the works that have been presented and discussed in the Second International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2015) held during May 23-25, 2015 in Kolkata, India. The symposium has been jointly organized by the AGH University of Science & Technology, Cracow, Poland; Ca’ Foscari University, Venice, Italy and University of Calcutta, India. The book is divided into volumes and presents dissertation works in the areas of Image Processing, Biometrics-based Authentication, Soft Computing, Data Mining, Next Generation Networking and Network Security, Remote Healthcare, Communications, Embedded Systems, Software Engineering and Service Engineering.

  4. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems (FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on future computer and control systems from researchers all around the world.

  5. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems (FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on future computer and control systems from researchers all around the world.

  6. Framework for computer-aided systems design

    International Nuclear Information System (INIS)

    Esselman, W.H.

    1992-01-01

    Advanced computer technology, analytical methods, graphics capabilities, and expert systems are contributing to significant changes in the design process, and continued progress is expected. Achieving the ultimate benefits of these computer-based design tools depends on successful research and development on a number of key issues, and a fundamental understanding of the design process is a prerequisite to developing them. In this paper a hierarchical systems design approach is described, and methods by which computers can assist the designer are examined. A framework is presented for developing computer-based design tools for power plant design. These tools include expert experience bases, tutorials, decision-making aids, and tools to develop the requirements, constraints, and interactions among subsystems and components. Early consideration of the functional tasks is encouraged. How to acquire an expert's experience base is a fundamental research problem. Computer-based guidance should be provided in a manner that supports the creativity, heuristic approaches, decision making, and meticulousness of a good designer

  7. Expert-systems and computer-based industrial systems

    International Nuclear Information System (INIS)

    Terrien, J.F.

    1987-01-01

    Framatome makes wide use of expert systems, computer-assisted engineering, production management and personnel training. It has set up separate business units and subsidiaries, and also holds interests in other companies with the relevant expertise. Five examples of the products and services available in these areas are discussed: applied artificial intelligence and expert systems, integrated computer-aided design and engineering, structural analysis, computer-related products and services, and document management systems. The structure of the companies involved and the work they are doing are discussed. (UK)

  8. Computer versus paper system for recognition and management of sepsis in surgical intensive care.

    Science.gov (United States)

    Croft, Chasen A; Moore, Frederick A; Efron, Philip A; Marker, Peggy S; Gabrielli, Andrea; Westhoff, Lynn S; Lottenberg, Lawrence; Jordan, Janeen; Klink, Victoria; Sailors, R Matthew; McKinley, Bruce A

    2014-02-01

    A system to provide surveillance, diagnosis, and protocolized management of surgical intensive care unit (SICU) sepsis was undertaken as a performance improvement project. A sepsis management system was implemented for SICU patients, first on paper and then as a computerized system. The hypothesis was that the computerized system would be associated with improved process and outcomes. The system was designed to provide early recognition and guide patient-specific management of sepsis, including (1) a modified early warning signs-sepsis recognition score (MEWS-SRS; a summative point score over ranges of vital signs, mental status, and white blood cell count, assessed every 4 hours) by the bedside nurse; (2) suspected-site assessment (vascular access, lung, abdomen, urinary tract, soft tissue, other) at the bedside by a physician or physician extender; and (3) a sepsis management protocol (replicable, point-of-care decisions) at the bedside by nurse, physician, and extender. Sepsis severity was defined using standard criteria. In January to May 2012, the paper system was used to manage 77 consecutive sepsis encounters (3.9 ± 0.5 cases per week) in 65 patients (77% male; age, 53 ± 2 years). In June to December 2012, the computerized system was used to manage 132 consecutive sepsis encounters (4.4 ± 0.4 cases per week) in 119 patients (63% male; age, 58 ± 2 years). MEWS-SRS elicited 683 site assessments, of which 201 resulted in a sepsis diagnosis and protocol management. The predominant site of infection was the abdomen (paper, 58%; computer, 53%). Recognition of early sepsis tended to occur more frequently with the computerized system (paper, 23%; computer, 35%). The hospital mortality rate for surgical ICU sepsis (paper, 20%; computer, 14%) was lower with the computerized system. A computerized sepsis management system improves care process and outcome. Early sepsis is recognized and managed with greater frequency compared with severe sepsis or septic shock. The system
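
    To make the "summative point score over ranges" concrete, here is a hypothetical scoring function in the spirit of MEWS-SRS. The bands, point values, and trigger logic are invented for illustration and are NOT the study's clinical criteria.

```python
# Hypothetical summative point score over vital-sign ranges (illustrative
# bands and points only; not the study's validated clinical criteria).
def band_score(value, bands):
    """Return the points of the first band (low, high, points) containing value."""
    for low, high, points in bands:
        if low <= value < high:
            return points
    return 0

HEART_RATE_BANDS = [(0, 40, 2), (40, 51, 1), (51, 100, 0), (100, 111, 1),
                    (111, 130, 2), (130, 999, 3)]
RESP_RATE_BANDS  = [(0, 9, 2), (9, 15, 0), (15, 21, 1), (21, 30, 2), (30, 99, 3)]
WBC_BANDS        = [(0, 4, 2), (4, 12, 0), (12, 999, 2)]   # 10^3 cells/uL

def mews_like_score(hr, rr, wbc, altered_mental_status):
    return (band_score(hr, HEART_RATE_BANDS)
            + band_score(rr, RESP_RATE_BANDS)
            + band_score(wbc, WBC_BANDS)
            + (3 if altered_mental_status else 0))

# Recomputed every 4 hours from the bedside nurse's charting; a score above
# a trigger threshold would prompt the suspected-site assessment step.
print(mews_like_score(hr=118, rr=24, wbc=14.5, altered_mental_status=False))
```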

  9. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S.; Hoe, James C.

    2014-01-01

    To date, the most common simulators of computer systems are software-based and run on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems, and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  10. LANL environmental restoration site ranking system: System description. Final report

    International Nuclear Information System (INIS)

    Merkhofer, L.; Kann, A.; Voth, M.

    1992-01-01

    The basic structure of the LANL Environmental Restoration (ER) Site Ranking System and its use are described in this document. A related document, Instructions for Generating Inputs for the LANL ER Site Ranking System, contains detailed descriptions of the methods by which necessary inputs for the system will be generated. LANL has long recognized the need to provide a consistent basis for comparing the risks and other adverse consequences associated with the various waste problems at the Lab. The LANL ER Site Ranking System is being developed to help address this need. The specific purpose of the system is to help improve, defend, and explain prioritization decisions at the Potential Release Site (PRS) and Operable Unit (OU) level. The precise relationship of the Site Ranking System to the planning and overall budget processes is yet to be determined, as the system is still evolving. Generally speaking, the Site Ranking System will be used as a decision aid. That is, the system will be used to aid in the planning and budgetary decision-making process. It will never be used alone to make decisions. Like all models, the system can provide only a partial and approximate accounting of the factors important to budget and planning decisions. Decision makers at LANL will have to consider factors outside of the formal system when making final choices. Some of these other factors are regulatory requirements, DOE policy, and public concern. The main value of the site ranking system, therefore, is not the precise numbers it generates, but rather the general insights it provides

  11. LANL environmental restoration site ranking system: System description. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Merkhofer, L.; Kann, A.; Voth, M. [Applied Decision Analysis, Inc., Menlo Park, CA (United States)

    1992-10-13

    The basic structure of the LANL Environmental Restoration (ER) Site Ranking System and its use are described in this document. A related document, Instructions for Generating Inputs for the LANL ER Site Ranking System, contains detailed descriptions of the methods by which necessary inputs for the system will be generated. LANL has long recognized the need to provide a consistent basis for comparing the risks and other adverse consequences associated with the various waste problems at the Lab. The LANL ER Site Ranking System is being developed to help address this need. The specific purpose of the system is to help improve, defend, and explain prioritization decisions at the Potential Release Site (PRS) and Operable Unit (OU) level. The precise relationship of the Site Ranking System to the planning and overall budget processes is yet to be determined, as the system is still evolving. Generally speaking, the Site Ranking System will be used as a decision aid. That is, the system will be used to aid in the planning and budgetary decision-making process. It will never be used alone to make decisions. Like all models, the system can provide only a partial and approximate accounting of the factors important to budget and planning decisions. Decision makers at LANL will have to consider factors outside of the formal system when making final choices. Some of these other factors are regulatory requirements, DOE policy, and public concern. The main value of the site ranking system, therefore, is not the precise numbers it generates, but rather the general insights it provides.

  12. The feasibility of using computer graphics in environmental evaluations : interim report, documenting historic site locations using computer graphics.

    Science.gov (United States)

    1981-01-01

    This report describes a method for locating historic site information using a computer graphics program. If adopted for use by the Virginia Department of Highways and Transportation, this method should significantly reduce the time now required to de...

  13. On the implementation of the Ford | Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Full Text Available Network and directed-graph optimization algorithms find broad application in practical tasks. However, with the large-scale introduction of information technologies into human activity, the requirements on input data volumes and solution retrieval rates have grown. Although a large number of algorithms for various models of computers and computing systems have been studied and implemented, solving key optimization problems at realistic problem sizes remains difficult. The search for new, more efficient computing structures, as well as the updating of known algorithms, is therefore of great current interest. This work considers an implementation of the maximum-flow algorithm on a directed graph for the Multiple Instruction and Single Data (MISD) computer system developed at BMSTU. The key feature of this architecture is deep hardware support for operations over sets and data structures. Storage and access functions are realized in a specialized structure-processing processor (SP), which can perform operations such as add, delete, search, intersect, complement, and merge at the hardware level. The advantage of such a system is the possibility of executing the set-access parts of computing tasks in parallel with the arithmetic and logical processing of information. Previous works presented the general principles of organizing the computing process and the features of programs implemented in the MISD system, described the structure and operating principles of the structure-processing processor, showed the general principles of solving graph problems in such a system, and experimentally studied the efficiency of the resulting algorithms. This work gives the command formats of the SP processor, offers a technique for updating the algorithms implemented in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
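
    For reference, the sequential form of the algorithm in question is compact. The sketch below is a standard Ford-Fulkerson implementation with breadth-first search of augmenting paths (the Edmonds-Karp variant) on an adjacency-matrix graph, not the MISD/SP hardware version the paper develops.

```python
# Ford-Fulkerson maximum flow with BFS augmenting paths (Edmonds-Karp).
from collections import deque

def max_flow(capacity, source, sink):
    n = len(capacity)
    residual = [row[:] for row in capacity]   # residual capacities
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:                # no augmenting path: done
            return flow
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Example: max flow from node 0 to node 3 in a 4-node directed network.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # -> 4
```

    The set-oriented steps, maintaining the BFS frontier and the visited set, are exactly the kind of operations the abstract says the SP processor executes in hardware, in parallel with the arithmetic of the flow updates.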

  14. Site-specific dissociation dynamics of H2/D2 on Ag(111) and Co(0001) and the validity of the site-averaging model

    International Nuclear Information System (INIS)

    Hu, Xixi; Jiang, Bin; Xie, Daiqian; Guo, Hua

    2015-01-01

    Dissociative chemisorption of polyatomic molecules on metal surfaces involves high-dimensional dynamics, of which quantum mechanical treatments are computationally challenging. A promising reduced-dimensional approach approximates the full-dimensional dynamics by a weighted average of fixed-site results. To examine the performance of this site-averaging model, we investigate two distinct reactions, namely, hydrogen dissociation on Co(0001) and Ag(111), using accurate first principles potential energy surfaces (PESs). The former has a very low barrier of ∼0.05 eV while the latter is highly activated with a barrier of ∼1.15 eV. These two systems allow the investigation of not only site-specific dynamical behaviors but also the validity of the site-averaging model. It is found that the reactivity is not only controlled by the barrier height but also by the topography of the PES. Moreover, the agreement between the site-averaged and full-dimensional results is much better on Ag(111), though quantitative in neither system. Further quasi-classical trajectory calculations showed that the deviations can be attributed to dynamical steering effects, which are present in both reactions at all energies
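
    The site-averaging model under test is simple to state: the full-dimensional dissociation probability is approximated by a weighted sum of fixed-site reaction probabilities. The sketch below illustrates the bookkeeping with invented weights and barrier heights; the sigmoid curves merely stand in for fixed-site probabilities that would come from quantum dynamics on the PES.

```python
# Minimal sketch of the site-averaging approximation (illustrative numbers).
import numpy as np

def sticking_probability(E, barrier, width=0.1):
    # Placeholder fixed-site reaction probability: a smooth step rising
    # near the site-specific barrier height (all values in eV).
    return 1.0 / (1.0 + np.exp(-(E - barrier) / width))

# Example high-symmetry sites on an fcc(111) surface with assumed barriers.
sites = {"top":    {"weight": 0.25, "barrier": 1.15},
         "bridge": {"weight": 0.50, "barrier": 1.30},
         "hollow": {"weight": 0.25, "barrier": 1.45}}

def site_averaged_probability(E):
    return sum(s["weight"] * sticking_probability(E, s["barrier"])
               for s in sites.values())

for E in (1.0, 1.2, 1.4):
    print(f"E = {E:.1f} eV  ->  P0 ~ {site_averaged_probability(E):.3f}")
```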

  15. Study on irradiation effects of nucleus electromagnetic pulse on single chip computer system

    International Nuclear Information System (INIS)

    Hou Minsheng; Liu Shanghe; Wang Shuping

    2001-01-01

    Intense electromagnetic pulses, namely the nuclear electromagnetic pulse (NEMP), lightning electromagnetic pulse (LEMP) and high-power microwave (HPM), can disturb and destroy single-chip computer systems. To study this issue, the authors performed irradiation experiments with NEMPs generated by a gigahertz transverse electromagnetic (GTEM) cell. The experiments show that shutdown, restarting, and communication errors of the single-chip microcomputer system occurred when it was irradiated by the NEMPs. Based on the experiments, the causes of these effects on the single-chip microcomputer system are discussed

  16. Gamma spectrometric system based on the personal computer Pravetz-83

    International Nuclear Information System (INIS)

    Yanakiev, K.; Grigorov, T.; Vuchkov, M.

    1985-01-01

    A gamma spectrometric system based on the personal microcomputer Pravets-85 is described. The analog modules are NIM standard. ADC data are stored in the memory of the computer via a DMA channel, and real-time data processing is possible. The results of a series of tests indicate that the performance of the system is comparable with that of commercially available computerized spectrometers from Ortec and Canberra

  17. Development of the JFT-2M data analysis software system on the mainframe computer

    International Nuclear Information System (INIS)

    Matsuda, Toshiaki; Amagai, Akira; Suda, Shuji; Maemura, Katsumi; Hata, Ken-ichiro.

    1990-11-01

    We developed a software system on the FACOM mainframe computer to analyze JFT-2M experimental data archived by the JFT-2M data acquisition system, which allows the CPU load of the data acquisition system to be reduced and distributed. JFT-2M experimental data can then be analyzed on the mainframe using complicated computational codes operating on raw data, such as equilibrium calculation and transport analysis, as well as useful software packages like the SAS statistics package. (author)

  18. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  19. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  20. Computer system for nuclear power plant parameter display

    International Nuclear Information System (INIS)

    Stritar, A.; Klobuchar, M.

    1990-01-01

    A computer system for the efficient, cheap and simple presentation of data on the screen of a personal computer is described. The display is in alphanumeric or graphical form. The system can be used as the man-machine interface in the process monitoring system of a nuclear power plant, and it represents the third level of the new process computer system of the Krsko Nuclear Power Plant. (author)

  1. Software Applications on the Peregrine System | High-Performance Computing

    Science.gov (United States)

    Software catalog entries recovered from a flattened table: General Algebraic Modeling System (GAMS): high-level modeling system for mathematical programming; Gurobi Optimizer: solver for mathematical programming; LAMMPS: chemistry; R Statistical Computing Environment: statistics and analysis.

  2. USDA soil classification system dictates site surface management

    International Nuclear Information System (INIS)

    Bowmer, W.J.

    1985-01-01

    Success or failure of site surface management practices greatly affects long-term site stability. The US Department of Agriculture (USDA) soil classification system best documents those parameters which control the success of installed practices for managing both erosion and surface drainage. The USDA system concentrates on soil characteristics in the upper three meters of the surface that support the associated flora both physically and physiologically. The USDA soil survey first identifies soil series based on detailed characteristics that are related to production potential. Using the production potential, land use capability classes are developed. Capability classes reveal the highest and best agronomic use for the site. Lower number classes are considered arable while higher number classes are best suited for grazing agriculture. Application of ecological principles based on the USDA soil survey reveals the current state of the site relative to its ecological potential. To assure success, site management practices must be chosen that are compatible with both production capability and current state of the site

  3. 4th INNS Symposia Series on Computational Intelligence in Information Systems

    CERN Document Server

    Au, Thien

    2015-01-01

    This book constitutes the refereed proceedings of the Fourth International Neural Network Symposia series on Computational Intelligence in Information Systems, INNS-CIIS 2014, held in Bandar Seri Begawan, Brunei in November 2014. INNS-CIIS aims to provide a platform for researchers to exchange the latest ideas and present the most current research advances in general areas related to computational intelligence and its applications in various domains. The 34 revised full papers presented in this book have been carefully reviewed and selected from 72 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.  

  4. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide across more than 50 sites. The operation of the system requires stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and its capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with a description of the monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact on the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites for conducting workflows, in order to maximize workflow efficiency. The performance of the sites against these tests during the first years of LHC running is also reviewed.

  5. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005. From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems. These conflicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems. This book brings together a group of outsta

  6. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images.

    Science.gov (United States)

    Chai, Hanchao; Guo, Yi; Wang, Yuanyuan; Zhou, Guohui

    2017-12-04

    An adrenal tumor disturbs the secreting function of adrenocortical cells, leading to many diseases, and different kinds of adrenal tumors require different therapeutic schedules. In practical diagnosis, judging the tumor type by reading hundreds of CT images relies heavily on the doctor's experience. This paper proposes an automatic computer-aided analysis method for adrenal tumor detection and classification, consisting of automatic segmentation algorithms, feature extraction, and classification algorithms. These algorithms were integrated into a system operated through a graphical interface built with the MATLAB graphical user interface (GUI) tools. The accuracy of the automatic computer-aided segmentation and classification reached 90% on 436 CT images. The experiments proved the stability and reliability of this automatic computer-aided analytic system.
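
    The segment, extract features, classify flow described above is a standard pattern; the sketch below reproduces it with generic tools on synthetic stand-in data, since the abstract does not give the MATLAB algorithms. The feature set and the choice of an SVM classifier are illustrative assumptions.

```python
# Schematic segment -> extract features -> classify pipeline on synthetic
# arrays standing in for segmented CT tumor regions (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def extract_features(region):
    # Simple intensity descriptors of a segmented region (illustrative).
    return [region.mean(), region.std(), (region > region.mean()).sum()]

# Synthetic "segmented regions" for two hypothetical tumor classes.
regions = [rng.normal(loc=cls * 20 + 40, scale=5 + cls * 3, size=(32, 32))
           for cls in (0, 1) for _ in range(100)]
labels = [cls for cls in (0, 1) for _ in range(100)]
X = np.array([extract_features(r) for r in regions])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.0%}")
```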

  7. The interactive on-site inspection system: An information management system to support arms control inspections

    Energy Technology Data Exchange (ETDEWEB)

    DeLand, S.M.; Widney, T.W.; Horak, K.E.; Caudell, R.B.; Grose, E.M.

    1996-12-01

    The increasing use of on-site inspection (OSI) to meet the nation's obligations under recently signed treaties requires the nation to manage a variety of inspection requirements. This document describes a prototype automated system to assist in the preparation and management of these inspections.

  8. Job monitoring on DIRAC for Belle II distributed computing

    Science.gov (United States)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, in which information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables that indicate issues. These variables are chosen carefully based on our experience, then visualized. As a result, we are able to detect issues effectively. Finally, we discuss future development toward automating log analysis, issue notification, and the disabling of problematic sites.
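
    A minimal sketch of the passive-monitoring step, summarizing job states pulled from the workload-management database and flagging sites with a high failure fraction, is given below. The row layout, site names, and alert threshold are invented for illustration.

```python
# Hedged sketch: summarize per-site job states and flag likely issues.
FAILURE_ALERT = 0.25   # assumed alert threshold on the failed fraction

def summarize(rows):
    """rows: iterable of (site, status) tuples fetched from the job table."""
    stats = {}
    for site, status in rows:
        done, failed = stats.get(site, (0, 0))
        if status == "Done":
            stats[site] = (done + 1, failed)
        elif status == "Failed":
            stats[site] = (done, failed + 1)
    return stats

def flag_issues(stats):
    for site, (done, failed) in sorted(stats.items()):
        total = done + failed
        frac = failed / total if total else 0.0
        marker = "ISSUE" if frac > FAILURE_ALERT else "ok"
        print(f"{site:16s} done={done:4d} failed={failed:4d} [{marker}]")

rows = ([("LCG.KEK.jp", "Done")] * 90 + [("LCG.KEK.jp", "Failed")] * 10
        + [("LCG.Example.org", "Done")] * 30 + [("LCG.Example.org", "Failed")] * 20)
flag_issues(summarize(rows))
```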

  9. Computation of Asteroid Proper Elements on the Grid

    Science.gov (United States)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure for gridifying the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase in observational data expected from the next-generation all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose, and the average time for catalog updates is significantly shorter than with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  10. Measuring the impact of different brands of computer systems on the clinical consultation: a pilot study

    Directory of Open Access Journals (Sweden)

    Charlotte Refsum

    2008-07-01

    Conclusion This methodological development improves the reliability of our method for measuring the impact of different computer systems on the GP consultation. UAR added more objectivity to the observation of doctor-computer interactions. If larger studies were to reproduce the differences between computer systems demonstrated in this pilot, it might be possible to make objective comparisons between systems.

  11. International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems

    CERN Document Server

    Bhaskar, M; Panigrahi, Bijaya; Das, Swagatam

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems (ICAIECES-2015), held at Velammal Engineering College (VEC), Chennai, India during 22-23 April 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the fields of communication, computing and power technologies.

  12. Plant computer system in nuclear power station

    International Nuclear Information System (INIS)

    Kato, Shinji; Fukuchi, Hiroshi

    1991-01-01

    In nuclear power stations, centrally concentrated monitoring has been adopted, and large quantities of information and operational equipment are concentrated in central control rooms, which have therefore become the important point of communication between plant and operators. Recently, owing to increased unit capacity, strengthened safety requirements, man-machine interface problems and so on, it has become important to concentrate information and to automate and simplify machinery and equipment in order to improve the operational environment, reliability and so on. Concerning the relation between nuclear power stations and computer systems, to which attention has recently been paid as the man-machine interface, the example of the Tsuruga Power Station of the Japan Atomic Power Co. is presented. The No. 2 plant at Tsuruga is a domestically built standardized PWR plant with 1160 MWe output, and the computer system adopted there is explained accordingly. The fundamental concept of the central control board, the process computer system, the design policy, the basic system configuration, reliability and maintenance, the CRT displays, and the computer system of the No. 1 plant (a 357 MW BWR) are reported. (K.I.)

  13. Research on integrated simulation of fluid-structure system by computation science techniques

    International Nuclear Information System (INIS)

    Yamaguchi, Akira

    1996-01-01

    At the Power Reactor and Nuclear Fuel Development Corporation, research on the integrated simulation of fluid-structure systems by computational science techniques has been carried out. Its aim is to substitute computational science techniques for the verification of plant systems, which has depended on large-scale experiments, thereby reducing development costs and optimizing FBR systems. For this purpose, it is necessary to establish technology for integrally and accurately analyzing complicated phenomena (simulation technology), technology for applying it to large-scale problems (speed-up technology), and technology for assuring the reliability of analysis results when simulation technology is used in the licensing of FBRs (verification technology). The simulation of fluid-structure interaction, heat-flow simulation in spaces with complicated geometry, and the related technologies are explained. As uses of computational science techniques, the elucidation of phenomena by numerical experiment and numerical simulation as a substitute for tests are discussed. (K.I.)

  14. Computer system operation

    International Nuclear Information System (INIS)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A.

    1993-12-01

    The report describes the operation and troubleshooting of the main computers and KAERINet. The results of the project are as follows: 1. Operation and troubleshooting of the main computer systems (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. Operation and troubleshooting of KAERINet (PC-to-host connection, host-to-host connection, file transfer, electronic mail, X.25, CATV, etc.). 3. Development of applications: the Electronic Document Approval and Delivery System, and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  15. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

    The report describes the operation and troubleshooting of the main computers and KAERINet. The results of the project are as follows: 1. Operation and troubleshooting of the main computer systems (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. Operation and troubleshooting of KAERINet (PC-to-host connection, host-to-host connection, file transfer, electronic mail, X.25, CATV, etc.). 3. Development of applications: the Electronic Document Approval and Delivery System, and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  16. Application of the Integrated Site and Environment Data Management System for LILW Disposal Site

    International Nuclear Information System (INIS)

    Lee, Ji Hoon; Lee, Eun Yong; Kim, Chang Lak

    2007-01-01

    During the last five years, the Site Information and Total Environmental data management System (SITES) has been developed. SITES is an integrated program for overall data acquisition, environmental monitoring, and safety analysis. It is composed of three main modules: a site database system (SECURE), a safety assessment system (SAINT), and an environmental monitoring system (SUDAL). In general, for the safe management of a radioactive waste repository, information on the site environment should be collected and managed systematically from the initial site survey. To this end, the SECURE module manages data for site characterization, environmental information, radiological environmental information, etc. The purpose of the SAINT module is to apply and analyze the data from SECURE. SUDAL was developed for environmental monitoring of the radioactive waste repository. Separately, it is being prepared for opening to the public to offer partial access to the information

  17. Web-based computer-aided-diagnosis (CAD) system for bone age assessment (BAA) of children

    Science.gov (United States)

    Zhang, Aifeng; Uyeda, Joshua; Tsao, Sinchai; Ma, Kevin; Vachon, Linda A.; Liu, Brent J.; Huang, H. K.

    2008-03-01

    Bone age assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on a left hand and wrist radiograph. The most commonly used standard, the Greulich and Pyle (G&P) hand atlas, was developed 50 years ago and is based exclusively on a Caucasian population. Moreover, inter- and intra-observer discrepancies when using this method create a need for an objective and automatic BAA method. A digital hand atlas (DHA) has been collected with 1,400 hand images of normal children of Asian, African American, Caucasian and Hispanic descent. Based on the DHA, a fully automatic, objective computer-aided-diagnosis (CAD) method was developed and adapted to specific populations. To bring the DHA and CAD method into the clinical environment as a useful tool for assisting radiologists to achieve higher accuracy in BAA, a web-based system with a direct connection to a clinical site is designed as a novel clinical implementation approach for online and real-time BAA. The core of the system, a CAD server, receives the image from the clinical site, processes it with the CAD method and, finally, generates a report. A web service publishes the results, and radiologists at the clinical site can review them online within minutes. This prototype can be easily extended to multiple clinical sites and will provide the foundation for broader use of the CAD system for BAA.
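
    The receive, process, publish loop of such a CAD server can be sketched with nothing more than the standard library. The endpoint, port, and one-line "CAD method" below are placeholders; the authors' actual server, image handling, and report format are not described in the abstract.

```python
# Toy sketch of the online BAA flow: a clinical site POSTs a hand radiograph,
# the CAD server runs the assessment, and a report is returned for review.
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_cad_method(image_bytes):
    # Placeholder for the DHA-based bone-age assessment computation.
    return {"bone_age_years": 10.5, "atlas": "Digital Hand Atlas"}

class BAAHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        image = self.rfile.read(length)           # radiograph from clinical site
        body = str(run_cad_method(image)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                    # radiologist reviews online

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), BAAHandler).serve_forever()
```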

  18. Comparison and Evaluation of Large-Scale and On-Site Recycling Systems for Food Waste via Life Cycle Cost Analysis

    Directory of Open Access Journals (Sweden)

    Kyoung Hee Lee

    2017-11-01

    Full Text Available The purpose of this study was to evaluate the cost-benefit of an on-site food waste recycling system using life-cycle cost analysis, and to compare it with a large-scale treatment system. For accurate evaluation, the cost-benefit analysis was conducted with respect to both local governments and residents, and qualitative environmental improvement effects were quantified. For local governments, the analysis showed that, when the large-scale treatment system was replaced with the on-site recycling system, there was a significant cost reduction from the initial stage owing to reduced investment, maintenance, and food wastewater treatment costs. For residents, the cost incurred from using the on-site recycling system was larger than that of using the large-scale treatment system, due to the cost of producing and installing the on-site treatment facilities at the initial stage. However, the analysis showed that, with continuous benefits such as greenhouse gas emission reduction, compost utilization, and food wastewater reduction, a net cost reduction would be obtained after 6 years of operating the on-site recycling system. Therefore, local governments and residents should consider introducing an on-site food waste recycling system when replacing an old treatment system or establishing a new one.
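
    The break-even behaviour reported above can be illustrated with a minimal discounted cash-flow computation. All monetary figures and the discount rate below are invented; they are chosen only so that the on-site option, which carries a high initial cost but yearly benefits, overtakes the large-scale option around year 6, mirroring the study's qualitative finding.

```python
# Illustrative life-cycle cost comparison: cumulative present value of
# (annual cost - annual benefit), plus initial cost. Numbers are invented.
def cumulative_lcc(initial_cost, annual_cost, annual_benefit, rate, years):
    total = initial_cost
    series = []
    for t in range(1, years + 1):
        total += (annual_cost - annual_benefit) / (1 + rate) ** t  # present value
        series.append(total)
    return series

onsite = cumulative_lcc(initial_cost=1200, annual_cost=100,
                        annual_benefit=200, rate=0.045, years=10)
large_scale = cumulative_lcc(initial_cost=0, annual_cost=150,
                             annual_benefit=0, rate=0.045, years=10)

for year, (a, b) in enumerate(zip(onsite, large_scale), start=1):
    cheaper = "on-site" if a < b else "large-scale"
    print(f"year {year:2d}: on-site {a:7.1f}  large-scale {b:7.1f}  -> {cheaper}")
```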

  19. Artificial Intelligence Support for Landing Site Selection on Mars

    Science.gov (United States)

    Rongier, G.; Pankratius, V.

    2017-12-01

    Mars is a key target for planetary exploration; a better understanding of its evolution and habitability requires roving in situ. Landing site selection is becoming more challenging for scientists as new instruments generate higher data volumes. The engineering and scientific constraints involved turn site selection and the anticipation of possible on-site actions into a complex optimization problem: there may be multiple acceptable solutions depending on various goals and assumptions. Solutions must also account for missing data, errors, and potential biases. To address these problems, we propose an AI-informed decision support system that allows scientists, mission designers, engineers, and committees to explore alternative site selection choices based on data. In particular, we demonstrate first results of an exploratory case study using fuzzy logic and a simulation of a rover's mobility map based on the fast marching algorithm. Our system computes favorability maps of the entire planet to facilitate landing site selection and allows the definition of different configurations for rovers, science target priorities, landing ellipses, and other constraints. For a rover similar to NASA's Mars 2020 rover, we present results in the form of a site favorability map as well as four derived exploration scenarios that depend on different prioritized scientific targets, all visualizing inherent tradeoffs. Our method uses the NASA PDS Geosciences Node and the NASA/ICA Integrated Database of Planetary Features. Under common assumptions, the data products reveal Eastern Margaritifer Terra and Meridiani Planum to be the most favorable sites due to a high concentration of scientific targets and a flat, easily navigable surface. Our method also allows mission designers to investigate which constraints have the highest impact on the mission exploration potential and to change parameter ranges. Increasing the elevation limit for landing, for example, provides access to many additional
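
    An exploratory sketch of how these two ingredients can be combined follows, using the scikit-fmm package for the fast-marching solver. The terrain layers, membership functions, speed model, and traverse budget are all synthetic stand-ins, not the system's actual PDS-derived data products or parameters; taking the fuzzy AND as the minimum operator is one common choice among several.

```python
# Exploratory sketch: fuzzy favorability layers plus a fast-marching
# mobility map (synthetic terrain; scikit-fmm provides the solver).
import numpy as np
import skfmm

rng = np.random.default_rng(0)
slope = rng.uniform(0, 30, size=(200, 200))        # degrees, synthetic terrain
science_value = rng.uniform(0, 1, size=(200, 200)) # synthetic target density

# Fuzzy memberships: "gentle slope" and "high science value" in [0, 1].
mu_gentle  = np.clip((15.0 - slope) / 15.0, 0.0, 1.0)
favorability = np.minimum(mu_gentle, science_value)  # fuzzy AND (min operator)

# Rover mobility: traverse speed decreases with slope; fast marching then
# yields travel time from a candidate landing point to every map cell.
speed = np.clip(1.0 - slope / 30.0, 0.05, None)    # arbitrary speed model
phi = np.ones_like(slope)
phi[100, 100] = -1.0                               # landing site: zero contour here
travel_time = skfmm.travel_time(phi, speed, dx=1.0)

# Discount favorability by reachability within an assumed traverse budget.
score = np.where(travel_time < 80.0, favorability, 0.0)
ij = np.unravel_index(np.argmax(score), score.shape)
print("best reachable target cell:", ij, "score:", float(score[ij]))
```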

  20. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    Energy Technology Data Exchange (ETDEWEB)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V. [Institute of Informatics Problems, Russian Academy of Sciences (Russian Federation); Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S. [Telecommunication Systems Department, Peoples’ Friendship University of Russia (Russian Federation)

    2015-03-10

    Cloud computing is a promising technology to manage and improve the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving there is no need to operate many servers unnecessarily under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy load to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
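
    A toy discrete-time simulation (not the paper's analytical model) conveys why hysteresis helps: servers are added when the queue crosses a high threshold and removed only below a lower one, so brief load spikes do not trigger costly setup and teardown cycles. All parameters below are invented.

```python
# Toy simulation of hysteresis-based server scaling with setup delay.
import random

random.seed(1)
HIGH, LOW = 30, 10            # hysteresis thresholds on queue length
SETUP_DELAY = 5               # ticks before a newly switched-on server serves
SERVICE_RATE = 2              # jobs per active server per tick

queue, active, warming = 0, 2, []   # warming = activation countdowns
for tick in range(200):
    queue += random.randint(0, 6)                 # bursty arrivals
    warming = [w - 1 for w in warming]
    active += sum(1 for w in warming if w == 0)   # setup finished
    warming = [w for w in warming if w > 0]
    queue = max(0, queue - active * SERVICE_RATE) # service
    if queue > HIGH and not warming:
        warming.append(SETUP_DELAY)               # switch one server on
    elif queue < LOW and active > 1:
        active -= 1                               # switch one server off
    if tick % 40 == 0:
        print(f"tick {tick:3d}: queue={queue:3d} active={active} warming={len(warming)}")
```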

  1. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    International Nuclear Information System (INIS)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S.

    2015-01-01

    Cloud computing is a promising technology to manage and improve the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving there is no need to operate many servers unnecessarily under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy load to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated

  2. 2 December 2003: Registration of Computers Mandatory for the entire CERN Site

    CERN Multimedia

    2003-01-01

    Following the decision by the CERN Management Board (see Weekly Bulletin 38/2003), registration of all computers connected to CERN's network will be enforced and only registered computers will be allowed network access. The implementation has been put into place in the IT buildings, building 40 and the Prévessin site, and will cover the whole of CERN by 2 December 2003. We therefore strongly recommend that you register all your computers in CERN's network database, including all network access cards (Ethernet AND wireless), as soon as possible, without waiting for the access restriction to take force. This will allow you to access the network without interruption and will help IT service providers to contact you in case of problems (security problems, viruses, etc.). - If you have a CERN NICE/mail computing account register at: http://cern.ch/register/ (CERN Intranet page) - If you don't have a CERN NICE/mail computing account (e.g. short-term visitors) register at: http://cern.ch/registerVisitorComputer/...

  3. Searching your site's management information systems

    International Nuclear Information System (INIS)

    Marquez, W.; Rollin, C.

    1994-01-01

    The Department of Energy's guidelines for the Baseline Environmental Management Report (BEMR) encourage the use of existing data when compiling information. Specific systems mentioned include the Progress Tracking System, the Mixed-Waste Inventory Report, the Waste Management Information System, DOE 4700.1-related systems, Programmatic Environmental Impact Statement (PEIS) data, and existing Work Breakdown Structures. In addition to these DOE-Headquarters tracking and reporting systems, there are a number of site systems that will be relied upon to produce the BEMR, including: (1) site management control and cost tracking systems; (2) commitment/issues tracking systems; (3) program-specific internal tracking systems; (4) site material/equipment inventory systems. New requirements have often prompted the creation of new, customized tracking systems. This is a very time- and money-consuming process. As the BEMR Management Plan emphasizes, an effort should be made to use the information in existing tracking systems. Because of the wealth of information currently available from in-place systems, development of a new tracking system should be a last resort

  4. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1989-01-01

    This paper gives a summary of studies performed at the JRC Ispra on the use of computer codes for complex systems analysis. The computer codes dealt with are the CAFTS-SALP software package, FRACTIC, FTAP, the RALLY computer code package, and BOUNDS. Two reference case studies were executed with each code, and the probabilistic results obtained, as well as the computation times, are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author)

  5. Use of computer codes for system reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sabek, M.; Gaafar, M. (Nuclear Regulatory and Safety Centre, Atomic Energy Authority, Cairo (Egypt)); Poucet, A. (Commission of the European Communities, Ispra (Italy). Joint Research Centre)

    1989-01-01

    This paper gives a summary of studies performed at the JRC Ispra on the use of computer codes for complex systems analysis. The computer codes dealt with are the CAFTS-SALP software package, FRACTIC, FTAP, the RALLY computer code package, and BOUNDS. Two reference case studies were executed with each code, and the probabilistic results obtained, as well as the computation times, are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author).

  6. Systems analysis and the computer

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, A S

    1983-08-01

    The words 'systems analysis' are used in at least two senses. Whilst the general nature of the topic is well understood in the OR community, the nature of the term as used by computer scientists is less familiar. In this paper, the nature of systems analysis as it relates to computer-based systems is examined from the point of view that the computer system is an automaton embedded in a human system, and some facets of this are explored. It is concluded that OR analysts and computer analysts have things to learn from each other and that this ought to be reflected in their education. The important role played by change in the design of systems is also highlighted, and it is concluded that, whilst the application of techniques developed in the artificial intelligence field has considerable relevance to constructing automata able to adapt to change in the environment, study of the human factors affecting the overall systems within which the automata are embedded has an even more important role. 19 references.

  7. The computational challenges of Earth-system science.

    Science.gov (United States)

    O'Neill, Alan; Steenman-Clark, Lois

    2002-06-15

    The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.

  8. Acquisition Information Management system telecommunication site survey results

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A. [Oak Ridge National Lab., TN (United States); Key, B.G. [COR, Inc., Oak Ridge, TN (United States)

    1993-09-01

    The Army acquisition community currently uses a dedicated, point-to-point secure computer network for the Army Material Plan Modernization (AMPMOD). It must transition to the DOD-supplied Defense Secure Network 1 (DSNET1). This is one of the first networks of this size to begin the transition. The type and amount of computing resources available at individual sites may or may not meet the new network requirements. This task surveys the existing telecommunications resources available in the Army acquisition community. It documents existing communication equipment, computer hardware, and associated software, and recommends appropriate changes.

  9. Wearable computing from modeling to implementation of wearable systems based on body sensor networks

    CERN Document Server

    Fortino, Giancarlo; Galzarano, Stefano

    2018-01-01

    This book provides the most up-to-date research and development on wearable computing, wireless body sensor networks, wearable systems integrated with mobile computing, wireless networking and cloud computing. This book has a specific focus on advanced methods for programming Body Sensor Networks (BSNs) based on the reference SPINE project. It features an on-line website (http://spine.deis.unical.it) to support readers in developing their own BSN application/systems and covers new emerging topics on BSNs such as collaborative BSNs, BSN design methods, autonomic BSNs, integration of BSNs and pervasive environments, and integration of BSNs with cloud computing. The book provides a description of real BSN prototypes with the possibility to see on-line demos and download the software to test them on specific sensor platforms and includes case studies for more practical applications. * Provides a future roadmap by learning advanced technology and open research issues * Gathers the background knowledge to tackl...

  10. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda.

  11. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network, and the widespread use of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficiently distributing (balancing) the computational load and accommodating input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which a user's request is forwarded in accordance with a predetermined algorithm, as sketched below. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing optimal schedules in a distributed system that dynamically changes its infrastructure is an important task.
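
    A minimal sketch of such a balancing loop, assuming nothing about the authors' actual algorithm beyond "route each request to a node chosen from monitored loads" (here, simply the least-loaded node):

```python
# Least-loaded dispatching: keep a heap of (current_load, node) pairs,
# pop the lightest node for each incoming task, and push it back with
# its load updated. Node names and task costs are illustrative units.
import heapq

class Balancer:
    def __init__(self, nodes):
        self.heap = [(0, n) for n in nodes]
        heapq.heapify(self.heap)

    def dispatch(self, task_cost):
        load, node = heapq.heappop(self.heap)      # least-loaded node
        heapq.heappush(self.heap, (load + task_cost, node))
        return node

balancer = Balancer(["node-1", "node-2", "node-3"])
for cost in [5, 3, 7, 2, 4]:
    print(cost, "->", balancer.dispatch(cost))
```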

  12. [A computer-aided image diagnosis and study system].

    Science.gov (United States)

    Li, Zhangyong; Xie, Zhengxiang

    2004-08-01

    The revolution in information processing, particularly the digitizing of medicine, has changed medical study, work and management. This paper reports a method of designing a system for computer-aided image diagnosis and study. Combining ideas from graph-text systems and picture archiving and communication systems (PACS), the system was implemented and used for "prescription through computer", "managing images" and "reading images under computer and helping the diagnosis". Typical examples were also collected in a database and used to teach beginners. The system was developed with visual development tools based on object-oriented programming (OOP) and was put into operation on the Windows 9X platform. The system possesses a friendly man-machine interface.

  13. Computer simulations of rare earth sites in glass: experimental tests and applications to laser materials

    International Nuclear Information System (INIS)

    Weber, M.J.

    1984-11-01

    Computer simulations of the microscopic structure of BeF2 glasses using molecular dynamics are reviewed and compared with x-ray and neutron diffraction, EXAFS, NMR, and optical measurements. Unique information about the site-to-site variations in the local environments of rare earth ions is obtained using optical selective excitation and laser-induced fluorescence line-narrowing techniques. Applications and limitations of computer simulations to the development of laser glasses and to predictions of other static and dynamic properties of glasses are discussed. 35 references, 2 figures, 2 tables

  14. Tests of Cloud Computing and Storage System features for use in H1 Collaboration Data Preservation model

    International Nuclear Information System (INIS)

    Łobodziński, Bogdan

    2011-01-01

    Based on the currently developing strategy for data preservation and long-term analysis in HEP, tests of a possible future Cloud Computing setup based on the Eucalyptus Private Cloud platform and the petabyte-scale open-source storage system CEPH were performed for the H1 Collaboration. Improvements in computing power and the strong development of storage systems suggest that a single Cloud Computing resource supported at a given site will be sufficient for analysis requirements beyond the end-date of the experiments. This work describes our test-bed architecture, which could be applied to fulfill the requirements of the physics program of H1 after the end date of the Collaboration. We discuss the reasons why we chose the Eucalyptus platform and the CEPH storage infrastructure, as well as our experience with installing and supporting these infrastructures. Using our first test results we examine performance characteristics, observed failure states, deficiencies, bottlenecks and scaling boundaries.

  15. Shift in the microbial ecology of a hospital hot water system following the introduction of an on-site monochloramine disinfection system.

    Science.gov (United States)

    Baron, Julianne L; Vikram, Amit; Duda, Scott; Stout, Janet E; Bibby, Kyle

    2014-01-01

    Drinking water distribution systems, including premise plumbing, contain a diverse microbiological community that may include opportunistic pathogens. On-site supplemental disinfection systems have been proposed as a control method for opportunistic pathogens in premise plumbing. The majority of on-site disinfection systems to date have been installed in hospitals due to the high concentration of opportunistic pathogen susceptible occupants. The installation of on-site supplemental disinfection systems in hospitals allows for evaluation of the impact of on-site disinfection systems on drinking water system microbial ecology prior to widespread application. This study evaluated the impact of supplemental monochloramine on the microbial ecology of a hospital's hot water system. Samples were taken three months and immediately prior to monochloramine treatment and monthly for the first six months of treatment, and all samples were subjected to high throughput Illumina 16S rRNA region sequencing. The microbial community composition of monochloramine treated samples was dramatically different than the baseline months. There was an immediate shift towards decreased relative abundance of Betaproteobacteria, and increased relative abundance of Firmicutes, Alphaproteobacteria, Gammaproteobacteria, Cyanobacteria and Actinobacteria. Following treatment, microbial populations grouped by sampling location rather than sampling time. Over the course of treatment the relative abundance of certain genera containing opportunistic pathogens and genera containing denitrifying bacteria increased. The results demonstrate the driving influence of supplemental disinfection on premise plumbing microbial ecology and suggest the value of further investigation into the overall effects of premise plumbing disinfection strategies on microbial ecology and not solely specific target microorganisms.

  16. Shift in the microbial ecology of a hospital hot water system following the introduction of an on-site monochloramine disinfection system.

    Directory of Open Access Journals (Sweden)

    Julianne L Baron

    Full Text Available Drinking water distribution systems, including premise plumbing, contain a diverse microbiological community that may include opportunistic pathogens. On-site supplemental disinfection systems have been proposed as a control method for opportunistic pathogens in premise plumbing. The majority of on-site disinfection systems to date have been installed in hospitals due to the high concentration of opportunistic pathogen susceptible occupants. The installation of on-site supplemental disinfection systems in hospitals allows for evaluation of the impact of on-site disinfection systems on drinking water system microbial ecology prior to widespread application. This study evaluated the impact of supplemental monochloramine on the microbial ecology of a hospital's hot water system. Samples were taken three months and immediately prior to monochloramine treatment and monthly for the first six months of treatment, and all samples were subjected to high throughput Illumina 16S rRNA region sequencing. The microbial community composition of monochloramine treated samples was dramatically different than the baseline months. There was an immediate shift towards decreased relative abundance of Betaproteobacteria, and increased relative abundance of Firmicutes, Alphaproteobacteria, Gammaproteobacteria, Cyanobacteria and Actinobacteria. Following treatment, microbial populations grouped by sampling location rather than sampling time. Over the course of treatment the relative abundance of certain genera containing opportunistic pathogens and genera containing denitrifying bacteria increased. The results demonstrate the driving influence of supplemental disinfection on premise plumbing microbial ecology and suggest the value of further investigation into the overall effects of premise plumbing disinfection strategies on microbial ecology and not solely specific target microorganisms.

  17. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of the increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis proposes a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model is developed, based upon modern systems theory and human cognitive processes, to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  18. Morphologic features of puncture sites after exoseal vascular closure device implantation: Changes on follow-up computed tomography

    International Nuclear Information System (INIS)

    Ryu, Hwa Seong; Jang, Joo Yeon; Kim, Tae Un; Lee, Jun Woo; Park, Jung Hwan; Choo, Ki Seok; Cho, Mong; Yoon, Ki Tae; Hong, Young Ki; Jeon, Ung Bae

    2017-01-01

    The study aimed to evaluate the morphologic changes at transarterial chemoembolization (TACE) puncture sites implanted with an ExoSeal vascular closure device (VCD), using follow-up computed tomography (CT). Sixteen patients in whom an ExoSeal VCD was used after TACE were enrolled. Using CT images, the diameters and anterior wall thicknesses of the puncture sites in the common femoral artery (CFA) were compared with those of the contralateral CFA before TACE, at 1 month after every TACE session, and at the final follow-up. The rates of complications were also evaluated. There were no puncture- or VCD-related complications. Follow-up CT images of the CFAs of patients who received ExoSeal VCDs showed eccentric vascular wall thickening with soft-tissue densities considered to be hemostatic plugs. Final follow-up CT images (mean, 616 days; range, 95–1106 days) revealed partial or complete resorption of the hemostatic plugs. The CFA puncture site diameters did not differ statistically from those of the contralateral CFA on the final follow-up CT (p > 0.05), regardless of the number of VCDs used. Follow-up CT images of patients who received ExoSeal VCDs showed no significant vascular stenosis or vessel wall thickening.

  19. Morphologic features of puncture sites after exoseal vascular closure device implantation: Changes on follow-up computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Hwa Seong; Jang, Joo Yeon; Kim, Tae Un; Lee, Jun Woo; Park, Jung Hwan; Choo, Ki Seok; Cho, Mong; Yoon, Ki Tae; Hong, Young Ki; Jeon, Ung Bae [Pusan National University Yangsan Hospital, Yangsan (Korea, Republic of)

    2017-05-15

    The study aimed to evaluate the morphologic changes at transarterial chemoembolization (TACE) puncture sites implanted with an ExoSeal vascular closure device (VCD), using follow-up computed tomography (CT). Sixteen patients in whom an ExoSeal VCD was used after TACE were enrolled. Using CT images, the diameters and anterior wall thicknesses of the puncture sites in the common femoral artery (CFA) were compared with those of the contralateral CFA before TACE, at 1 month after every TACE session, and at the final follow-up. The rates of complications were also evaluated. There were no puncture- or VCD-related complications. Follow-up CT images of the CFAs of patients who received ExoSeal VCDs showed eccentric vascular wall thickening with soft-tissue densities considered to be hemostatic plugs. Final follow-up CT images (mean, 616 days; range, 95–1106 days) revealed partial or complete resorption of the hemostatic plugs. The CFA puncture site diameters did not differ statistically from those of the contralateral CFA on the final follow-up CT (p > 0.05), regardless of the number of VCDs used. Follow-up CT images of patients who received ExoSeal VCDs showed no significant vascular stenosis or vessel wall thickening.

  20. Computer System Analysis for Decommissioning Management of Nuclear Reactor

    International Nuclear Information System (INIS)

    Nurokhim; Sumarbagiono

    2008-01-01

    Nuclear reactor decommissioning is a complex activity that should be planned and implemented carefully. A computer-based system needs to be developed to support nuclear reactor decommissioning. Some computer systems for the management of nuclear power reactor decommissioning have been studied. The software systems COSMARD and DEXUS, developed in Japan, and IDMT, developed in Italy, were used as models for analysis and discussion. It can be concluded that a computer system for nuclear reactor decommissioning management is quite complex, involving computer codes for radioactive inventory database calculation, calculation modules for the stages of the decommissioning phase, and spatial data system development for virtual reality. (author)

  1. On-line computer system applied in a nuclear chemistry laboratory

    International Nuclear Information System (INIS)

    Banasik, Z.; Kierzek, J.; Parus, J.; Zoltowski, T.; Zalewski, J.

    1980-01-01

    A PDP-11/45 based computer system used in a radioanalytical chemical laboratory is described. It is mainly concerned with spectrometry of ionizing radiation and remote measurement of physico-chemical properties. The objectives in mind when constructing the hardware inter-connections and developing the software of the system were to minimize the work of the electronics and computer personnel and to provide maximum flexibility for the users. For the hardware interfacing, 3 categories of equipment are used: - LPS-11 Laboratory Peripheral System - CAMAC system with CA11F-P controller - interfaces from instrument manufacturers. Flexible operation has been achieved by using a 3-level programming structure: - data transfer by assembly language programs - data formatting using bit operations in FORTRAN - data evaluation by procedures written in FORTRAN. (Auth.)

  2. SUPPORT OF NEW COMPUTER HARDWARE AT LUCH'S MC and A SYSTEM: PROBLEMS AND A SOLUTION

    International Nuclear Information System (INIS)

    Fedoseev, Victor; Shanin, Oleg

    2009-01-01

    The Microsoft Windows NT 4.0 operating system is the only software product certified in Russia for use in MC and A systems. In this paper a solution allowing the installation of this outdated operating system on new computers is discussed. The solution has been successfully tested and has been in use on Luch's network since March 2008; furthermore, it is being recommended to other Russian enterprises for the same purpose. Typically, the software part of a nuclear material control and accounting (MC and A) system consists of an operating system (OS), a database management system (DBMS), the accounting program itself and a database of nuclear materials. Russian regulations require that the operating system and database for MC and A be certified for information security, and the whole system must pass an accreditation. Historically, the only certified operating system for MC and A continues to be Microsoft Windows NT 4.0 Server/Workstation; attempts to certify newer versions of Windows have failed. Luch, like most other Russian sites, uses Microsoft Windows NT 4.0 and SQL Server 6.5, and Luch's specialists have developed an application (LuchMAS) for accounting purposes. Starting from about 2004, problems appeared in Luch's accounting system related to the difficulty of installing Windows NT 4.0 on new computers. At first, it was possible to solve the problem by choosing computer equipment compatible with Windows NT 4.0 or by selecting certain operating system settings. Over time the problem worsened, and now it is almost impossible to install Windows NT 4.0 on new computers; the reason is the lack of hardware drivers in the outdated operating system. The problem was serious enough that it could have affected the long-term sustainability of Luch's MC and A system if adequate alternative measures were not developed.

  3. On line and on paper: Visual representations, visual culture, and computer graphics in design engineering

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, K.

    1991-01-01

    The research presented examines the visual communication practices of engineers and the impact of the implementation of computer graphics on their visual culture. The study is based on participant observation of day-to-day practices in two contemporary industrial settings among engineers engaged in the actual process of designing new pieces of technology. In addition, over thirty interviews were conducted at other industrial sites to confirm that the findings were not an isolated phenomenon. The data show that there is no 'one best way' to use a computer graphics system, but rather that use is site-specific, and that firms and individuals engage in mixed paper and electronic practices, as well as differential use of electronic options, to get the job done. This research illustrates that rigid models which assume a linear theory of innovation, projecting a straightforward process from idea, to drawing, to prototype, to production, are seriously misguided.

  4. Storing files in a parallel computing system based on user-specified parser function

    Science.gov (United States)

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
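
    A hedged sketch of the idea in this record: the application supplies a parser that screens files and extracts metadata before they are placed on storage nodes. All names here (example_parser, store, the placement rule) are illustrative, not the patented implementation:

```python
# The parser decides (a) whether a file meets a semantic requirement and
# (b) what metadata to store alongside it; the storing routine then picks
# a storage node. Placement by name hash is a deliberately trivial rule.
import hashlib

def example_parser(name, data):
    """Return (keep, metadata) for one file."""
    keep = name.endswith(".dat") and len(data) > 0
    meta = {"size": len(data), "sha1": hashlib.sha1(data).hexdigest()}
    return keep, meta

def store(files, parser, nodes):
    placed = {}
    for name, data in files.items():
        keep, meta = parser(name, data)
        if not keep:
            continue                              # parser rejects the file
        node = nodes[hash(name) % len(nodes)]     # trivial placement rule
        placed[name] = {"node": node, "meta": meta}
    return placed

files = {"run1.dat": b"\x00" * 64, "junk.tmp": b"x"}
print(store(files, example_parser, ["sn0", "sn1"]))
```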

  5. Site-specific dissociation dynamics of H2/D2 on Ag(111) and Co(0001) and the validity of the site-averaging model

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Xixi [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Institute of Theoretical and Computational Chemistry, Key Laboratory of Mesoscopic Chemistry, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing 210093 (China); Jiang, Bin [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China); Xie, Daiqian, E-mail: dqxie@nju.edu.cn, E-mail: hguo@unm.edu [Institute of Theoretical and Computational Chemistry, Key Laboratory of Mesoscopic Chemistry, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing 210093 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Guo, Hua, E-mail: dqxie@nju.edu.cn, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2015-09-21

    Dissociative chemisorption of polyatomic molecules on metal surfaces involves high-dimensional dynamics, of which quantum mechanical treatments are computationally challenging. A promising reduced-dimensional approach approximates the full-dimensional dynamics by a weighted average of fixed-site results. To examine the performance of this site-averaging model, we investigate two distinct reactions, namely, hydrogen dissociation on Co(0001) and Ag(111), using accurate first principles potential energy surfaces (PESs). The former has a very low barrier of ∼0.05 eV while the latter is highly activated with a barrier of ∼1.15 eV. These two systems allow the investigation of not only site-specific dynamical behaviors but also the validity of the site-averaging model. It is found that the reactivity is not only controlled by the barrier height but also by the topography of the PES. Moreover, the agreement between the site-averaged and full-dimensional results is much better on Ag(111), though quantitative in neither system. Further quasi-classical trajectory calculations showed that the deviations can be attributed to dynamical steering effects, which are present in both reactions at all energies.
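
    The site-averaging model referred to here approximates the full-dimensional dissociation probability by a weighted sum of fixed-site, reduced-dimensional probabilities; in our notation (not necessarily the authors'):

```latex
% Site-averaged dissociation probability as a weighted sum of fixed-site
% probabilities P_i(E); the weights w_i reflect the high-symmetry surface
% sites over which the average is taken (notation illustrative).
\begin{equation}
  P_{\mathrm{SA}}(E) \;=\; \sum_{i} w_i \, P_i(E),
  \qquad \sum_{i} w_i = 1 .
\end{equation}
```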

  6. Automatic continuous monitoring system for dangerous sites and cargoes

    International Nuclear Information System (INIS)

    Smirnov, S.N.

    2009-01-01

    The problems of creating an automatic comprehensive continuous monitoring system for nuclear and radiation sites and cargoes of the Rosatom Corporation, which carries out data collection, processing, storage and transmission, including informational support for decision-making as well as support for modelling and forecasting functions, are considered. The system includes components at two levels: site and industry. Currently the system is used to monitor over 8000 integrated parameters, which characterise the status of nuclear and radiation safety at Rosatom sites, as well as environmental and fire safety

  7. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    International Nuclear Information System (INIS)

    Lanciotti, E; Merino, G; Blomer, J; Bria, A

    2011-01-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failures. An alternative model for software distribution, based on the CERN Virtual Machine File System (CernVM-FS), has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.

  8. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, first introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. We also propose an interesting application to the formalisation of hybrid systems, obtaining a class of hybrid systems whose trajectories are computable in the sense of computable analysis. This research was supported in part by the RFBR (grants N 99-01-00485, N 00-01-00810) and by the Siberian Branch of RAS (a grant for young researchers, 2000).

  9. 8th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Yang, Chin-Yu; Lin, Chun-Wei; Pan, Jeng-Shyang; Snasel, Vaclav; Abraham, Ajith

    2015-01-01

    This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at ICGEC 2014, the 8th International Conference on Genetic and Evolutionary Computing. The conference this year was technically co-sponsored by Nanchang Institute of Technology in China, Kaohsiung University of Applied Science in Taiwan, and VSB-Technical University of Ostrava. ICGEC 2014 was held from 18-20 October 2014 in Nanchang, China. Nanchang is the capital of Jiangxi Province in southeastern China, located in the north-central portion of the province. Bounded on the west by the Jiuling Mountains and on the east by Poyang Lake, it is famous for its scenery, rich history and cultural sites. Because of its central location relative to the Yangtze and Pearl River Delta regions, it is a major railroad hub in Southern China. The conference is intended as an international forum for researchers and professionals in all areas of genetic and evolutionary computing.

  10. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying

    2016-01-01

    This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, and Support Vector Machines (SVM), Danger theory, ...

  11. A SURVEY ON UBIQUITOUS COMPUTING

    Directory of Open Access Journals (Sweden)

    Vishal Meshram

    2016-01-01

    Full Text Available This work presents a survey of ubiquitous computing research, the emerging domain that implements communication technologies into day-to-day life activities. This research paper provides a classification of the research areas of the ubiquitous computing paradigm. In this paper, we present common architectural principles of ubiquitous systems and analyze important aspects of context-aware ubiquitous systems. In addition, this research work presents a novel architecture for ubiquitous computing systems and a survey of the sensors needed for applications in ubiquitous computing. The goals of this research work are three-fold: (i) to serve as a guideline for researchers who are new to ubiquitous computing and want to contribute to this research area, (ii) to provide a novel system architecture for ubiquitous computing systems, and (iii) to provide further research directions on quality-of-service assurance in ubiquitous computing.

  12. Services Recommendation System based on Heterogeneous Network Analysis in Cloud Computing

    OpenAIRE

    Junping Dong; Qingyu Xiong; Junhao Wen; Peng Li

    2014-01-01

    Resources are provided mainly in the form of services in cloud computing. In the distributed environment of cloud computing, how to find the needed services efficiently and accurately is the most urgent problem. In cloud computing, services are the intermediary of the cloud platform; they are connected by many service providers and requesters and construct a complex heterogeneous network. Traditional recommendation systems only consider the functional and non-functi...

  13. Computer-Mediated Communication Systems

    Directory of Open Access Journals (Sweden)

    Bin Yu

    2011-10-01

    Full Text Available The essence of communication is to exchange and share information. Computers provide a new medium for human communication. A CMC system, composed of humans and computers, absorbs and then extends the advantages of all earlier forms of communication, embracing the instant interaction of oral communication, the abstract logic of printed dissemination, and the vivid images of film and television. It also creates a series of new communication formats, such as hypertext and multimedia, which are new methods of organizing information and new patterns of delivering messages across space. Benefiting from the continuous development of techniques and mechanisms, computer-mediated communication makes the dream of transmitting information across space and time come true, which will definitely have a great impact on our social lives.

  14. Visualization system on ITBL

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2004-01-01

    Visualization systems PATRAS/ITBL and AVS/ITBL, based on the visualization software PATRAS and AVS/Express respectively, have been developed for a global, heterogeneous computing environment, the Information Technology Based Laboratory (ITBL). PATRAS/ITBL allows real-time visualization of the numerical results acquired from coupled multi-physics numerical simulations executed on different hosts at remote locations. AVS/ITBL allows post-processing visualization. Scientific data located at remote sites may be selected and visualized in a web browser installed on a user terminal. The global structure and main functions of these systems are presented. (author)

  15. Reliability and protection against failure in computer systems

    International Nuclear Information System (INIS)

    Daniels, B.K.

    1979-01-01

    Computers are being increasingly integrated into the control and safety systems of large and potentially hazardous industrial processes. This development introduces problems which are particular to computer systems and opens the way to new techniques of solving conventional reliability and availability problems. References to the developing fields of software reliability, human factors and software design are given, and these subjects are related, where possible, to the quantified assessment of reliability. Original material is presented in the areas of reliability growth and computer hardware failure data. The report draws on the experience of the National Centre of Systems Reliability in assessing the capability and reliability of computer systems both within the nuclear industry, and from the work carried out in other industries by the Systems Reliability Service. (author)

  16. Surface system Forsmark. Site descriptive modelling SDM-Site Forsmark

    International Nuclear Information System (INIS)

    Lindborg, Tobias

    2008-12-01

    SKB has undertaken site characterization of two different areas, Forsmark and Laxemar-Simpevarp, in order to find a suitable location for a geological repository for spent nuclear fuel. This report focuses on the site descriptive modelling of the surface system at Forsmark. The characterization of the surface system at the site was primarily made by identifying and describing important properties in different parts of the surface system, properties concerning e.g. hydrology and climate, Quaternary deposits and soils, hydrochemistry, vegetation, ecosystem functions, but also current and historical land use. The report presents available input data, methodology for data evaluation and modelling, and resulting models for each of the different disciplines. Results from the modelling of the surface system are also integrated with results from modelling of the deep bedrock system. The Forsmark site is located within the municipality of Oesthammar, about 120 km north of Stockholm. The investigated area is located along the shoreline of Oeregrundsgrepen, a funnel-shaped bay of the Baltic Sea. The area is characterized by small-scale topographic variations and is almost entirely located at altitudes lower than 20 metres above sea level. The Quaternary deposits in the area are dominated by till, characterized by a rich content of calcite which was transported by the glacier ice to the area from the sedimentary bedrock of Gaevlebukten about 100 km north of Forsmark. As a result, the surface waters and shallow groundwater at Forsmark are characterized by high pH values and high concentrations of certain major constituents, especially calcium and bicarbonate. The annual precipitation and runoff are 560 and 150 mm, respectively. The lakes are small and shallow, with mean and maximum depths ranging from approximately 0.1 to 1 m and 0.4 to 2 m. Sea water flows into the most low-lying lakes during events giving rise to very high sea levels. Wetlands are frequent and cover 25 to 35

  17. Container-code recognition system based on computer vision and deep neural networks

    Science.gov (United States)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both algorithms based on computer vision and neural networks, and generates a better detection result by combining the two, avoiding the drawbacks of either method alone; the combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result which passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves an overall recognition accuracy of 93%.
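
    As a hedged illustration of the combination step described above (merging candidate text regions from a computer-vision detector and a neural detector), a simple IoU-voting scheme might look like this; the boxes and the threshold are illustrative, not the paper's actual pipeline:

```python
# Keep only regions on which the two detectors agree (IoU above a
# threshold) and average the agreeing candidates into one region.
# Boxes are (x1, y1, x2, y2) tuples in pixel coordinates.

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def combine(cv_boxes, nn_boxes, thresh=0.5):
    agreed = []
    for a in cv_boxes:
        for b in nn_boxes:
            if iou(a, b) >= thresh:
                # average the two candidates into one region
                agreed.append(tuple((u + v) / 2 for u, v in zip(a, b)))
    return agreed

cv = [(10, 10, 110, 40)]
nn = [(12, 8, 108, 42), (200, 200, 220, 210)]
print(combine(cv, nn))   # only the overlapping region survives
```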

  18. Research on the Teaching System of the University Computer Foundation

    Directory of Open Access Journals (Sweden)

    Ji Xiaoyun

    2016-01-01

    Full Text Available In teaching the university computer foundation course, the teaching contents are classified and hierarchical teaching methods are combined with professional-level training according to the specific circumstances and needs of students of different majors; for top-notch students, comprehensive after-class training methods are promoted, and an online Q & A and test platform is established. This strengthens the integration of professional education and computer education, builds a training system for the study and exploration of the college computer foundation course, promotes the popularization and application of the basic programming course, cultivates university students' computing foundation, thinking methods and innovative practice ability, and achieves the goal of individualized education.

  19. System-level tools and reconfigurable computing for next-generation HWIL systems

    Science.gov (United States)

    Stark, Derek; McAulay, Derek; Cantle, Allan J.; Devlin, Malachy

    2001-08-01

    Previous work has been presented on the creation of computing architectures called DIME, which addressed the particular computing demands of hardware-in-the-loop systems. These demands include low latency, high data rates and interfacing. While it is essential to have a capable platform for handling and processing the data streams, the tools must also complement this so that a systems engineer is able to construct the final system. The paper will present work on the integration of system-level design tools, such as MATLAB and SIMULINK, with a reconfigurable computing platform. This will demonstrate how algorithms can be implemented and simulated in a familiar rapid application development environment before they are automatically transposed for downloading directly to the computing platform. This complements the established control tools, which handle the configuration and control of the processing systems, leading to a tool suite for system development and implementation. As the development tools have evolved, the core processing platform has also been enhanced. These improved platforms are based on dynamically reconfigurable computing, utilizing FPGA technologies, and parallel processing methods that more than double the performance and data bandwidth capabilities. This offers support for the processing of images in Infrared Scene Projectors with 1024 x 1024 resolutions at 400 Hz frame rates. The processing elements use the latest generation of FPGAs, which implies that the presented systems will be rated in terms of Tera (10^12) operations per second.

  20. Evaluation of Approaches for Managing Nitrate Loading from On-Site Wastewater Systems near La Pine, Oregon

    Science.gov (United States)

    Morgan, David S.; Hinkle, Stephen R.; Weick, Rodney J.

    2007-01-01

    This report presents the results of a study by the U.S. Geological Survey, done in cooperation with the Oregon Department of Environmental Quality and Deschutes County, to develop a better understanding of the effects of nitrogen from on-site wastewater disposal systems on the quality of ground water near La Pine in southern Deschutes County and northern Klamath County, Oregon. Simulation models were used to test the conceptual understanding of the system and were coupled with optimization methods to develop the Nitrate Loading Management Model, a decision-support tool that can be used to efficiently evaluate alternative approaches for managing nitrate loading from on-site wastewater systems. The conceptual model of the system is based on geologic, hydrologic, and geochemical data collected for this study, as well as previous hydrogeologic and water quality studies and field testing of on-site wastewater systems in the area by other agencies. On-site wastewater systems are the only significant source of anthropogenic nitrogen to shallow ground water in the study area. Between 1960 and 2005 estimated nitrate loading from on-site wastewater systems increased from 3,900 to 91,000 pounds of nitrogen per year. When all remaining lots are developed (in 2019 at current building rates), nitrate loading is projected to reach nearly 150,000 pounds of nitrogen per year. Low recharge rates (2-3 inches per year) and ground-water flow velocities generally have limited the extent of nitrate occurrence to discrete plumes within 20-30 feet of the water table; however, hydraulic-gradient and age data indicate that, given sufficient time and additional loading, nitrate will migrate to depths where many domestic wells currently obtain water. In 2000, nitrate concentrations greater than 4 milligrams nitrogen per liter (mg N/L) were detected in 10 percent of domestic wells sampled by Oregon Department of Environmental Quality. Numerical simulation models were constructed at transect (2

  1. Lawrence Livermore National Laboratory Experimental Test Site (Site 300) Potable Water System Operations Plan

    Energy Technology Data Exchange (ETDEWEB)

    Ocampo, Ruben P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bellah, Wendy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-04

    The existing Lawrence Livermore National Laboratory (LLNL) Site 300 drinking water system operation schematic is shown in Figures 1 and 2 below. The sources of water are two Site 300 wells (Well #18 and Well #20) and San Francisco Public Utilities Commission (SFPUC) Hetch-Hetchy water through the Thomas shaft pumping station. Currently, Well #20, with a 300 gallons per minute (gpm) pump capacity, is the primary source of well water used during the months of September through July, while Well #18, with a 225 gpm pump capacity, is the source of well water for the month of August. The well water is chlorinated using sodium hypochlorite to provide the required residual chlorine throughout Site 300. Well water chlorination is covered in the Lawrence Livermore National Laboratory Experimental Test Site (Site 300) Chlorination Plan ("the Chlorination Plan"; LLNL-TR-642903; current version dated August 2013). The third source of water is the SFPUC Hetch-Hetchy Water System through the Thomas shaft facility, with a 150 gpm pump capacity. At the Thomas shaft station the pumped water is treated through SFPUC-owned and operated ultraviolet (UV) reactor disinfection units on its way to Site 300. The Thomas shaft Hetch-Hetchy water line is connected to the Site 300 water system through the line common to well pumps #18 and #20 at valve box #1.

  2. Preventive maintenance for computer systems - concepts & issues ...

    African Journals Online (AJOL)

    Performing preventive maintenance activities for the computer is not optional. The computer is a sensitive and delicate device that needs adequate time and attention to make it work properly. In this paper, the concept and issues on how to prolong the life span of the system, that is, the way to make the system last long and ...

  3. Computation of asteroid proper elements on the Grid

    Directory of Open Access Journals (Sweden)

    Novaković B.

    2009-01-01

    Full Text Available A procedure for gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase in observational data expected from the next generation of all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose. The average time for catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  4. Computation of Asteroid Proper Elements on the Grid

    Directory of Open Access Journals (Sweden)

    Novaković, B.

    2009-12-01

    Full Text Available A procedure for gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase in observational data expected from the next generation of all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose. The average time for catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  5. DZero data-intensive computing on the Open Science Grid

    International Nuclear Information System (INIS)

    Abbott, B; Baranovski, A; Diesburg, M; Garzoglio, G; Mhashilkar, P; Kurca, T

    2008-01-01

    High energy physics experiments periodically reprocess data in order to take advantage of improved understanding of the detector and of the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consisted of half a billion events, corresponding to about 100 TB of data organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid-to-OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data-intensive production activity was managed on a general purpose grid such as OSG. This paper describes the implications of using OSG, where all resources are granted following an opportunistic model, the challenges of operating a data-intensive activity over such a large computing infrastructure, and the lessons learned throughout the project.

  6. DZero data-intensive computing on the Open Science Grid

    International Nuclear Information System (INIS)

    Abbott, B.; Baranovski, A.; Diesburg, M.; Garzoglio, G.; Kurca, T.; Mhashilkar, P.

    2007-01-01

    High energy physics experiments periodically reprocess data in order to take advantage of improved understanding of the detector and of the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consisted of half a billion events, corresponding to about 100 TB of data organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid-to-OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data-intensive production activity was managed on a general purpose grid such as OSG. This paper describes the implications of using OSG, where all resources are granted following an opportunistic model, the challenges of operating a data-intensive activity over such a large computing infrastructure, and the lessons learned throughout the project.

  7. Interactive Computer-Enhanced Remote Viewing System (ICERVS)

    International Nuclear Information System (INIS)

    1993-08-01

    The Interactive Computer-Enhanced Remote Viewing System (ICERVS) supports the robotic remediation of hazardous environments such as underground storage tanks, buried waste sites, and contaminated production facilities. The success of these remediation missions will depend on reliable geometric descriptions of the work environment in order to achieve effective task planning, path planning, and collision avoidance. ICERVS provides a means of deriving a reliable geometric description more effectively and efficiently than current systems by combining a number of technologies: sensing of the environment to acquire dimensional and material property data; integration of the acquired data into a common data structure (based on octree technology; see the sketch below); presentation of data to robotic task planners for analysis and visualization; and interactive synthesis of geometric/surface models to denote features of interest in the environment and transfer of this information to robot control and collision avoidance systems. A key feature of ICERVS is that it enables an operator to match xyz data from a sensor with surface models of the same region in space. This capability helps operators to better manage the complexities of task and path planning in three-dimensional (3D) space, thereby leading to safer and more effective remediation. The Phase 1 work performed by MTI has brought the ICERVS design to Maturity Level 3, Subscale Major Subsystem, and met the established success criteria.
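
    A minimal octree sketch, illustrating only the generic data structure named above and not the ICERVS implementation:

```python
# Each node covers a cubic region and subdivides lazily into up to 8
# children until a depth limit, where sample points are stored. Purely
# illustrative; coordinates could be xyz samples from a range sensor.

class Octree:
    def __init__(self, center, half, depth=4):
        self.center, self.half, self.depth = center, half, depth
        self.children = None      # created lazily on insert
        self.points = []

    def insert(self, p):
        if self.depth == 0:
            self.points.append(p)
            return
        if self.children is None:
            self.children = {}
        # octant index: one bit per axis
        idx = sum((p[i] >= self.center[i]) << i for i in range(3))
        if idx not in self.children:
            offset = [(+1 if (idx >> i) & 1 else -1) * self.half / 2
                      for i in range(3)]
            child_center = tuple(c + o for c, o in zip(self.center, offset))
            self.children[idx] = Octree(child_center, self.half / 2,
                                        self.depth - 1)
        self.children[idx].insert(p)

tree = Octree(center=(0.0, 0.0, 0.0), half=10.0)
tree.insert((1.5, -2.0, 3.0))   # e.g. one xyz sample from a sensor sweep
```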

  8. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office [Figure 6: Transfers from all sites in the last 90 days.] For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  9. Sensory System for Implementing a Human—Computer Interface Based on Electrooculography

    Directory of Open Access Journals (Sweden)

    Sergio Ortega

    2010-12-01

    Full Text Available This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and a neural network are used to process and analyse the signals, obtaining highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes.
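
    A hedged sketch of the signal path outlined above, using PyWavelets' continuous wavelet transform; the synthetic EOG trace and the threshold rule stand in for the real electrodes and the trained neural network:

```python
# Take a window of EOG samples, compute a CWT, and derive a simple
# per-sample energy feature; a real system would feed such features to
# a classifier. Sampling rate and signal shape are assumptions.
import numpy as np
import pywt

fs = 128                                   # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
eog = np.tanh(5 * (t - 1.0))               # synthetic saccade-like step

coeffs, _ = pywt.cwt(eog, scales=np.arange(1, 32), wavelet="morl")
energy = (coeffs ** 2).sum(axis=0)         # per-sample CWT energy

# Stand-in for the trained neural network: threshold the energy peak.
event = "saccade" if energy.max() > 10 * np.median(energy) else "fixation"
print(event, "at sample", int(energy.argmax()))
```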

  10. IDEA system - a new computer-based expert system for incorporation monitoring

    International Nuclear Information System (INIS)

    Doerfel, H.

    2007-01-01

    Recently, at the Karlsruhe Research Centre, a computer-based expert system, the Internal Dose Equivalent Assessment System (IDEA System), has been developed for assisting dosimetrists in applying the relevant recommendations and guidelines for internal dosimetry. The expert system gives guidance to the user with respect to: (a) planning of monitoring, (b) performing routine and special monitoring, and (c) evaluation of primary monitoring results. The evaluation is done according to the IDEA System guidelines (Doerfel, H. et al., General guidelines for the estimation of committed effective dose from incorporation monitoring data. Research Report FZKA 7243, Research Center Karlsruhe, Karlsruhe (2006). ISSN 0947-8260.) in a three-stage procedure according to the expected level of exposure. At the first level the evaluation is performed with default or site-specific parameter values, at the second level case-specific parameter values are applied, and at the third level a special evaluation is performed with individual adjustment of model parameter values. With these well-defined procedures the expert system aims to ensure that all recommendations and guidelines are applied properly and that the results, in terms of committed effective and organ doses, are close to the best estimate. (author)
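
    The three-stage procedure lends itself to a compact control-flow sketch. The Python below illustrates the escalation logic only; the dose limits, parameter sets and the dose-assessment function are invented placeholders, not values from the IDEA System guidelines.

        # Sketch of the three-stage evaluation flow described above: escalate
        # from default/site-specific parameters to case-specific and then
        # individually adjusted ones as the assessed dose rises. Thresholds
        # and the dose model are placeholders, not IDEA System values.

        def assess_dose(measurement, params):
            # Stand-in for a biokinetic-model dose calculation.
            return measurement * params["dose_per_unit_intake"]

        def three_stage_evaluation(measurement, default_p, case_p, individual_p,
                                   level1_limit=0.1, level2_limit=1.0):  # mSv, assumed
            dose = assess_dose(measurement, default_p)          # stage 1
            if dose <= level1_limit:
                return dose, "stage 1 (default/site-specific parameters)"
            dose = assess_dose(measurement, case_p)             # stage 2
            if dose <= level2_limit:
                return dose, "stage 2 (case-specific parameters)"
            dose = assess_dose(measurement, individual_p)       # stage 3
            return dose, "stage 3 (individually adjusted parameters)"

        print(three_stage_evaluation(0.8,
                                     {"dose_per_unit_intake": 0.20},
                                     {"dose_per_unit_intake": 0.15},
                                     {"dose_per_unit_intake": 0.12}))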

  11. LHCb: The LHCb off-Site HLT Farm Demonstration

    CERN Multimedia

    Liu, Guoming

    2012-01-01

    The LHCb High Level Trigger (HLT) farm consists of about 1300 nodes, which are housed in the underground server room at the experiment point. Due to the constraints of the power supply and cooling system, it is difficult to install more servers in this room in the future. An off-site computing farm is a solution for enlarging the computing capacity. In this paper, we demonstrate the LHCb off-site HLT farm, which is located in the CERN computing centre. Since we use private IP addresses for the HLT farm, a virtual private network (VPN) is needed to bridge both sites. There are two kinds of traffic in the event builder: control traffic for the control and monitoring of the farm, and Data Acquisition (DAQ) traffic. We adopt an IP tunnel for the control traffic and Network Address Translation (NAT) for the DAQ traffic. The performance of the off-site farm has been tested and compared with the on-site farm. The effect of network latency has been studied. To employ a large off-site farm, one of the potential bottle...

  12. Operating System Concepts for Reconfigurable Computing: Review and Survey

    Directory of Open Access Journals (Sweden)

    Marcel Eckert

    2016-01-01

    Full Text Available One of the key future challenges for reconfigurable computing is to enable higher design productivity and an easier way to use reconfigurable computing systems for users who are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system. This article gives a historical review and a summary of ideas and key concepts for including reconfigurable computing aspects in operating systems. The article also presents an overview of published and available operating systems targeting the area of reconfigurable computing. The purpose of this article is to identify and summarize common patterns among those systems that can be seen as de facto standards. Furthermore, open problems not covered by these already available systems are identified.

  13. EXPERIENCE OF USING CLOUD COMPUTING IN NETWORK PRODUCTS FOR SCHOOL EDUCATION

    Directory of Open Access Journals (Sweden)

    L. Sokolova

    2011-05-01

    Full Text Available We study data on the use of web sites in the middle and upper grades of secondary school, and their influence on the formation of students' information culture and level of training. The sites use Google's "cloud computing" technology, are accessible from any Internet-connected computer, and do not require the use of resources of the computer itself. The sites are free of advertising and do not require periodic backup, protection, or the day-to-day attention of a system administrator. This simplifies their use in the educational process for schools of different levels. A statistical analysis of site usage was carried out, and the main trends of their use were identified.

  14. International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems

    CERN Document Server

    Vijayakumar, K; Panigrahi, Bijaya; Das, Swagatam

    2017-01-01

    The volume is a collection of high-quality peer-reviewed research papers presented in the International Conference on Artificial Intelligence and Evolutionary Computation in Engineering Systems (ICAIECES 2016) held at SRM University, Chennai, Tamilnadu, India. This conference is an international forum for industry professionals and researchers to deliberate and state their research findings, discuss the latest advancements and explore the future directions in the emerging areas of engineering and technology. The book presents original work and novel ideas, information, techniques and applications in the field of communication, computing and power technologies.

  15. Computational Identification of Protein Pupylation Sites by Using Profile-Based Composition of k-Spaced Amino Acid Pairs.

    Directory of Open Access Journals (Sweden)

    Md Mehedi Hasan

    Full Text Available Prokaryotic proteins are regulated by pupylation, a type of post-translational modification that contributes to cellular function in bacterial organisms. In the pupylation process, prokaryotic ubiquitin-like protein (Pup) tagging is functionally analogous to ubiquitination, tagging target proteins for proteasomal degradation. To date, several experimental methods have been developed to identify pupylated proteins and their pupylation sites, but these experimental methods are generally laborious and costly. Therefore, computational methods that can accurately predict potential pupylation sites based on protein sequence information are highly desirable. In this paper, a novel predictor termed pbPUP has been developed for accurate prediction of pupylation sites. In particular, a sophisticated sequence encoding scheme [i.e. the profile-based composition of k-spaced amino acid pairs (pbCKSAAP)] is used to represent the sequence patterns and evolutionary information of the sequence fragments surrounding pupylation sites. Then, a Support Vector Machine (SVM) classifier is trained using the pbCKSAAP encoding scheme. The final pbPUP predictor achieves an AUC value of 0.849 in 10-fold cross-validation tests and outperforms other existing predictors on a comprehensive independent test dataset. The proposed method is anticipated to be a helpful computational resource for the prediction of pupylation sites. The web server and curated datasets in this study are freely available at http://protein.cau.edu.cn/pbPUP/.
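
    The composition of k-spaced amino acid pairs is easy to state in code. The sketch below implements the plain (unweighted) CKSAAP encoding of a sequence window; pbPUP's profile-based variant additionally weights pairs by evolutionary profile scores, which is omitted here, and the example window and k_max are illustrative.

        # Sketch of the (plain) composition of k-spaced amino acid pairs
        # (CKSAAP) encoding of a sequence window centred on a candidate site.
        # pbPUP uses a profile-based variant (pbCKSAAP); this unweighted
        # version just shows the idea. Window and k_max are illustrative.
        from itertools import product

        AA = "ACDEFGHIKLMNPQRSTVWY"

        def cksaap(fragment, k_max=3):
            # Assumes len(fragment) > k_max + 1 so every k has pairs to count.
            features = []
            for k in range(k_max + 1):
                counts = {p: 0 for p in product(AA, AA)}   # 400 pair types
                n_pairs = len(fragment) - k - 1
                for i in range(n_pairs):
                    a, b = fragment[i], fragment[i + k + 1]
                    if (a, b) in counts:                   # skip X, gaps, etc.
                        counts[(a, b)] += 1
                # Normalise by the number of k-spaced positions in the window.
                features.extend(c / n_pairs for c in counts.values())
            return features

        vec = cksaap("MKVLAGGKTEVAKDLGI")
        print(len(vec))        # 4 scales x 400 pairs = 1600 features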

  16. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser-resolution approaches when simulating large biological systems. High-performance parallel computers have the potential to address this computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling in the function bodies of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in a soil aggregate as case studies. Biocellion runs on x86-compatible systems with the 64-bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. An on-line data acquisition system based on Norsk-Data ND-560 computer

    International Nuclear Information System (INIS)

    Bandyopadhyay, A.; Roy, A.; Dey, S.K.; Bhattacharya, S.; Bhowmik, R.K.

    1987-01-01

    This paper describes a high-speed data acquisition system based on CAMAC for a Norsk Data ND-560 computer operating in a multiuser environment. In contrast to the present trend, the system has been implemented with minimum hardware at the CAMAC level, taking advantage of the dual processors of the ND-560. The package consists of several coordinated tasks running in the two CPUs which acquire data, record them on tape, permit on-line analysis and display of the data, and perform related control operations. It has been used in several experiments at VECC, and its performance in on-line experiments is reported. (orig.)

  18. A Pharmacy Computer System

    OpenAIRE

    Claudia CIULCA-VLADAIA; Călin MUNTEAN

    2009-01-01

    Objective: To describe an evaluation model, seen from the customer's point of view, for the currently needed pharmacy computer system. Data Sources: literature research, ATTOFARM, WINFARM P.N.S., NETFARM, Info World - PHARMACY MANAGER and HIPOCRATE FARMACIE. Study Selection: Five pharmacy computer systems were selected due to their high rates of implementation at the national level. We used the new criteria recommended by the EUROREC Institute in EHR that modify the model of data exchanges between the E...

  19. 3rd International Doctoral Symposium on Applied Computation and Security Systems

    CERN Document Server

    Saeed, Khalid; Cortesi, Agostino; Chaki, Nabendu

    2017-01-01

    This book presents extended versions of papers originally presented and discussed at the 3rd International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2016) held from August 12 to 14, 2016 in Kolkata, India. The symposium was jointly organized by the AGH University of Science & Technology, Cracow, Poland; Ca’ Foscari University, Venice, Italy; and the University of Calcutta, India. The book is divided into two volumes, Volumes 3 and 4, and presents dissertation works in the areas of Image Processing, Biometrics-based Authentication, Soft Computing, Data Mining, Next-Generation Networking and Network Security, Remote Healthcare, Communications, Embedded Systems, Software Engineering and Service Engineering. The first two volumes of the book published the works presented at the ACSS 2015, which was held from May 23 to 25, 2015 in Kolkata, India.

  20. Computable Types for Dynamic Systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter); K. Ambos-Spies; B. Loewe; W. Merkle

    2009-01-01

    In this paper, we develop a theory of computable types suitable for the study of dynamic systems in discrete and continuous time. The theory uses type-two effectivity as the underlying computational model, but we quickly develop a type system which can be manipulated abstractly, but for

  1. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  2. 8th International Symposium on Intelligent Distributed Computing & Workshop on Cyber Security and Resilience of Large-Scale Systems & 6th International Workshop on Multi-Agent Systems Technology and Semantics

    CERN Document Server

    Braubach, Lars; Venticinque, Salvatore; Badica, Costin

    2015-01-01

    This book represents the combined peer-reviewed proceedings of the Eighth International Symposium on Intelligent Distributed Computing - IDC'2014, of the Workshop on Cyber Security and Resilience of Large-Scale Systems - WSRL-2014, and of the Sixth International Workshop on Multi-Agent Systems Technology and Semantics - MASTS-2014. All the events were held in Madrid, Spain, during September 3-5, 2014. The 47 contributions published in this book address several topics related to the theory and applications of intelligent distributed computing and multi-agent systems, including: agent-based data processing, ambient intelligence, collaborative systems, cryptography and security, distributed algorithms, grid and cloud computing, information extraction, knowledge management, big data and ontologies, social networks, swarm intelligence and videogames, amongst others.

  3. Industrial application of a graphics computer-based training system

    International Nuclear Information System (INIS)

    Klemm, R.W.

    1985-01-01

    Graphics Computer Based Training (GCBT) roles include drilling, tutoring, simulation and problem solving. Of these, Commonwealth Edison uses mainly tutoring, simulation and problem solving. These roles are not separate in any particular program; they are integrated to provide tutoring and part-task simulation, part-task simulation and problem solving, or problem-solving tutoring. Commonwealth's Graphics Computer Based Training program was the result of over a year's worth of research and planning. The keys to the program are its flexibility and control. Flexibility is maintained through stand-alone units capable of program authoring and modification for plant/site-specific users. Yet the system has the capability to support up to 31 terminals with a 40 MB hard disk drive. Control of the GCBT program is accomplished through establishment of development priorities and a central development facility (Commonwealth Edison's Production Training Center).

  4. Computer security of NPP instrumentation and control systems: categorization

    International Nuclear Information System (INIS)

    Klevtsov, A.L.; Simonov, A.A.; Trubchaninov, S.A.

    2016-01-01

    The paper is devoted to studying the categorization of NPP instrumentation and control (I&C) systems from the point of view of computer security, and to consideration of the computer security levels and zones used by the International Atomic Energy Agency (IAEA). The paper also describes the computer security degrees and zones regulated by the International Electrotechnical Commission (IEC) standard. The computer security categorization of systems used by the U.S. Nuclear Regulatory Commission (NRC) is presented. The experts analyzed the main differences in the I&C system computer security categorizations accepted by the IAEA, IEC and U.S. NRC. Approaches to categorization that would be advisable for use in Ukraine during the development of regulations on NPP I&C system computer security are proposed in the paper.

  5. 16th International Conference on Hybrid Intelligent Systems and the 8th World Congress on Nature and Biologically Inspired Computing

    CERN Document Server

    Haqiq, Abdelkrim; Alimi, Adel; Mezzour, Ghita; Rokbani, Nizar; Muda, Azah

    2017-01-01

    This book presents the latest research in hybrid intelligent systems. It includes 57 carefully selected papers from the 16th International Conference on Hybrid Intelligent Systems (HIS 2016) and the 8th World Congress on Nature and Biologically Inspired Computing (NaBIC 2016), held on November 21–23, 2016 in Marrakech, Morocco. HIS - NaBIC 2016 was jointly organized by the Machine Intelligence Research Labs (MIR Labs), USA; Hassan 1st University, Settat, Morocco and University of Sfax, Tunisia. Hybridization of intelligent systems is a promising research field in modern artificial/computational intelligence and is concerned with the development of the next generation of intelligent systems. The conference’s main aim is to inspire further exploration of the intriguing potential of hybrid intelligent systems and bio-inspired computing. As such, the book is a valuable resource for practicing engineers /scientists and researchers working in the field of computational intelligence and artificial intelligence.

  6. Improving ATLAS grid site reliability with functional tests using HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan

    2012-12-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate each site's capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thus preventing user or production jobs from being sent to problematic sites.
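
    The auto-exclusion policy can be pictured as a small feedback loop over recent test results. The Python sketch below shows one plausible form of such a loop; the window size, threshold and site names are assumptions, and the real decision logic lives in HammerCloud and the PanDA brokerage.

        # Sketch of a result-driven site-exclusion policy: keep a sliding
        # window of functional-test outcomes per site and drop sites whose
        # recent success rate falls below a threshold. All values invented.
        from collections import deque

        WINDOW, THRESHOLD = 20, 0.8
        history = {site: deque(maxlen=WINDOW) for site in ("SITE_A", "SITE_B")}

        def record_test_result(site, succeeded):
            history[site].append(1 if succeeded else 0)

        def usable_sites():
            ok = []
            for site, results in history.items():
                if not results or sum(results) / len(results) >= THRESHOLD:
                    ok.append(site)      # untested sites stay in by default
            return ok

        record_test_result("SITE_A", True)
        record_test_result("SITE_B", False)
        print(usable_sites())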

  7. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.
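
    The many-to-many relay at the centre of the framework is essentially a routing table between steered applications and steering clients. The sketch below illustrates that pattern in Python, with transport and the RealityGrid API omitted; every name in it is hypothetical.

        # Sketch of many-to-many steering relay routing: clients subscribe
        # to an application's channel for monitoring data, and parameter
        # updates are forwarded the other way. Sockets/protocols omitted;
        # all names are hypothetical, not the RealityGrid API.
        from collections import defaultdict

        class SteeringRelay:
            def __init__(self):
                self.subscribers = defaultdict(set)   # app_id -> client callbacks

            def subscribe(self, app_id, client_cb):
                self.subscribers[app_id].add(client_cb)

            def steer(self, app_id, param, value, app_registry):
                # Client -> application: forward a control-parameter update.
                app_registry[app_id].set_param(param, value)

            def publish(self, app_id, status):
                # Application -> clients: fan out monitoring data.
                for cb in self.subscribers[app_id]:
                    cb(status)

        class DummyApp:
            def set_param(self, p, v):
                print(f"set {p}={v}")

        relay = SteeringRelay()
        relay.subscribe("sim1", lambda s: print("status:", s))
        relay.steer("sim1", "dt", 0.01, {"sim1": DummyApp()})
        relay.publish("sim1", {"step": 42})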

  8. Distributed Computing for the Pierre Auger Observatory

    International Nuclear Information System (INIS)

    Chudoba, J.

    2015-01-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users in terms of total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system can also use available resources in clouds. The Dirac File Catalog replaced the LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and the new production systems and report on the experience of migrating to the new system. (paper)

  9. Distributed Computing for the Pierre Auger Observatory

    Science.gov (United States)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users in terms of total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system can also use available resources in clouds. The Dirac File Catalog replaced the LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and the new production systems and report on the experience of migrating to the new system.

  10. Soft computing in green and renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, Kasthurirangan [Iowa State Univ., Ames, IA (United States). Iowa Bioeconomy Inst.; US Department of Energy, Ames, IA (United States). Ames Lab; Kalogirou, Soteris [Cyprus Univ. of Technology, Limassol (Cyprus). Dept. of Mechanical Engineering and Materials Sciences and Engineering; Khaitan, Siddhartha Kumar (eds.) [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Electrical Engineering and Computer Engineering

    2011-07-01

    Soft Computing in Green and Renewable Energy Systems provides a practical introduction to the application of soft computing techniques and hybrid intelligent systems for designing, modeling, characterizing, optimizing, forecasting, and performance prediction of green and renewable energy systems. Research on renewable energy (energy derived from natural resources such as sunlight, wind, tides, rain, geothermal heat, biomass, hydrogen, etc.) is proceeding at great speed, as policy makers, researchers, economists, and world agencies have joined forces in finding alternative sustainable energy solutions to current critical environmental, economic, and social issues. The innovative models, environmentally benign processes, data analytics, etc. employed in renewable energy systems are computationally intensive, non-linear, and complex, and involve a high degree of uncertainty. Soft computing technologies, such as fuzzy sets and systems, neural science and systems, evolutionary algorithms and genetic programming, and machine learning, are ideal for handling the noise, imprecision, and uncertainty in the data while still achieving robust, low-cost solutions. As a result, intelligent and soft computing paradigms are finding increasing application in the study of renewable energy systems. Researchers, practitioners, and undergraduate and graduate students engaged in the study of renewable energy systems will find this book very useful. (orig.)

  11. Lectures on Logic and Computation

    DEFF Research Database (Denmark)

    The European Summer School in Logic, Language and Information (ESSLLI) is organized every year by the Association for Logic, Language and Information (FoLLI) in different sites around Europe. The main focus of ESSLLI is on the interface between linguistics, logic and computation. ESSLLI offers fo...

  12. The Intelligent Safety System: could it introduce complex computing into CANDU shutdown systems

    International Nuclear Information System (INIS)

    Hall, J.A.; Hinds, H.W.; Pensom, C.F.; Barker, C.J.; Jobse, A.H.

    1984-07-01

    The Intelligent Safety System is a computerized shutdown system being developed at the Chalk River Nuclear Laboratories (CRNL) for future CANDU nuclear reactors. It differs from current CANDU shutdown systems in both the algorithm used and the size and complexity of the computers required to implement the concept. This paper provides an overview of the project, with emphasis on the computing aspects. Early in the project several needs leading to an introduction of computing complexity were identified, and a computing system that met these needs was conceived. The current work at CRNL centers on building a laboratory demonstration of the Intelligent Safety System and evaluating the reliability and testability of the concept. Some fundamental problems must still be addressed for the Intelligent Safety System to be acceptable to a CANDU owner and to the regulatory authorities. These are also discussed, along with a description of how the Intelligent Safety System might solve these problems.

  13. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    Science.gov (United States)

    2017-04-13

    AFRL-AFOSR-UK-TR-2017-0029, "Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems", reporting period 2012 - 01/25/2015. Only report-form coversheet fragments of the abstract survive in this record.

  14. Analyzing the security of an existing computer system

    Science.gov (United States)

    Bishop, M.

    1986-01-01

    Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.

  15. Challenge for knowledge information processing systems (preliminary report on Fifth Generation Computer Systems)

    Energy Technology Data Exchange (ETDEWEB)

    Moto-oka, T

    1982-01-01

    The author explains the reasons, aims and strategies for the Fifth Generation Computer Project in Japan. The project aims to introduce a radical new breed of computer by 1990. This article outlines the economic and social reasons for the project. It describes the impacts and effects that these computers are expected to have. The areas of technology which will form the contents of the research and development are highlighted. These are areas such as VLSI technology, speech and image understanding systems, artificial intelligence and advanced architecture design. Finally a schedule for completion of research is given which aims for a completed project by 1990.

  16. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  17. Practical Application of Site-Specific Earthquake Early Warning (EEW) System

    International Nuclear Information System (INIS)

    Kanda, Katsuhisa

    2014-01-01

    This paper reports the development of an on-site earthquake early warning system. The system improves the timing of warnings and reduces the number of false alarms by improving the method of estimating the JMA seismic intensity from earthquake early warning information using site-specific data. The development of an application for practical use in a construction company, and of an integrated system for realizing automatic shutdown, is also reported. The concept of the system is based on the following: seismic intensity is not distributed concentrically, and an attenuation relationship alone cannot explain the distribution of seismic intensity precisely. The standard method of seismic intensity prediction is construed as 'attenuation relationship + soil amplification factor', but this may be improved by the reformulation 'original attenuation relationship for each site + correction factors dependent on the epicenter location and depth', using a seismic intensity database that includes data on recent and historical earthquakes. (authors)
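
    The reformulated estimate quoted above can be written down directly. The Python sketch below combines a per-site attenuation fit with an epicentre-dependent correction term; the functional form, coefficients and correction value are invented for illustration and are not the paper's calibration.

        # Sketch of a site-specific intensity estimate: a per-site
        # attenuation relationship plus a correction term looked up from
        # past events with a similar epicentre location and depth.
        # Coefficients and the correction value are invented here.
        import math

        def predicted_intensity(magnitude, epi_distance_km, depth_km,
                                site_coeffs, correction):
            a, b, c = site_coeffs                      # per-site regression fit
            r = math.hypot(epi_distance_km, depth_km)  # slant distance, km
            base = a * magnitude - b * math.log10(r) - c
            return base + correction                   # region-dependent term

        # Assumed coefficients fitted for one site, and a +0.3 correction
        # for events from one source region in the site's event database.
        print(predicted_intensity(6.8, 40.0, 10.0, (1.1, 1.7, 0.5), 0.3))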

  18. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on micro computers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  19. Computer information systems framework

    International Nuclear Information System (INIS)

    Shahabuddin, S.

    1989-01-01

    Management information systems (MIS) is a commonly used term in the computer profession. New information technology has caused management to expect more from computers. The process of supplying information follows a well-defined procedure. An MIS should be capable of providing usable information to the various areas and levels of an organization. MIS is different from data processing. MIS and the business hierarchy provide a good framework for the many organizations which are using computers. (A.B.)

  20. Computer analysis of protein functional sites projection on exon structure of genes in Metazoa.

    Science.gov (United States)

    Medvedeva, Irina V; Demenkov, Pavel S; Ivanisenko, Vladimir A

    2015-01-01

    Study of the relationship between the structural and functional organization of proteins and their coding genes is necessary for an understanding of the evolution of molecular systems and can provide new knowledge for many applications aimed at designing proteins with improved medical and biological properties. It is well known that the functional properties of proteins are determined by their functional sites. Functional sites are usually represented by a small number of amino acid residues that are distantly located from each other in the amino acid sequence. They are highly conserved within their functional group and vary significantly in structure between such groups. Given these facts, analysis of the general properties of the structural organization of functional sites at the protein level, and at the level of the exon-intron structure of the coding gene, remains a relevant problem. One approach to this analysis is the projection of the amino acid residue positions of the functional sites, along with the exon boundaries, onto the gene structure. In this paper, we examined the discontinuity of functional sites in the exon-intron structure of genes and the distribution of lengths and phases of the exons encoding functional sites in vertebrate genes. We show that the DNA fragments coding for functional sites tend to lie in the same exon or in closely located exons; this observed tendency of the exons that code functional sites to cluster could be considered a unit of protein evolution. We studied the characteristics of the exon boundaries that do and do not encode functional sites in 11 Metazoa species. Clustering is accompanied by a reduced frequency of intercodon splits (phase 0) in exons encoding functional-site residues, which may be evidence of evolutionary limitations on exon shuffling. These results characterize the features of the coding exon-intron structure that affect the functionality of the encoded protein and

  1. FRS (Facility Registration System) Sites, Geographic NAD83, EPA (2007) [facility_registration_system_sites_LA_EPA_2007

    Data.gov (United States)

    Louisiana Geographic Information Center — This dataset contains locations of Facility Registry System (FRS) sites which were pulled from a centrally managed database that identifies facilities, sites or...

  2. Category-theoretic models of algebraic computer systems

    Science.gov (United States)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.
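
    For readers unfamiliar with the target algebra, the basic Łukasiewicz operations on truth values in [0, 1] are simple to state. A minimal Python rendering follows; these are the standard many-valued (MV-algebra) definitions, not code from the paper.

        # Standard Lukasiewicz many-valued operations on truth values in
        # [0, 1]; on the finite chain {0, 1/n, ..., 1} they give the
        # (n+1)-valued logic matrix mentioned in the abstract.
        def luk_and(x, y):     # strong conjunction
            return max(0.0, x + y - 1.0)

        def luk_or(x, y):      # strong disjunction
            return min(1.0, x + y)

        def luk_impl(x, y):    # implication
            return min(1.0, 1.0 - x + y)

        print(luk_and(0.75, 0.5), luk_or(0.75, 0.5), luk_impl(0.75, 0.5))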

  3. Large computer systems and new architectures

    International Nuclear Information System (INIS)

    Bloch, T.

    1978-01-01

    The super-computers of today are becoming quite specialized and one can no longer expect to get all the state-of-the-art software and hardware facilities in one package. In order to achieve faster and faster computing it is necessary to experiment with new architectures, and the cost of developing each experimental architecture into a general-purpose computer system is too high when one considers the relatively small market for these computers. The result is that such computers are becoming 'back-ends' either to special systems (BSP, DAP) or to anything (CRAY-1). Architecturally the CRAY-1 is the most attractive today since it guarantees a speed gain of a factor of two over a CDC 7600 thus allowing us to regard any speed up resulting from vectorization as a bonus. It looks, however, as if it will be very difficult to make substantially faster computers using only pipe-lining techniques and that it will be necessary to explore multiple processors working on the same problem. The experience which will be gained with the BSP and the DAP over the next few years will certainly be most valuable in this respect. (Auth.)

  4. Computer-aided protective system (CAPS)

    International Nuclear Information System (INIS)

    Squire, R.K.

    1988-01-01

    A method of improving the security of materials in transit is described. The system provides a continuously monitored position-location system for the transport vehicle, an internal computer-based geographic delimiter that makes continuous comparisons of actual positions with the preplanned routing and schedule, and a tamper detection/reaction system. The position comparison is utilized to institute preprogrammed reactive measures if the carrier is taken off course or schedule, penetrated, or otherwise interfered with. The geographic locater could be an independent internal platform or an external signal-dependent system utilizing GPS, Loran, or a similar source of geographic information; a small (micro) computer could provide adequate memory and computational capacity; the assurance of system integrity indicates the need for a tamper-proof container and built-in intrusion sensors. A variant of the system could provide real-time transmission of the vehicle position and condition to a central control point; such transmission could be encrypted to preclude spoofing.

  5. AmeriFlux Site and Data Exploration System

    Science.gov (United States)

    Krassovski, M.; Boden, T.; Yang, B.; Jackson, B.

    2011-12-01

    The AmeriFlux network was established in 1996. The network provides continuous observations of ecosystem-level exchanges of CO2, water, energy and momentum spanning diurnal, synoptic, seasonal, and interannual time scales. The current network, including both active and inactive sites, consists of 141 sites in North, Central, and South America. The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) provides data management support for the AmeriFlux network including long-term data storage and dissemination. AmeriFlux offers a broad suite of value-added data products: Level 1 data products at 30 minute or hourly time intervals provided by the site teams, Level 2 data processed by CDIAC and Level 3 and 4 files created using CarboEurope algorithms. CDIAC has developed a relational database to house the vast array of AmeriFlux data and information and a web-based interface to the database, the AmeriFlux Site and Data Exploration System (http://ameriflux.ornl.gov), to help users worldwide identify, and more recently, download desired AmeriFlux data. AmeriFlux and CDIAC offer numerous value-added AmeriFlux data products (i.e., Level 1-4 data products, biological data) and most of these data products are or will be available through the new data system. Vital site information (e.g., location coordinates, dominant species, land-use history) is also displayed in the new system. The data system provides numerous ways to explore and extract data. Searches can be done by site, location, measurement status, available data products, vegetation types, and by reported measurements just to name a few. Data can be accessed through the links to full data sets reported by a site, organized by types of data products, or by creating customized datasets based on user search criteria. The new AmeriFlux download module contains features intended to ease compliance of the AmeriFlux fair-use data policy, acknowledge the contributions of submitting

  6. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    Peregrine, NREL's high-performance computing system, comprises several classes of nodes that users access. Login nodes: Peregrine has four login nodes, each of which has Intel E5-series processors; in addition to the /scratch file systems, the /mss file system is mounted on all login nodes. Compute nodes: Peregrine has 2592 compute nodes.

  7. 10 CFR 35.457 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    10 CFR 35.457 (Title 10, Energy; Nuclear Regulatory Commission; Medical Use of Byproduct Material; Manual Brachytherapy), Therapy-related computer systems: The licensee shall perform acceptance testing on the treatment planning...

  8. User Instructions for the Systems Assessment Capability, Rev. 0, Computer Codes Volume 2: Impact Modules

    International Nuclear Information System (INIS)

    Eslinger, Paul W.; Arimescu, Carmen; Kanyid, Beverly A.; Miley, Terri B.

    2001-01-01

    One activity of the Department of Energy's Groundwater/Vadose Zone Integration Project is an assessment of cumulative impacts from Hanford Site wastes on the subsurface environment and the Columbia River. Through the application of a system assessment capability (SAC), decisions for each cleanup and disposal action will be able to take into account the composite effect of other cleanup and disposal actions. The SAC has developed a suite of computer programs to simulate the migration of contaminants (analytes) present on the Hanford Site and to assess the potential impacts of the analytes, including dose to humans, socio-cultural impacts, economic impacts, and ecological impacts. The general approach to handling uncertainty in the SAC computer codes is a Monte Carlo approach. Conceptually, one generates a value for every stochastic parameter in the code (the entire sequence of modules from inventory through transport and impacts) and then executes the simulation, obtaining an output value, or result. This document provides user instructions for the SAC codes that generate human, ecological, economic, and cultural impacts.
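
    The Monte Carlo approach described here has a simple skeleton: draw one value per stochastic parameter, run the whole module chain, and repeat. The Python sketch below shows that skeleton; the parameter distributions and the stand-in model are invented for illustration and are not SAC inputs.

        # Sketch of the Monte Carlo scheme described above: draw one value
        # for every stochastic parameter, run the module chain once, and
        # collect the result; repeat over many realisations. Distributions
        # and the transport/impact stand-in are placeholders, not SAC code.
        import random

        PARAMETERS = {
            "kd_sorption": lambda: random.lognormvariate(mu=0.0, sigma=0.5),
            "recharge_rate": lambda: random.uniform(0.5, 5.0),  # mm/yr, assumed
        }

        def run_chain(p):
            # Stand-in for the inventory -> transport -> impact modules.
            return p["recharge_rate"] / p["kd_sorption"]

        results = []
        for _ in range(1000):
            sample = {name: draw() for name, draw in PARAMETERS.items()}
            results.append(run_chain(sample))

        results.sort()
        print("median impact metric:", results[len(results) // 2])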

  9. Configuring a computer-controlled bar system

    OpenAIRE

    Šuštaršič, Nejc

    2010-01-01

    The principal goal of my diploma thesis is the creation of an application for configuring computer-controlled beverage-dispensing systems. In the preamble of my thesis I present the theoretical background of point-of-sale systems and beverage-dispensing systems, which is required for an understanding of the target problem domain. As in many other fields, computer technologies entered the field of managing bars and restaurants quite some time ago. Basic components of every bar or restaurant a...

  10. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  11. Development of a portable computed tomographic scanner for on-line imaging of industrial piping systems

    International Nuclear Information System (INIS)

    Jaafar Abdullah; Mohd Arif Hamzah; Mohd Soyapi Mohd Yusof; Mohd Fitri Abdul Rahman; Fadil Ismail; Rasif Mohd Zain

    2003-01-01

    Computed tomography (CT) technology is being increasingly developed for industrial application. This paper presents the development of a portable computed tomographic scanner for on-line imaging of industrial piping systems. The theoretical approach, the system hardware, the data acquisition system and the adopted algorithm for image reconstruction are discussed. The scanner has great potential for determining the extent of corrosion under insulation (CUI), detecting blockages, measuring the thickness of deposits built up on pipe walls, and improving the understanding of material flow in pipelines. (Author)
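
    As a flavour of what the adopted reconstruction algorithm has to do, the sketch below implements plain (unfiltered) back-projection with NumPy; a practical scanner would use filtered back-projection or an iterative method, and the geometry, sizes and data here are illustrative only.

        # Sketch of simple (unfiltered) back-projection, the basic step
        # behind tomographic reconstruction of a pipe cross-section.
        # Geometry and data are illustrative; real scanners filter the
        # projections first or use iterative reconstruction.
        import numpy as np

        def backproject(sinogram, angles_deg, size):
            # sinogram: one row of detector readings per projection angle.
            img = np.zeros((size, size))
            xs = np.arange(size) - size / 2
            X, Y = np.meshgrid(xs, xs)
            for row, theta in zip(sinogram, np.deg2rad(angles_deg)):
                # Detector coordinate of every pixel for this angle.
                s = X * np.cos(theta) + Y * np.sin(theta) + size / 2
                idx = np.clip(s.astype(int), 0, len(row) - 1)
                img += row[idx]             # smear each reading back
            return img / len(angles_deg)

        angles = np.arange(0, 180, 6)                 # 30 projections
        sino = np.random.rand(len(angles), 64)        # stand-in readings
        image = backproject(sino, angles, size=64)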

  12. Computer control of the titanium getter system on the tandem mirror experiment-upgrade (TMX-U)

    International Nuclear Information System (INIS)

    McAlice, A.J.; Bork, R.G.; Clower, C.A.; Moore, T.L.; Lang, D.D.; Pico, R.E.

    1983-01-01

    Gettering has been a standard technique for achieving high-quality vacuum in fusion experiments for some time. On Lawrence Livermore National Laboratory's Tandem Mirror Experiment-Upgrade (TMX-U), an extensive gettering system is utilized with liquid-nitrogen-cooled panels to provide the fast pumping during each physics experiment. The getter wires are an 85% titanium and 15% tantalum alloy directly heated by an electrical current. TMX-U has 162 getter power-supply channels; each channel supplies approximately 106 A of regulated power to each getter for a 60-s cycle. In the vacuum vessel, the getter wires are organized into poles, or arrays. On each pole there are six getter wires, each cabled to the exterior of the vessel. This arrangement allows the power supplies to be switched from getter wire to getter wire as the individual wires deteriorate after 200 to 300 gettering cycles. To control the getter power supplies, we will install a computer system to operate the system and document the performance of each getter circuit. This computer system will control the 162 power supplies via a Computer Automated Measurement and Control (CAMAC) architecture with a fiber-optic serial highway. Getter-wire history will be stored on the built-in 10-megabyte disc drive, with new entries backed up daily on a floppy disc. Overall, this system will allow positive tracking of getter-wire condition, document the total gettering performance, and predict getter maintenance/changeover cycles. How we will employ the computer system to enhance the getter system is the subject of this paper.

  13. Real time computer system with distributed microprocessors

    International Nuclear Information System (INIS)

    Heger, D.; Steusloff, H.; Syrbe, M.

    1979-01-01

    The usual centralized structure of computer systems, especially of process computer systems, cannot sufficiently exploit the progress of very-large-scale-integration semiconductor technology with respect to increasing reliability and performance and to decreasing expense, especially for the external periphery. This, together with the increasing demands on process control systems, has led the authors to re-examine the structure of such systems and to adapt it to the new environment. Computer systems with distributed, optical-fibre-coupled microprocessors allow very favourable problem-solving with decentrally controlled bus lines and functional redundancy with automatic fault diagnosis and reconfiguration. A suitable programming system supports these hardware properties: PEARL for multicomputer systems, a dynamic loader, and processor and network operating systems. The necessary design principles are established mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was built, supported by the results of two PDV projects (modular operating systems, input/output colour-screen system as control panel), and tested by applying the system to the control of 28 pit furnaces of a steelworks. (orig.)

  14. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  15. Fuzzy systems and soft computing in nuclear engineering

    International Nuclear Information System (INIS)

    Ruan, D.

    2000-01-01

    This book is an organized, edited collection of twenty-one contributed chapters covering nuclear engineering applications of fuzzy systems, neural networks, genetic algorithms and other soft computing techniques. All chapters are either updated reviews or original contributions by leading researchers, written exclusively for this volume. The volume highlights the advantages of applying fuzzy systems and soft computing in nuclear engineering, which can be viewed as complementary to traditional methods. As a result, fuzzy sets and soft computing provide a powerful tool for solving the intricate problems arising in nuclear engineering. Each chapter of the book is self-contained and also indicates future research directions on its topic of applications of fuzzy systems and soft computing in nuclear engineering. (orig.)

  16. Computer systems and nuclear industry

    International Nuclear Information System (INIS)

    Nkaoua, Th.; Poizat, F.; Augueres, M.J.

    1999-01-01

    This article deals with computer systems in the nuclear industry. In most nuclear facilities it is necessary to handle a great deal of data and many actions in order to help the plant operator drive and control physical processes and to assure safety. The design of reactors requires reliable computer codes able to simulate neutronic, mechanical, or thermo-hydraulic behaviour. Calculations and simulations play an important role in safety analysis. In each of these domains, computer systems have progressively emerged as efficient tools to challenge and master complexity. (A.C.)

  17. Context-aware computing and self-managing systems

    CERN Document Server

    Dargie, Waltenegus

    2009-01-01

    Bringing together an extensively researched area with an emerging research issue, Context-Aware Computing and Self-Managing Systems presents the core contributions of context-aware computing in the development of self-managing systems, including devices, applications, middleware, and networks. The expert contributors reveal the usefulness of context-aware computing in developing autonomous systems that have practical application in the real world.The first chapter of the book identifies features that are common to both context-aware computing and autonomous computing. It offers a basic definit

  18. Laboratory information management system at the Hanford Site

    Energy Technology Data Exchange (ETDEWEB)

    Leggett, W.; Barth, D.; Ibsen, T.; Newman, B.

    1994-03-01

    In January of 1994 an important new technology was brought on line to help in the monumental waste management and environmental restoration work at the Hanford Site. Cleanup at the Hanford Site depends on analytical chemistry information to identify contaminants, design and monitor cleanup processes, assure worker safety, evaluate progress, and prove completion. The new technology, a laboratory information management system (LIMS) called "LABCORE," provides the latest systems to organize and communicate the analytical tasks: track work and samples; collect and process data; prepare reports; and store data in readily accessible electronic form.

  19. Laboratory information management system at the Hanford Site

    International Nuclear Information System (INIS)

    Leggett, W.; Barth, D.; Ibsen, T.; Newman, B.

    1994-03-01

    In January of 1994 an important new technology was brought on line to help in the monumental waste management and environmental restoration work at the Hanford Site. Cleanup at the Hanford Site depends on analytical chemistry information to identify contaminants, design and monitor cleanup processes, assure worker safety, evaluate progress, and prove completion. The new technology, a laboratory information management system (LIMS) called "LABCORE," provides the latest systems to organize and communicate the analytical tasks: track work and samples; collect and process data; prepare reports; and store data in readily accessible electronic form.

  20. Software Quality Measurement for Distributed Systems. Volume 3. Distributed Computing Systems: Impact on Software Quality.

    Science.gov (United States)

    1983-07-01

    This volume examines the impact of distributed computing systems on software quality; only coversheet fragments of the abstract survive in this record. Topics discussed include "C3I Application", "Space Systems Network", "Need for Distributed Database Management", and "Adaptive Routing", the last of which is discussed together with data reduction, buffering, encryption, and error detection and correction functions. Examples of such data streams include imagery data, video

  1. 75 FR 76038 - Zach System Corporation a Subdivision of Zambon Company, SPA Including On-Site Leased Workers of...

    Science.gov (United States)

    2010-12-07

    ... Subdivision of Zambon Company, SPA Including On-Site Leased Workers of Turner Industries and Go Johnson, La..., including on-site leased workers from Turner Industries and Go Johnson, La Porte, Texas. The Department's... investigation revealed that Zach System Corporation is a subdivision of Zambon Company, SPA, not Zach System SPA...

  2. Computational system for geostatistical analysis

    Directory of Open Access Journals (Sweden)

    Vendrusculo Laurimar Gonçalves

    2004-01-01

    Full Text Available Geostatistics identifies the spatial structure of variables representing several phenomena, and its use is becoming more intense in agricultural activities. This paper describes a computer program, based on Windows interfaces (Borland Delphi), which performs spatial analyses of datasets through geostatistical tools: classical statistical calculations, average, cross- and directional semivariograms, simple kriging estimates, and jackknifing calculations. A published dataset of soil carbon and nitrogen was used to validate the system. The system was useful for the geostatistical analysis process, replacing the manipulation of computational routines in an MS-DOS environment. The Windows development approach allowed the user to model the semivariogram graphically with a greater degree of interaction, functionality rarely available in similar programs. Given its quick prototyping and the simplicity of incorporating correlated routines, the Delphi environment offers the main advantage of permitting the evolution of this system.
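
    The central quantity such a program computes is the empirical semivariogram, gamma(h) = sum of [z(x_i) - z(x_i + h)]^2 over the N(h) sample pairs separated by (roughly) lag h, divided by 2 N(h). A minimal NumPy version with naive distance binning is sketched below; the lag bins, tolerance and synthetic data are illustrative.

        # Sketch of the classical empirical semivariogram: for each lag h,
        # average the squared differences of all sample pairs whose
        # separation distance is within `tol` of h. Binning is simplistic.
        import numpy as np

        def empirical_semivariogram(coords, values, lags, tol):
            coords, values = np.asarray(coords), np.asarray(values)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sqdiff = (values[:, None] - values[None, :]) ** 2
            gamma = []
            for h in lags:
                mask = np.triu(np.abs(d - h) <= tol, k=1)  # pairs near lag h
                n = mask.sum()
                gamma.append(sqdiff[mask].sum() / (2 * n) if n else np.nan)
            return np.array(gamma)

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 100, size=(50, 2))    # e.g. soil sample sites
        z = rng.normal(size=50)                    # e.g. soil carbon values
        print(empirical_semivariogram(pts, z, lags=[10, 20, 30], tol=5))

    A kriging step would then fit a model (spherical, exponential, etc.) to these binned values and use it to weight neighbouring samples when interpolating.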

  3. An integrated system for land resources supervision based on the IoT and cloud computing

    Science.gov (United States)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  4. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  5. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...
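
    Both records describe FTS as the low-level service that moves file sets between sites "while allowing participating sites to control the network resource usage". The fragment below is a conceptual Python sketch of that single idea: per-site transfer slots enforced with semaphores. The site names, slot counts, and the transfer coroutine are invented for illustration and have nothing to do with the real FTS implementation or its API.

```python
import asyncio

# Hypothetical per-site concurrency limits standing in for FTS channel
# configuration; the names and numbers are illustrative only.
SITE_SLOTS = {"T1_ES_PIC": 4, "T2_IT_Rome": 2}
semaphores = {site: asyncio.Semaphore(n) for site, n in SITE_SLOTS.items()}

async def transfer(src, dst, filename):
    # A transfer consumes one slot at each endpoint, so a busy site
    # throttles every channel that touches it.
    async with semaphores[src], semaphores[dst]:
        await asyncio.sleep(0.1)   # placeholder for the actual file movement
        return f"{filename}: {src} -> {dst} done"

async def main():
    jobs = [transfer("T1_ES_PIC", "T2_IT_Rome", f"file_{i}.root")
            for i in range(8)]
    for result in await asyncio.gather(*jobs):
        print(result)

asyncio.run(main())
```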

  6. A method for site-dependent planning and its application to the preselection of sites for thermal power plants

    International Nuclear Information System (INIS)

    Friedrich, R.

    1979-01-01

    In the first part of the paper, a computer-aided method for dealing with the problems of site-dependent planning is described. By means of the modular program system COPLAN, complex combinations of locally varying data can be processed rapidly and with spatial accuracy. The system consists of data input, numerous processing options, and graphical representation of the results. The second part shows the application of the system to the preselection of sites for thermal power plants. By means of a utility analysis method, the suitability of each point in (the German Federal State of) Baden-Wuerttemberg as a power plant site is determined. Compared with the currently used methods of preliminary site selection, the present method is distinguished by area-covering calculation, the possibility of weighing advantages against disadvantages, and its transparency and verifiability. The paper establishes and considers criteria from the fields of operational economy, safety, ecology, and district planning. The computations are performed for different orders of preference. It is shown that there are regions of sites which are acceptable with respect to a large spectrum of objective systems. (orig.) [de
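
    The area-covering utility analysis described above, which scores every point of the region on several weighted criteria and balances advantages against disadvantages, can be sketched as a weighted-overlay computation. The following Python fragment uses random stand-in criterion grids and invented weights; it mirrors the idea only, not COPLAN itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in criterion grids over the region, normalised to [0, 1]; real
# inputs would be distance to cooling water, population density, etc.
cooling_water = rng.random((50, 50))
population    = rng.random((50, 50))   # already inverted: high = sparse
grid_access   = rng.random((50, 50))

# Invented preference weights for one "order of preference".
weights = {"water": 0.5, "population": 0.3, "grid": 0.2}

# Area-covering utility score: a weighted sum per grid cell, so an
# advantage in one criterion can offset a disadvantage in another.
suitability = (weights["water"] * cooling_water
               + weights["population"] * population
               + weights["grid"] * grid_access)

# The best-ranked cells form the preselected candidate site regions.
order = np.argsort(suitability, axis=None)[::-1]
top_cells = np.column_stack(np.unravel_index(order[:10], suitability.shape))
print(top_cells)
```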

  7. Studies on computer-aided diagnosis systems for chest radiographs and mammograms (in Japanese)

    International Nuclear Information System (INIS)

    Hara, Takeshi

    2001-01-01

    This thesis describes computer-aided diagnosis (CAD) systems for chest radiographs and mammograms. Preprocessing and image processing methods for each CAD system include dynamic range compression and region segmentation techniques. A new pattern recognition technique combines genetic algorithms with template matching methods to detect lung nodules. A genetic algorithm was employed to select the optimal shape of simulated nodular shadows to be compared with real lesions on digitized chest images. Detection performance was evaluated using 332 chest radiographs from the database of the Japanese Society of Radiological Technology. Our average true-positive rate was 72.8% with an average of 11 false-positive findings per image. A new detection method using high-resolution digital images with 0.05 mm sampling is also proposed for the mammogram CAD system to detect very small microcalcifications. An automated classification method uses feature extraction based on fractal dimension analysis of masses. Using over 200 cases to evaluate the detection of mammographic masses and calcifications, the detection rates for masses and microcalcifications were 87% and 96%, with 1.5 and 1.8 false-positive findings, respectively. For the classification of benign versus malignant lesions, the Az values (areas under the ROC curves) of the classification schemes for masses and microcalcifications were 0.84 and 0.89, respectively. To demonstrate the practicality of these CAD systems in a computer-network environment, we propose to use the mammogram CAD system via the Internet and WWW. A common gateway interface and server-client approach for the CAD system via the Internet will permit display of the CAD results on ordinary computers.
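
    The nodule-detection step above couples a genetic algorithm to template matching: candidate simulated nodules are scored by their similarity to the image, and the fitter candidates survive. The Python sketch below shows that coupling on a synthetic patch, with a one-parameter Gaussian "nodule" template; it is a toy reconstruction of the idea, not the author's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def nodule_template(size, sigma):
    """Simulated nodular shadow: a 2D Gaussian blob."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

def fitness(sigma, patch):
    """Normalised cross-correlation between template and image patch."""
    t = nodule_template(patch.shape[0], sigma).ravel()
    p = patch.ravel()
    t, p = t - t.mean(), p - p.mean()
    return float(t @ p / (np.linalg.norm(t) * np.linalg.norm(p) + 1e-9))

# A "lesion" patch to match (synthetic: sigma = 4 plus noise).
patch = nodule_template(21, 4.0) + rng.normal(0, 0.05, (21, 21))

# Minimal GA: mutate sigma, keep the fitter half each generation.
pop = rng.uniform(1.0, 10.0, 20)
for _ in range(30):
    children = np.clip(pop + rng.normal(0, 0.5, pop.size), 0.5, 12.0)
    merged = np.concatenate([pop, children])
    scores = np.array([fitness(s, patch) for s in merged])
    pop = merged[np.argsort(scores)[-20:]]     # survivors, best last
print("best sigma:", pop[-1])
```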

  8. Dashboard applications to monitor experiment activities at sites

    Energy Technology Data Exchange (ETDEWEB)

    Andreeva, Julia; Gaidioz, Benjamin; Grigoras, Costin; Kokoszkiewicz, Lukasz; Lanciotti, Elisa; Rocha, Ricardo; Saiz, Pablo; Santinelli, Roberto; Sidorova, Irina; Sciaba, Andrea [CERN, European Organization for Nuclear Research (Switzerland)]; Belforte, Stefano [INFN Trieste (Italy)]; Boehm, Max [EDS, an HP Company, Plano, TX (United States)]; Casajus, Adrian [Universitat de Barcelona (Spain)]; Flix, Josep [PIC, Port d'Informació Científica, Bellaterra (Spain)]; Tsaregorodtsev, Andrei, E-mail: Elisa.Lanciotti@cern.ch, E-mail: Pablo.Saiz@cern.ch [CPPM Marseille (France)]

    2010-04-01

    In the framework of a distributed computing environment such as WLCG, monitoring plays a key role in keeping under control the activities going on at sites located in different countries and involving people based at many different sites. To cope with such a large-scale heterogeneous infrastructure, it is necessary to have monitoring tools that provide a complete and reliable view of the overall performance of the sites. Moreover, the structure of a monitoring system critically depends on the object being monitored and on the users it is addressed to. In this article we describe two different monitoring systems, both aimed at monitoring activities and services provided in the WLCG framework but designed to meet the requirements of different users: Site Status Board gives an overall view of the services available at all the sites supporting an experiment, whereas Siteview provides a complete view of all the activities going on at a site, for all the experiments supported by the site.

  9. Dashboard applications to monitor experiment activities at sites

    International Nuclear Information System (INIS)

    Andreeva, Julia; Gaidioz, Benjamin; Grigoras, Costin; Kokoszkiewicz, Lukasz; Lanciotti, Elisa; Rocha, Ricardo; Saiz, Pablo; Santinelli, Roberto; Sidorova, Irina; Sciaba, Andrea; Belforte, Stefano; Boehm, Max; Casajus, Adrian; Flix, Josep; Tsaregorodtsev, Andrei

    2010-01-01

    In the framework of a distributed computing environment such as WLCG, monitoring plays a key role in keeping under control the activities going on at sites located in different countries and involving people based at many different sites. To cope with such a large-scale heterogeneous infrastructure, it is necessary to have monitoring tools that provide a complete and reliable view of the overall performance of the sites. Moreover, the structure of a monitoring system critically depends on the object being monitored and on the users it is addressed to. In this article we describe two different monitoring systems, both aimed at monitoring activities and services provided in the WLCG framework but designed to meet the requirements of different users: Site Status Board gives an overall view of the services available at all the sites supporting an experiment, whereas Siteview provides a complete view of all the activities going on at a site, for all the experiments supported by the site.

  10. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well-connected sites (CERN and the T1s) covers important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks have evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  11. New Computer Account Management System on 22 November

    CERN Multimedia

    IT Department

    2010-01-01

    On 22 November, the current management system called CRA was replaced by a new self-service tool available on a Web Portal. The End-Users can now manage their computer accounts and resources themselves through this Web Portal. The ServiceDesk will provide help or forward requests to the appropriate support line in case of specific requests. Account management tools The Account Management Portal allows you to: Manage your primary account; Change your password; Create and manage secondary and service accounts; Manage application and resource authorizations and settings; Find help and documentation concerning accounts and resources. Get Help In the event of any questions or problems, please contact the ServiceDesk (phone +41 22 767 8888 or it.servicedesk@cern.ch) The Account Management Team

  12. Conversion of Hanford site well locations to Washington coordinate system of 1983, South Zone 1991 (WCS83S)

    International Nuclear Information System (INIS)

    Burnett, R.A.; Tzemos, S.; Dietz, L.A.

    1993-12-01

    Past construction and survey practices have resulted in the use of multiple local coordinate systems for measuring and reporting the horizontal position of wells and other facilities and locations on the Hanford Site. This report describes the development of a coordinate transformation process and algorithm and its application to the conversion of the horizontal coordinates of Hanford site wells from the various local coordinate systems and datums to a single standard coordinate system, the Washington Coordinate system of 1983, South Zone 1991 (WCS83S). The coordinate transformation algorithm, implemented as a computer program called CTRANS, uses standard two-dimensional translation, rotation, and scaling transformation equations and can be applied to any set of horizontal point locations. For each point to be transformed, the coefficients of the transformation equations are calculated locally, using the coordinates of the three nearest registration points (points with known locations in both coordinate systems). The report contains a discussion of efforts to verify and validate both the software and the well location data, a description of the methods used to estimate transformation and registration point accuracy, instructions for using the computer program, and a summary of the Hanford well conversion results for each local coordinate system and datum. Also included are the results of using recent U.S. Army Corps of Engineers survey data to obtain estimated measures of location errors in wells for which the local coordinate data source is undocumented, unverified, and therefore of unknown accuracy
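
    The transformation CTRANS applies can be reproduced in a few lines: with three registration points known in both systems, the six coefficients of a 2D affine transform (covering translation, rotation, and scaling) are determined exactly by a linear solve. The Python sketch below illustrates this; the coordinate values are invented for illustration and are not actual Hanford survey data.

```python
import numpy as np

def local_transform(src_pts, dst_pts):
    """Solve the 2D affine transform (translation, rotation, scaling)
    that maps three registration points known in both systems."""
    A = np.hstack([src_pts, np.ones((3, 1))])    # 3x3 design matrix [x, y, 1]
    coef = np.linalg.solve(A, dst_pts)           # 3x2 coefficient matrix
    return lambda p: np.hstack([p, np.ones((len(p), 1))]) @ coef

# Registration points: local plant coordinates -> state-plane coordinates
# (the numbers below are made up).
local = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
state = np.array([[575300.0, 135200.0],
                  [575900.0, 136000.0],
                  [574500.0, 136000.0]])

to_state = local_transform(local, state)
print(to_state(np.array([[500.0, 500.0]])))      # well location to convert
```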

  13. Upper Basalt-Confined Aquifer System in the Southern Hanford Site

    International Nuclear Information System (INIS)

    Thorne, P.

    1999-01-01

    The 1990 DOE Tiger Team Finding GW/CF-202 found that the hydrogeologic regime at the Hanford Site was inadequately characterized. This finding also identified the need for completing a study of the confined aquifer in the central and southern portions of the Hanford Site. The southern portion of the site is of particular interest because hydraulic-head patterns in the upper basalt-confined aquifer system indicate that groundwater from the Hanford central plateau area, where contaminants have been found in the aquifer, flows southeast toward the southern site boundary. This results in a potential for offsite migration of contaminants through the upper basalt-confined aquifer system. Based on the review presented in this report, available hydrogeologic characterization information for the upper basalt-confined aquifer system in this area is considered adequate to close the action item. Recently drilled offsite wells have provided additional information on the structure of the aquifer system in and near the southern part of the Hanford Site. Information on hydraulic properties, hydrochemistry, hydraulic heads and flow directions for the upper basalt-confined aquifer system has been re-examined and compiled in recent reports including Spane and Raymond (1993), Spane and Vermeul ( 1994), and Spane and Webber (1995)

  14. Methods and apparatuses for information analysis on shared and distributed computing systems

    Science.gov (United States)

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
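
    A minimal sketch of the pattern this record describes: each process computes term statistics for its own distinct set of documents, the local statistics are contributed to a global set, and each process then derives a major term set from its assigned portion of the global vocabulary. The Python fragment below uses toy documents and illustrates the claimed workflow only; it is not the patented implementation.

```python
from collections import Counter
from multiprocessing import Pool

def local_term_stats(docs):
    """Per-process step: term statistics for one distinct document set."""
    stats = Counter()
    for doc in docs:
        stats.update(doc.lower().split())
    return stats

def major_terms(global_stats, vocab_slice, k=2):
    """Rank an assigned portion of the global vocabulary by frequency."""
    return sorted(vocab_slice, key=lambda t: -global_stats[t])[:k]

if __name__ == "__main__":
    # Distinct document sets, one per process.
    partitions = [["the cat sat", "the cat ran"],
                  ["a dog ran", "the dog sat"],
                  ["fish swim", "cats chase fish"]]
    with Pool(3) as pool:
        local_sets = pool.map(local_term_stats, partitions)
    # Contribute the local statistics to the global set.
    global_stats = Counter()
    for s in local_sets:
        global_stats.update(s)
    # Split the vocabulary and extract a major term set per portion.
    vocab = sorted(global_stats)
    half = len(vocab) // 2
    print(major_terms(global_stats, vocab[:half]),
          major_terms(global_stats, vocab[half:]))
```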

  15. Computer system organization the B5700/B6700 series

    CERN Document Server

    Organick, Elliott I

    1973-01-01

    Computer System Organization: The B5700/B6700 Series focuses on the organization of the B5700/B6700 Series developed by Burroughs Corp. More specifically, it examines how computer systems can (or should) be organized to support, and hence make more efficient, the running of computer programs that evolve with characteristically similar information structures.Comprised of nine chapters, this book begins with a background on the development of the B5700/B6700 operating systems, paying particular attention to their hardware/software architecture. The discussion then turns to the block-structured p

  16. An external irradiation treatment planning system installed on a personal computer. 137

    International Nuclear Information System (INIS)

    Kunieda, Etso; Ogawa, Koichi; Mita, Kazumasa; Sekiguchi, Kozo; Wada, Tadashi; Hashimoto, Shyozo

    1987-01-01

    A compact and practical treatment planning system for external photon therapy has been developed for use on a desk-top personal computer. The system calculates the dose distributions of inhomogeneous density fields by using the CT value of each pixel and displays isodose curves on the CRT superimposed on the gray-scale CT image, or on a hard copy. Inhomogeneity correction is based on the TAR method, where the path length from the calculation point to the surface is determined by summing up the electron density derived from the CT values of the pixels on the path. Wedge filter correction is also available by using stored geometric data. The contour of the patient is acquired by tracing the CT image on the light panel of the digitizer, or directly from the digital CT data. Though some critical parts of the programs are written in machine language, the system is mostly in BASIC and C. The minimum required hardware consists of an MS-DOS based personal computer, a color CRT display, an 8-inch floppy disk drive and a digitizer, all of which are generally available in Japan at reasonable cost. Tests were carried out in homogeneous and inhomogeneous density phantoms to evaluate the accuracy of the calculated dose, and showed reasonable results compared with other commercially available treatment planning systems. The overall calculation time is satisfactory for multiple-beam calculations. 5 refs.; 3 figs
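
    The TAR-based inhomogeneity correction described above reduces to a radiological (effective) path length: relative electron density, derived pixel by pixel from CT values, is summed along the ray from the surface to the calculation point. The Python sketch below illustrates that summation with a crude linear HU-to-density mapping and a synthetic phantom; both are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def effective_depth(ct_slice, entry, point, spacing_mm, n=200):
    """Radiological path length from the surface entry point to the
    calculation point: sample CT-derived relative electron density
    along the ray and sum it up."""
    ij = np.linspace(entry, point, n)                    # samples on the ray
    vals = ct_slice[ij[:, 0].astype(int), ij[:, 1].astype(int)]
    rel_e_density = 1.0 + vals / 1000.0                  # crude HU -> density
    step = np.linalg.norm((point - entry) * spacing_mm) / n
    return float(rel_e_density.sum() * step)

# Water-equivalent phantom (0 HU) with a lung-like insert (-700 HU).
ct = np.zeros((100, 100))
ct[30:50, 40:60] = -700.0
d_eff = effective_depth(ct, entry=np.array([0.0, 50.0]),
                        point=np.array([70.0, 50.0]), spacing_mm=1.0)
print(f"effective depth: {d_eff:.1f} mm")  # < 70 mm because of the insert
```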

  17. 'Micro-8' micro-computer system

    International Nuclear Information System (INIS)

    Yagi, Hideyuki; Nakahara, Yoshinori; Yamada, Takayuki; Takeuchi, Norio; Koyama, Kinji

    1978-08-01

    The micro-computer Micro-8 system has been developed to organize a data exchange network between various instruments and a computer group including a large computer system. Used for packet exchangers and terminal controllers, the system consists of ten kinds of standard boards including a CPU board with INTEL-8080 one-chip-processor. CPU architecture, BUS architecture, interrupt control, and standard-boards function are explained in circuit block diagrams. Operations of the basic I/O device, digital I/O board and communication adapter are described with definitions of the interrupt ramp status, I/O command, I/O mask, data register, etc. In the appendixes are circuit drawings, INTEL-8080 micro-processor specifications, BUS connections, I/O address mappings, jumper connections of address selection, and interface connections. (author)

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. [Figure 3: Number of events per month (data)] In LS1, our emphasis is on increasing the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  19. Surface system Forsmark. Site descriptive modelling SDM-Site Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Lindborg, Tobias [ed.]

    2008-12-15

    SKB has undertaken site characterization of two different areas, Forsmark and Laxemar-Simpevarp, in order to find a suitable location for a geological repository for spent nuclear fuel. This report focuses on the site descriptive modelling of the surface system at Forsmark. The characterization of the surface system at the site was primarily made by identifying and describing important properties in different parts of the surface system, properties concerning e.g. hydrology and climate, Quaternary deposits and soils, hydrochemistry, vegetation, ecosystem functions, but also current and historical land use. The report presents available input data, methodology for data evaluation and modelling, and resulting models for each of the different disciplines. Results from the modelling of the surface system are also integrated with results from modelling of the deep bedrock system. The Forsmark site is located within the municipality of Oesthammar, about 120 km north of Stockholm. The investigated area is located along the shoreline of Oeregrundsgrepen, a funnel-shaped bay of the Baltic Sea. The area is characterized by small-scale topographic variations and is almost entirely located at altitudes lower than 20 metres above sea level. The Quaternary deposits in the area are dominated by till, characterized by a rich content of calcite which was transported by the glacier ice to the area from the sedimentary bedrock of Gaevlebukten about 100 km north of Forsmark. As a result, the surface waters and shallow groundwater at Forsmark are characterized by high pH values and high concentrations of certain major constituents, especially calcium and bicarbonate. The annual precipitation and runoff are 560 and 150 mm, respectively. The lakes are small and shallow, with mean and maximum depths ranging from approximately 0.1 to 1 m and 0.4 to 2 m. Sea water flows into the most low-lying lakes during events giving rise to very high sea levels. Wetlands are frequent and cover 25 to 35

  20. Terahertz Computed Tomography of NASA Thermal Protection System Materials

    Science.gov (United States)

    Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.

    2011-01-01

    A terahertz axial computed tomography system has been developed that uses time-domain measurements to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 cubic meters (1 cubic foot), without the safety concerns associated with x-ray computed tomography. In this study, the system is evaluated for its ability to detect and characterize flat-bottom holes, drilled holes, and embedded voids in foam materials utilized as thermal protection on the external fuel tanks for the Space Shuttle. X-ray micro-computed tomography was also performed on the samples to compare against the terahertz computed tomography results and to better define the embedded voids. Limits of detectability based on depth and size for the samples used in this study are loosely defined. Image sharpness and morphology characterization ability for terahertz computed tomography are qualitatively described.

  1. Life system modeling and intelligent computing. Pt. II. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang; Irwin, George W. (eds.) [Belfast Queen's Univ. (United Kingdom). School of Electronics, Electrical Engineering and Computer Science]; Fei, Minrui; Jia, Li [Shanghai Univ. (China). School of Mechatronical Engineering and Automation]

    2010-07-01

    This book is part II of a two-volume work that contains the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2010 and the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2010, held in Wuxi, China, in September 2010. The 194 revised full papers presented were carefully reviewed and selected from over 880 submissions and recommended for publication by Springer in two volumes of Lecture Notes in Computer Science (LNCS) and one volume of Lecture Notes in Bioinformatics (LNBI). This particular volume of Lecture Notes in Computer Science (LNCS) includes 56 papers covering 7 relevant topics, organized in topical sections on advanced evolutionary computing theory and algorithms; advanced neural network and fuzzy system theory and algorithms; modeling and simulation of societies and collective behavior; biomedical signal processing, imaging, and visualization; intelligent computing and control in distributed power generation systems; intelligent methods in power and energy infrastructure development; intelligent modeling, monitoring, and control of complex nonlinear systems. (orig.)

  2. A study on the nuclear computer codes installation and management system

    International Nuclear Information System (INIS)

    Kim, Yeon Seung; Huh, Young Hwan; Kim, Hee Kyung; Kang, Byung Heon; Kim, Ko Ryeo; Suh, Soong Hyok; Choi, Young Gil; Lee, Jong Bok

    1990-12-01

    Since 1987, a number of technology transfers related to nuclear power plants have been performed from C-E for the YGN 3 and 4 construction. Among them, installation and management of the computer codes for the YGN 3 and 4 fuel and nuclear steam supply system was one of the most important projects. The main objectives of this project are to establish the nuclear computer code management system, to develop QA procedures for nuclear codes, to secure nuclear code reliability, and to extend technical applicability, including user-oriented utility programs for the nuclear codes. The work performed this year produced 215 transmittal packages for nuclear code installation, including backup magnetic tapes and microfiche for software quality assurance. Lastly, for easy reference to nuclear code information, we present a list of code names and information on the codes introduced from C-E. (Author)

  3. Installation and management of the SPS and LEP control system computers

    International Nuclear Information System (INIS)

    Bland, Alastair

    1994-01-01

    Control of the CERN SPS and LEP accelerators and service equipment on the two CERN main sites is performed via workstations, file servers, Process Control Assemblies (PCAs) and Device Stub Controllers (DSCs). This paper describes the methods and tools that have been developed to manage the file servers, PCAs and DSCs since the LEP startup in 1989. There are five operational DECstation 5000s used as file servers and boot servers for the PCAs and DSCs. The PCAs consist of 90 SCO Xenix 386 PCs, 40 LynxOS 486 PCs and more than 40 older NORD 100s. The DSCs consist of 90 OS-968030 VME crates and 10 LynxOS 68030 VME crates. In addition there are over 100 development systems. The controls group is responsible for installing the computers, starting all the user processes and ensuring that the computers and the processes run correctly. The operators in the SPS/LEP control room and the Services control room have a Motif-based X window program which gives them, in real time, the state of all the computers and allows them to solve problems or reboot them. ((orig.))

  4. Reviews of computing technology: Software overview

    Energy Technology Data Exchange (ETDEWEB)

    Hartshorn, W.R.; Johnson, A.L.

    1994-01-05

    The Savannah River Site Computing Architecture states that the site computing environment will be standards-based, data-driven, and workstation-oriented. Larger server systems deliver needed information to users in a client-server relationship. Goals of the Architecture include utilizing computing resources effectively, maintaining a high level of data integrity, developing a robust infrastructure, and storing data in such a way as to promote accessibility and usability. This document describes the current storage environment at Savannah River Site (SRS) and presents some of the problems that will be faced and strategies that are planned over the next few years.

  5. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    Science.gov (United States)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human error, and the quality of the documentation depends on the qualification of the archaeologist on site. The use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of acquiring up to 1 million points per second, providing a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces the Computer Aided System for Labelling Archaeological Excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  6. An integrated on-line system for the evaluation of ECG patterns with a small process computer

    International Nuclear Information System (INIS)

    Schoffa, G.; Eggenberger, O.; Krueger, G.; Karlsruhe Univ.

    1975-01-01

    This paper describes an on-line system for ECG processing with a small computer (8K memory) and a magnetic tape cassette for mass storage, capable of evaluating 30 ECG patterns per day in a twelve-lead system. The use of a small computer was made possible by a compact, easy-to-handle operating system and space-saving programs. The system described was specifically intended for use in smaller hospitals with a low number of ECGs per day, which do not allow economic operation of larger DP installations. The economy calculations, based on the break-even-point method with particular regard to installation, maintenance, and personnel costs, already show that a small computer operates economically at a rate of 5 ECGs per day. (orig.) [de

  7. Predicting the Metabolic Sites by Flavin-Containing Monooxygenase on Drug Molecules Using SVM Classification on Computed Quantum Mechanics and Circular Fingerprints Molecular Descriptors.

    Directory of Open Access Journals (Sweden)

    Chien-Wei Fu

    As an important enzyme in Phase I drug metabolism, the flavin-containing monooxygenase (FMO) also metabolizes some xenobiotics with soft nucleophiles. The site of metabolism (SOM) on a molecule is the site where the metabolic reaction is exerted by an enzyme. Accurate prediction of SOMs on drug molecules will assist the search for drug leads during the optimization process. Here, quantum mechanics features such as the condensed Fukui function, together with attributes from circular fingerprints (Molprint2D), are computed and classified using the support vector machine (SVM) to predict potential SOMs on a series of drugs that can be metabolized by FMO enzymes. The condensed Fukui function fA- represents the nucleophilicity of the central atom A, while the attributes from circular fingerprints account for the influence of neighbors on the central atom. The total number of FMO substrates and non-substrates collected in the study is 85; they are equally divided into training and test sets, each carrying roughly the same number of potential SOMs. However, only N-oxidation and S-oxidation features were considered in the prediction, since the available C-oxidation data were scarce. In the training process, the LibSVM package within WEKA and the option of 10-fold cross-validation were employed. The prediction performance on the test set, evaluated by accuracy, Matthews correlation coefficient, and area under the ROC curve, was 0.829, 0.659, and 0.877, respectively. This work shows that the SVM model built can accurately predict potential SOMs for drug molecules that are metabolizable by the FMO enzymes.
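
    The classification step is conventional enough to sketch: an SVM with 10-fold cross-validation, scored by accuracy, Matthews correlation coefficient, and ROC AUC, as in the study. In the Python fragment below the feature matrix is randomly generated stand-in data (the real descriptors are condensed Fukui functions and Molprint2D attributes), so the printed numbers will not reproduce the paper's results.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, matthews_corrcoef, roc_auc_score

rng = np.random.default_rng(0)

# Stand-in feature matrix: one row per candidate atom, columns mixing a
# Fukui-like descriptor with fingerprint-style neighbour attributes.
X = rng.normal(size=(85, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 85) > 0).astype(int)

clf = SVC(kernel="rbf", probability=True)
# 10-fold cross-validation, as in the study.
proba = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
pred = (proba >= 0.5).astype(int)

print("accuracy:", accuracy_score(y, pred))
print("MCC:     ", matthews_corrcoef(y, pred))
print("AUC:     ", roc_auc_score(y, proba))
```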

  8. DIII-D tokamak control and neutral beam computer system upgrades

    International Nuclear Information System (INIS)

    Penaflor, B.G.; McHarg, B.B.; Piglowski, D.A.; Pham, D.; Phillips, J.C.

    2004-01-01

    This paper covers recent computer system upgrades made to the DIII-D tokamak control and neutral beam computer systems. The systems responsible for monitoring and controlling the DIII-D tokamak and injecting neutral beam power have recently come online with new computing hardware and software. The new hardware and software have provided a number of significant improvements over the previous Modcomp AEG VME and accessware-based systems. These improvements include the incorporation of faster, less expensive, and more readily available computing hardware, which has provided performance increases of up to a factor of 20 over the prior systems. A more modern graphical user interface with advanced plotting capabilities has improved feedback to users on the operating status of the tokamak and neutral beam systems. The elimination of aging and unsupportable hardware and software has increased overall maintainability. The distinguishing characteristics of the new system include: (1) a PC-based computer platform running the Red Hat version of the Linux operating system; (2) a custom PCI-CAMAC software driver developed by General Atomics for the Kinetic Systems 2115 serial highway card; and (3) a custom-developed supervisory control and data acquisition (SCADA) software package based on Kylix, an inexpensive interactive development environment (IDE) tool from Borland Corporation. This paper provides specific details of the upgraded computer systems

  9. File-System Workload on a Scientific Multiprocessor

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors or their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or non-parallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

  10. Design and Study of a Next-Generation Computer-Assisted System for Transoral Laser Microsurgery

    Directory of Open Access Journals (Sweden)

    Nikhil Deshpande PhD

    2018-05-01

    Objective: To present a new computer-assisted system for improved usability, intuitiveness, efficiency, and controllability in transoral laser microsurgery (TLM). Study Design: Pilot technology feasibility study. Setting: A dedicated room with a simulated TLM surgical setup: surgical microscope, surgical laser system, instruments, ex vivo pig larynxes, and the computer-assisted system. Subjects and Methods: The computer-assisted laser microsurgery (CALM) system consists of a novel motorized laser micromanipulator and a tablet- and stylus-based control interface. The system setup includes the Leica 2 surgical microscope and the DEKA HiScan Surgical laser system. The system was validated through a first-of-its-kind observational study with 57 international surgeons with varied experience in TLM. The subjects performed real surgical tasks on ex vivo pig larynxes in a simulated TLM scenario. The qualitative aspects were established with a newly devised questionnaire assessing the usability, efficiency, and suitability of the system. Results: The surgeons evaluated the CALM system with an average score of 6.29 (out of 7) for ease of use and ease of learning, and an average score of 5.96 for controllability and safety. A score of 1.51 indicated reduced workload for the subjects. Of the 57 subjects, 41 stated that the CALM system allows better surgical quality than existing TLM systems. Conclusions: The CALM system augments the usability, controllability, and efficiency of TLM. It enhances ergonomics and accuracy beyond the current state of the art, potentially improving surgical safety and quality. The system offers intraoperative automated scanning of customized long incisions, achieving uniform resections at the surgical site.

  11. WHALE, a management tool for Tier-2 LCG sites

    Science.gov (United States)

    Barone, L. M.; Organtini, G.; Talamo, I. G.

    2012-12-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based, hierarchical, distributed computing facility, composed of more than 140 computing centres organized in 4 tiers by size and offer of services. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, such as job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a software tool actually used at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows administrators to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of a plugin to the input of the next one, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones (see the sketch after this record). The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the fields of job management, dataset transfer and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, such as closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the
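
    The pipe-like plugin architecture is easy to picture in WHALE's own language, Python. The sketch below is invented (the plugin names, node data, and admin action are placeholders, not WHALE code), but it shows the core idea: each plugin's output feeds the next plugin's input, so the worker-node example from the abstract becomes a three-stage pipeline.

```python
# Minimal sketch of a plugin pipeline in the spirit of WHALE (not its
# actual code): each plugin consumes the previous plugin's output.
from typing import Callable, Iterable

Plugin = Callable[[Iterable], Iterable]

def list_nodes(_):
    # In the real tool this stage would query the batch system.
    return [{"name": "wn01", "fail_rate": 0.9},
            {"name": "wn02", "fail_rate": 0.1},
            {"name": "wn03", "fail_rate": 0.7}]

def filter_failing(nodes, threshold=0.5):
    return [n for n in nodes if n["fail_rate"] > threshold]

def close_nodes(nodes):
    for n in nodes:
        print(f"closing {n['name']}")   # would call the batch admin command
    return nodes

def run_pipeline(plugins, data=None):
    for p in plugins:
        data = p(data)
    return data

# "Close all worker nodes with a job-failure rate above a threshold."
run_pipeline([list_nodes, filter_failing, close_nodes])
```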

  12. Opinions on Computing Education in Korean K-12 System: Higher Education Perspective

    Science.gov (United States)

    Kim, Dae-Kyoo; Jeong, Dongwon; Lu, Lunjin; Debnath, Debatosh; Ming, Hua

    2015-01-01

    The need for computing education in the K-12 curriculum has grown globally. The Republic of Korea is not an exception. In response to the need, the Korean Ministry of Education has announced an outline for software-centric computing education in the K-12 system, which aims at enhancing the current computing education with software emphasis. In…

  13. Life system modeling and intelligent computing. Pt. I. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang; Irwin, George W. (eds.) [Belfast Queen's Univ. (United Kingdom). School of Electronics, Electrical Engineering and Computer Science]; Fei, Minrui; Jia, Li [Shanghai Univ. (China). School of Mechatronical Engineering and Automation]

    2010-07-01

    This book is part I of a two-volume work that contains the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2010 and the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2010, held in Wuxi, China, in September 2010. The 194 revised full papers presented were carefully reviewed and selected from over 880 submissions and recommended for publication by Springer in two volumes of Lecture Notes in Computer Science (LNCS) and one volume of Lecture Notes in Bioinformatics (LNBI). This particular volume of Lecture Notes in Computer Science (LNCS) includes 55 papers covering 7 relevant topics. The 55 papers in this volume are organized in topical sections on intelligent modeling, monitoring, and control of complex nonlinear systems; autonomy-oriented computing and intelligent agents; advanced theory and methodology in fuzzy systems and soft computing; computational intelligence in utilization of clean and renewable energy resources; intelligent modeling, control and supervision for energy saving and pollution reduction; intelligent methods in developing vehicles, engines and equipments; computational methods and intelligence in modeling genetic and biochemical networks and regulation. (orig.)

  14. The Influence of Computer-Mediated Communication Systems on Community

    Science.gov (United States)

    Rockinson-Szapkiw, Amanda J.

    2012-01-01

    As higher education institutions enter the intense competition of the rapidly growing global marketplace of online education, the leaders within these institutions are challenged to identify factors critical for developing and for maintaining effective online courses. Computer-mediated communication (CMC) systems are considered critical to…

  15. A state-of-the-art report on software operation structure of the digital control computer system

    International Nuclear Information System (INIS)

    Kim, Bong Kee; Lee, Kyung Hoh; Joo, Jae Yoon; Jang, Yung Woo; Shin, Hyun Kook

    1994-06-01

    CANDU Nuclear Power Plants, including Wolsong 1 and 2/3/4, are controlled by a real-time plant control computer system. This report was written to provide an overview of the station control computer software, which belongs to one of the most advanced real-time computing application areas, along with the Fuel Handling Machine design concepts. The combination of a well-designed control computer and Fuel Handling Machine allows changing fuel bundles while the plant is in operation. Design methodologies and software structure are discussed, along with the interface between the two systems. 29 figs., 2 tabs., 20 refs. (Author)

  16. A state-of-the-art report on software operation structure of the digital control computer system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bong Kee; Lee, Kyung Hoh; Joo, Jae Yoon; Jang, Yung Woo; Shin, Hyun Kook [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

    CANDU Nuclear Power Plants, including Wolsong 1 and 2/3/4, are controlled by a real-time plant control computer system. This report was written to provide an overview of the station control computer software, which belongs to one of the most advanced real-time computing application areas, along with the Fuel Handling Machine design concepts. The combination of a well-designed control computer and Fuel Handling Machine allows changing fuel bundles while the plant is in operation. Design methodologies and software structure are discussed, along with the interface between the two systems. 29 figs., 2 tabs., 20 refs. (Author).

  17. Human Pacman: A Mobile Augmented Reality Entertainment System Based on Physical, Social, and Ubiquitous Computing

    Science.gov (United States)

    Cheok, Adrian David

    This chapter details the Human Pacman system to illustrate entertainment computing that ventures to embed the natural physical world seamlessly within a fantasy virtual playground by capitalizing on the infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile gaming that emphasizes collaboration and competition between players in a wide outdoor physical area allowing natural wide-area human physical movement. Pacmen and Ghosts are now real human players in the real world, experiencing mixed computer-graphics fantasy-reality provided by the wearable computers on them. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that is anchored in physicality, mobility, social interaction, and ubiquitous computing.

  18. Integration of process computer systems to Cofrentes NPP

    International Nuclear Information System (INIS)

    Saettone Justo, A.; Pindado Andres, R.; Buedo Jimenez, J.L.; Jimenez Fernandez-Sesma, A.; Delgado Muelas, J.A.

    1997-01-01

    The existence of three different process computer systems in Cofrentes NPP and the ageing of two of them have led to the need for their integration into a single real time computer system, known as Integrated ERIS-Computer System (SIEC), which covers the functionality of the three systems: Process Computer (PC), Emergency Response Information System (ERIS) and Nuclear Calculation Computer (OCN). The paper describes the integration project developed, which has essentially consisted in the integration of PC, ERIS and OCN databases into a single database, the migration of programs from the old process computer into the new SIEC hardware-software platform and the installation of a communications programme to transmit all necessary data for OCN programs from the SIEC computer, which in the new configuration is responsible for managing the databases of the whole system. (Author)

  19. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  20. Virtualization and cloud computing in dentistry.

    Science.gov (United States)

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer needs to be running. This virtualization platform is the basis for cloud computing, and it has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage: patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As the need for more storage and processing power continues to grow, virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.

  1. Operating System Concepts for Reconfigurable Computing: Review and Survey

    OpenAIRE

    Marcel Eckert; Dominik Meyer; Jan Haase; Bernd Klauer

    2016-01-01

    One of the key future challenges for reconfigurable computing is to enable higher design productivity and an easier way to use reconfigurable computing systems for users who are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system. This article gives a historical review and a summary of the ideas and key concepts for including reconfigurable computing aspects in operating systems. The arti...

  2. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia system (10,240 Intel Itanium processors). The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can easily be extended to provide a general capability for air flow analyses in any modern computer room.

  3. The computer aided education and training system for accident management

    International Nuclear Information System (INIS)

    Yoneyama, Mitsuru; Kubota, Ryuji; Fujiwara, Tadashi; Sakuma, Hitoshi

    1999-01-01

    An education and training system for Accident Management was developed by the Japanese BWR group and Hitachi Ltd. It is composed of two subsystems: a computer-aided instruction (CAI) education system, and an education and training system based on computer simulations. Both are designed to run on personal computers. The outlines of the CAI education system and the simulator-based education and training system are reported below. These systems provide plant operators and technical support center staff with effective education and training for accident management. (author)

  4. Roadmap to the SRS computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A.

    1994-07-05

    This document outlines the major steps that must be taken by the Savannah River Site (SRS) to migrate the SRS information technology (IT) environment to the new architecture described in the Savannah River Site Computing Architecture. This document proposes an IT environment that is "...standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship." Achieving this vision will require many substantial changes in the computing applications, systems, and supporting infrastructure at the site. This document consists of a set of roadmaps which provide explanations of the necessary changes for IT at the site and describes the milestones that must be completed to finish the migration.

  5. Workstation computer systems for in-core fuel management

    International Nuclear Information System (INIS)

    Ciccone, L.; Casadei, A.L.

    1992-01-01

    The advancement of powerful engineering workstations has made it possible to run thermal-hydraulics and accident analysis computer programs efficiently, with a significant performance/cost advantage over large mainframe computers. Today, nuclear utilities are acquiring independent engineering analysis capability for fuel management and safety analyses. The computer systems currently available to utility organizations vary widely, thus requiring that this software be operational on a number of computer platforms. Recognizing these trends, Westinghouse adopted a software development life cycle process for its software development activities, which strictly controls the development, testing and qualification of design computer codes. In addition, software standards to ensure maximum portability were developed and implemented, including adherence to FORTRAN 77 and the use of uniform system interface and auxiliary routines. A comprehensive test matrix was developed for each computer program to ensure that the evolution of code versions preserves the licensing basis. In addition, the results of such test matrices establish the Quality Assurance basis and consistency for the same software operating on different computer platforms. (author). 4 figs

  6. SiteDB: Marshalling people and resources available to CMS

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S [H.H. Wills Physics Laboratory, Bristol (United Kingdom); Bonacorsi, D [University of Bologna and INFN Bologna (Italy); Ferreira, M Dias [SPRACE (Brazil); Egeland, R [University of Minnesota, Twin Cities (United States)

    2010-04-01

    In a collaboration the size of CMS (approx. 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in coordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track the sites available to the collaboration, the allocation to CMS of resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated with the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface for managing communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports that other CMS tools use to access the information it contains, for instance enabling CRAB to use 'user-friendly' names when black/white-listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in the external ticketing systems in daily use by CMS Computing operations.

  7. SiteDB: Marshalling people and resources available to CMS

    International Nuclear Information System (INIS)

    Metson, S; Bonacorsi, D; Ferreira, M Dias; Egeland, R

    2010-01-01

    In a collaboration the size of CMS (approx. 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in coordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated with the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use 'user friendly' names when black/white-listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in the external ticketing systems in use daily by CMS Computing operations.

  8. Statistical properties of dynamical systems – Simulation and abstract computation

    International Nuclear Information System (INIS)

    Galatolo, Stefano; Hoyrup, Mathieu; Rojas, Cristóbal

    2012-01-01

    Highlights: ► A survey of results about computation and computability of the statistical properties of dynamical systems. ► Computability and non-computability results for invariant measures. ► A short proof of the computability of the convergence speed of ergodic averages. ► A kind of “constructive” version of the pointwise ergodic theorem. - Abstract: We survey an area of recent development relating dynamics to theoretical computer science. We discuss some aspects of the theoretical simulation and computation of the long-term behavior of dynamical systems. We focus on the statistical limiting behavior and invariant measures. We present a general method allowing the algorithmic approximation, at any given accuracy, of invariant measures. The method can be applied in many interesting cases, as we shall explain. On the other hand, we exhibit some examples where the algorithmic approximation of invariant measures is not possible. We also explain how it is possible to compute the speed of convergence of ergodic averages (when the system is known exactly) and how this entails the computation of arbitrarily good approximations of points of the space having typical statistical behavior (a sort of constructive version of the pointwise ergodic theorem).
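
    The ergodic-average approach the survey describes can be made concrete in a few lines. The sketch below (not from the paper) approximates the invariant measure of the logistic map T(x) = 4x(1-x) by histogramming a long orbit, i.e. taking Birkhoff averages of bin indicator functions, and checks the result against the known invariant density 1/(π√(x(1-x))); the choice of map, bin count and orbit length are illustrative assumptions.

```python
import numpy as np

# Approximate the invariant measure of the logistic map T(x) = 4x(1 - x)
# by histogramming a long orbit -- a Birkhoff (ergodic) average of bin
# indicator functions. The exact invariant density 1/(pi*sqrt(x(1-x)))
# is known for this map, so the approximation error can be checked.

def orbit(x0, n):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = 4.0 * x * (1.0 - x)
    return xs

xs = orbit(0.1234, 1_000_000)
density, edges = np.histogram(xs, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = 1.0 / (np.pi * np.sqrt(centers * (1.0 - centers)))
print("max absolute error over bins:", np.abs(density - exact).max())
```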

  9. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims to present a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. The paper covers the basic idea of computer-based management in logistics and the components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform that integrates these computer-aided management systems is electronic data interchange.

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office: Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of the 2011 data being processed at the sites. MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month (Figure 1). The transfer system worked reliably and efficiently, transferring on average close to 520 TB per week with peaks close to 1.2 PB (Figure 2). Figure 3 shows the volume of data moved between CMS sites in the last six months. The tape utilisation was a focus for the operation teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  11. OPTIMIZING MAINTENANCE PROCESSES ON CUSTOMER SITE IN A DECENTRALIZED ORGANIZATION BASED ON MULTI-SITE TEAMS

    Directory of Open Access Journals (Sweden)

    Jorge Moutinho

    2015-03-01

    Full Text Available This lecture focuses on the complexity of managing and optimizing maintenance processes, operations and service tasks for equipment and systems installed at customer sites. Different locations, access conditions and working environments may compromise any standardization of setups and operations. Multi-site teams based at strategic geographic locations add complexity to training, communication, supervision and monitoring processes. Logistics and information systems assume relevant roles in consolidating global performance. Besides efficiency, effectiveness, productivity and flexibility, field teams need skills in autonomy, responsibility and proactivity. This lecture also explores the necessary adaptation of most of the available literature, normally based on production sites, as well as of Lean-Kaizen principles, to the fact that services cannot be stocked, quality is normally more difficult to measure, and the customer is normally present when and where the service is produced.

  12. Development and trial operation of a site-wide computerized material accounting system at Kurchatov Institute

    International Nuclear Information System (INIS)

    Roumiantsev, A.N.; Ostroumov, Y.A.; Yevstropov, A.V.

    1997-01-01

    Since August 1994 Kurchatov Institute in cooperation with several US Department of Energy Laboratories has been developing a site-wide computerized material accounting system for nuclear materials. In 1994 a prototype system was put into trial operation at two Kurchatov facilities. Evaluation of this prototype led to the development of a new computerized material accounting system named KI-MACS, which has been operational since 1996. This system is a site-wide local secure computer network with centralized database capable of dealing with strictly confidential data and performing near-real time accountancy. It utilizes a Microsoft Windows NT operating system with SQL Server and Visual Basic, and has a 'star'-like network architecture. KI-MACS is capable of dealing with materials in itemized and bulk form, and can perform statistical evaluations of measurements and material balance. KI-MACS is fully integrated with bar code equipment, electronic scales, gamma-ray spectrometers and an Active Well Coincidence Counter, thus providing almost on-line evaluation and utilization of results of measurements, item identification and accounting. At present KI-MACS is being used in Physical Inventory Taking at the Kurchatov Central Storage Facility, and by the end of 1997 will be installed at twelve Kurchatov nuclear facilities

  13. Development and trial operation of a site-wide computerized material accounting system at Kurchatov Institute

    Energy Technology Data Exchange (ETDEWEB)

    Roumiantsev, A.N.; Ostroumov, Y.A.; Yevstropov, A.V. [Kurchatov Institute RRC, Moscow (Russian Federation)] [and others]

    1997-11-01

    Since August 1994 Kurchatov Institute in cooperation with several US Department of Energy Laboratories has been developing a site-wide computerized material accounting system for nuclear materials. In 1994 a prototype system was put into trial operation at two Kurchatov facilities. Evaluation of this prototype led to the development of a new computerized material accounting system named KI-MACS, which has been operational since 1996. This system is a site-wide local secure computer network with centralized database capable of dealing with strictly confidential data and performing near-real time accountancy. It utilizes a Microsoft Windows NT operating system with SQL Server and Visual Basic, and has a 'star'-like network architecture. KI-MACS is capable of dealing with materials in itemized and bulk form, and can perform statistical evaluations of measurements and material balance. KI-MACS is fully integrated with bar code equipment, electronic scales, gamma-ray spectrometers and an Active Well Coincidence Counter, thus providing almost on-line evaluation and utilization of results of measurements, item identification and accounting. At present KI-MACS is being used in Physical Inventory Taking at the Kurchatov Central Storage Facility, and by the end of 1997 will be installed at twelve Kurchatov nuclear facilities.

  14. Computer automation of a dilution cryogenic system

    International Nuclear Information System (INIS)

    Nogues, C.

    1992-09-01

    This study was carried out in the framework of work on developing new techniques for low-temperature detectors for neutrinos and dark matter. The principles of low-temperature physics and of helium-4 and dilution cryostats are first reviewed. The cryogenic system used and the techniques for low-temperature thermometry and regulation are then described. The computer automation of the dilution cryogenic system involves: numerical measurement of the parameter set (pressure, temperature, flow rate); computer-assisted operation of the cryostat and the pump bench; numerical regulation of pressure and temperature; and full automation of operation sequences, allowing the system to evolve from one state to another (a temperature descent, for example)
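
    As an illustration of the kind of numerical regulation loop mentioned above, here is a minimal discrete PID controller driving a toy thermal model toward a 100 mK setpoint. It is a generic sketch, not the thesis's actual controller; the plant model, gains and heater interface are all invented for the example.

```python
# A generic discrete PID regulator of the kind used for numerical
# temperature regulation. The "plant" below is a toy thermal mass that
# is heated by the control output and leaks heat to a bath; all values
# are illustrative, not taken from the cryostat described above.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.1                      # control period, seconds
temp, bath = 0.050, 0.040     # temperatures in kelvin
pid = PID(kp=50.0, ki=5.0, kd=1.0, dt=dt)

for _ in range(5000):
    power = max(0.0, pid.update(0.100, temp))            # heater cannot cool
    temp += dt * (0.001 * power - 0.05 * (temp - bath))  # toy thermal model

print(f"temperature after regulation: {temp * 1000:.1f} mK")
```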

  15. An assessment of future computer system needs for large-scale computation

    Science.gov (United States)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  16. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary designs of aerospace vehicles: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by directly coupled low-cost storage-tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  17. Human-computer systems interaction backgrounds and applications 3

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa; Wtorek, Jerzy

    2014-01-01

    This book contains an interesting, state-of-the-art collection of papers on recent progress in Human-Computer System Interaction (H-CSI). It provides a thorough description of the current status of the H-CSI field and also provides a solid base for further development and research in the discussed area. The contents of the book are divided into the following parts: I. General human-system interaction problems; II. Health monitoring and disabled people helping systems; and III. Various information processing systems. This book is intended for a wide audience of readers who are not necessarily experts in computer science, machine learning or knowledge engineering, but are interested in Human-Computer Systems Interaction. The quality of the individual papers and their arrangement into the above parts make this volume fascinating reading, giving the reader a much deeper insight than he/she might glean from research papers or talks at conferences. It touches on all deep issues that ...

  18. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values.

    Science.gov (United States)

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

    The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were generated. Based on the consensus of the two observers, 15.6% of sites were of low bone density, 47.9% were of intermediate density, and 36.5% were of high density. Receiver-operating characteristic analysis showed that CBCT intensity values had a high predictive power for predicting high-density sites (area under the curve [AUC] = 0.94, P < 0.005) and intermediate-density sites (AUC = 0.81, P < 0.005). The best cut-off value for intensity to predict intermediate-density sites was 218 (sensitivity = 0.77 and specificity = 0.76) and the best cut-off value for intensity to predict high-density sites was 403 (sensitivity = 0.93 and specificity = 0.77). CBCT intensity values are considered useful for predicting bone density at posterior mandibular implant sites.
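
    The cutoff-selection step reported above is easy to reproduce in outline. The sketch below finds the threshold maximizing Youden's J (sensitivity + specificity - 1), which is the usual way a "best cut-off" such as 218 or 403 is chosen from an ROC analysis; the intensity values here are synthetic stand-ins, not the study's CBCT data.

```python
import numpy as np

# Pick the intensity cutoff that best separates two bone-density classes,
# in the spirit of the paper's ROC analysis. The data are synthetic
# stand-ins; the study's cutoffs (218, 403) came from real CBCT values.

rng = np.random.default_rng(0)
lower = rng.normal(150, 80, 300)   # intensities at lower-density sites
high = rng.normal(480, 90, 300)    # intensities at high-density sites

values = np.concatenate([lower, high])
labels = np.concatenate([np.zeros(300), np.ones(300)])  # 1 = high density

best_j, best_cut = -1.0, None
for cut in np.unique(values):
    pred = values >= cut                  # classify as high density
    sens = pred[labels == 1].mean()       # sensitivity (true positive rate)
    spec = (~pred)[labels == 0].mean()    # specificity (true negative rate)
    j = sens + spec - 1.0                 # Youden's J statistic
    if j > best_j:
        best_j, best_cut = j, cut

print(f"best cutoff ~ {best_cut:.0f} intensity units (Youden J = {best_j:.2f})")
```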

  19. The models of the life cycle of a computer system

    Directory of Open Access Journals (Sweden)

    Sorina-Carmen Luca

    2006-01-01

    Full Text Available The paper presents a comparative study of the models of the life cycle of a computer system. The advantages of each model are analyzed, and graphic schemes are presented that point out each stage and step in the evolution of a computer system. Finally, classifications of the methods of designing computer systems are discussed.

  20. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1988-01-01

    This paper gives a collective summary of the studies performed at the JRC, Ispra on the use of computer codes for complex systems analysis. The computer codes dealt with are the CAFTS-SALP software package, FRANTIC, FTAP, the RALLY code package, and the BOUNDS codes. Two reference study cases were executed with each code. The results obtained from the logic/probabilistic analyses, as well as the computation times, are compared

  1. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    Science.gov (United States)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes in complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high-performance computing systems of various types, for example GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software that automatically and uniformly monitors the computations. A complex computational task may be performed over different HPC systems, which requires synchronizing the output data between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also a significant problem, requiring complex software tools and algorithms for proper generation of atomistic data on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  2. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    Directory of Open Access Journals (Sweden)

    Puzyrkov Dmitry

    2018-01-01

    Full Text Available At the present stage of computer technology development it is possible to study the properties and processes in complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high-performance computing systems of various types, for example GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software that automatically and uniformly monitors the computations. A complex computational task may be performed over different HPC systems, which requires synchronizing the output data between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also a significant problem, requiring complex software tools and algorithms for proper generation of atomistic data on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  3. Operating and maintenance experience with computer-based systems in nuclear power plants - A report by the PWG-1 Task Group on Computer-based Systems Important to Safety

    International Nuclear Information System (INIS)

    1998-01-01

    This report was prepared by the Task Group on Computer-based Systems Important to Safety of Principal Working Group No. 1; Canada had a leading role in this study. Operating and maintenance experience with computer-based systems in nuclear power plants is essential for improving and upgrading against potential failures. The present report summarises the observations and findings related to the use of digital technology in nuclear power plants and makes recommendations for future activities in Member Countries. The continued expansion of digital technology in nuclear power reactors has resulted in new safety and licensing issues, since the existing licensing review criteria were mainly based on the analogue devices used when the plants were designed. On the industry side, a consensus approach is needed to help stabilise and standardise the treatment of digital installations and upgrades while ensuring safety and reliability. On the regulatory side, new guidelines and regulatory requirements are needed to assess digital upgrades. Upgrade and new-installation issues always involve the potential for system failures. They are addressed specifically in the 'hazard' or 'failure' analysis, and it is in this context that they are ultimately resolved in the design and addressed in licensing. Failure analysis is normally performed in parallel with the design, verification and validation (V&V), and implementation activities of the upgrades. Current standards and guidelines in France, the U.S. and Canada recognise the importance of failure analysis in computer-based system design. Failure analysis is thus an integral part of the design and implementation process, aimed at evaluating potential failure modes and causes of system failures. In this context, it is essential to define 'system' as the plant system affected by the upgrade, not the 'computer' system. The identified failures would provide input to the design process in the form of design requirements or design

  4. Automated validation of a computer operating system

    Science.gov (United States)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to a complex computer operating system and measure the performance of that system under such loads. The technique lends itself to the checkout of computer software designed to monitor automated complex industrial systems.

  5. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical replacement interval when the computing demand, the cost and performance of the computer, etc. are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, the computer system is represented by only a central processing unit (CPU), and all the computing demand must be processed on the present computer until the next replacement. In model 2, on the other hand, excess demand is allowed and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all the demand. Model 4 is the same as model 3, but excess demand may be processed in another center. Also described are (1) the computing demand at JAERI, (2) the conformity of Grosch's law for recent computers, and (3) the replacement cost of computer systems. (author)
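
    To make the trade-off concrete, here is a toy cost model in the spirit of model 2 (excess demand offloaded to another center at a price): replacing often amortizes the purchase cost over fewer years, while replacing rarely lets the growing demand overflow the machine's capacity. All numbers are illustrative assumptions, not values from the report.

```python
# Toy replacement-interval model in the spirit of model 2 above: demand
# grows geometrically, demand above the machine's capacity is processed
# elsewhere at a price, and each replacement resets the relative headroom.
# Every parameter value here is an illustrative assumption.

def average_yearly_cost(interval_years, demand0=100.0, growth=1.25,
                        capacity=400.0, replacement_cost=1000.0,
                        outsource_price=2.0):
    cost = replacement_cost          # paid once per replacement cycle
    demand = demand0
    for _ in range(interval_years):
        overflow = max(0.0, demand - capacity)
        cost += outsource_price * overflow   # excess work bought elsewhere
        demand *= growth                     # demand rises monotonically
    return cost / interval_years

best = min(range(1, 15), key=average_yearly_cost)
print("most economical replacement interval:", best, "years")
```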

  6. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  7. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    Science.gov (United States)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  8. TMX-U computer system in evolution

    International Nuclear Information System (INIS)

    Casper, T.A.; Bell, H.; Brown, M.; Gorvad, M.; Jenkins, S.; Meyer, W.; Moller, J.; Perkins, D.

    1986-01-01

    Over the past three years, the total TMX-U diagnostic data base has grown to exceed 10 megabytes from over 1300 channels, roughly triple the originally designed size. This acquisition and processing load has resulted in an experiment repetition rate exceeding 10 minutes per shot using the five original Hewlett-Packard HP-1000 computers with their shared disks. Our new diagnostics tend to be multichannel instruments, which, in our environment, can be more easily managed using local computers. For this purpose, we are using HP series 9000 computers for instrument control, data acquisition, and analysis. Fourteen such systems are operational, with processed-format output exchanged via a shared resource manager. We are presently implementing the necessary hardware and software changes to create a local area network allowing us to combine the data from these systems with our main data archive. The expansion of our diagnostic system using the parallel acquisition and processing concept allows us to increase our data base with a minimum of impact on the experimental repetition rate

  9. New Site Coefficients and Site Classification System Used in Recent Building Seismic Code Provisions

    Science.gov (United States)

    Dobry, R.; Borcherdt, R.D.; Crouse, C.B.; Idriss, I.M.; Joyner, W.B.; Martin, G.R.; Power, M.S.; Rinne, E.E.; Seed, R.B.

    2000-01-01

    Recent code provisions for buildings and other structures (1994 and 1997 NEHRP Provisions, 1997 UBC) have adopted new site amplification factors and a new procedure for site classification. Two amplitude-dependent site amplification factors are specified: Fa for short periods and Fv for longer periods. Previous codes included only a long-period factor S and did not provide for a short-period amplification factor. The new site classification system is based on definitions of five site classes in terms of a representative average shear wave velocity to a depth of 30 m (V̄s). This definition permits sites to be classified unambiguously. When the shear wave velocity is not available, other soil properties such as standard penetration resistance or undrained shear strength can be used. The new site classes, denoted by the letters A-E, replace the site classes denoted S1-S4 in previous codes. Site Classes A and B correspond to hard rock and rock, Site Class C corresponds to soft rock and very stiff / very dense soil, and Site Classes D and E correspond to stiff soil and soft soil. A sixth site class, F, is defined for soils requiring site-specific evaluations. Both Fa and Fv are functions of the site class and also of the level of seismic hazard on rock, defined by parameters such as Aa and Av (1994 NEHRP Provisions), Ss and S1 (1997 NEHRP Provisions) or Z (1997 UBC). The values of Fa and Fv decrease as the seismic hazard on rock increases, due to soil nonlinearity. The greatest impact of the new factors Fa and Fv as compared with the old S factors occurs in areas of low-to-medium seismic hazard. This paper summarizes the new site provisions, explains the basis for them, and discusses ongoing studies of site amplification in recent earthquakes that may influence future code developments.
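
    The classification rule described above is mechanical once V̄s is known, as the short sketch below shows. The class boundaries (1500, 760, 360 and 180 m/s) are the standard NEHRP values; the layer profile in the example is invented, and Class F is omitted because it cannot be assigned from V̄s alone.

```python
def nehrp_site_class(vs30):
    """Site class from the average shear-wave velocity (m/s) over the
    top 30 m, using the standard NEHRP class boundaries. Class F sites
    need a site-specific evaluation and cannot be assigned from vs30."""
    if vs30 > 1500: return "A"   # hard rock
    if vs30 > 760:  return "B"   # rock
    if vs30 > 360:  return "C"   # very dense soil and soft rock
    if vs30 > 180:  return "D"   # stiff soil
    return "E"                   # soft soil

def vs30(thicknesses_m, velocities_m_per_s):
    """V̄s is a travel-time average: 30 m divided by the total shear-wave
    travel time through the layers, not the mean of the layer velocities."""
    travel_time = sum(d / v for d, v in zip(thicknesses_m, velocities_m_per_s))
    return 30.0 / travel_time

# An invented three-layer profile: 10 m at 150 m/s over 10 m at 300 m/s
# over 10 m at 600 m/s gives V̄s of about 257 m/s, i.e. Site Class D.
print(nehrp_site_class(vs30([10, 10, 10], [150, 300, 600])))
```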

  10. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems such as the Amoeba system, Argus, Andrew, and Grapevine. One paper discusses the concepts and notations for concurrent programming, particularly the language notation used in computer programming and synchronization methods, and compares three classes of languages. Another paper explains load balancing, or load redistribution, to improve system performance, namely static balancing and adaptive load balancing; a minimal contrast between the two is sketched below. For program effici
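
    As a toy illustration of the distinction (not drawn from the book), the sketch below contrasts static round-robin assignment, which ignores actual load, with an adaptive policy that always sends the next task to the currently least-loaded server.

```python
import heapq

# Static vs adaptive load balancing on two servers. Task costs are
# arbitrary illustrative numbers; a lower makespan (the maximum
# per-server load) is better.

def static_round_robin(task_costs, n_servers):
    loads = [0.0] * n_servers
    for i, cost in enumerate(task_costs):
        loads[i % n_servers] += cost        # fixed assignment, blind to load
    return max(loads)

def adaptive_least_loaded(task_costs, n_servers):
    heap = [(0.0, s) for s in range(n_servers)]
    for cost in task_costs:
        load, s = heapq.heappop(heap)       # currently least-loaded server
        heapq.heappush(heap, (load + cost, s))
    return max(load for load, _ in heap)

tasks = [5, 1, 1, 1, 4, 1, 1, 1]
print("round-robin makespan:", static_round_robin(tasks, 2))     # 11
print("least-loaded makespan:", adaptive_least_loaded(tasks, 2)) # 8
```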

  11. The ACP [Advanced Computer Program] multiprocessor system at Fermilab

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost-effective for many high energy physics problems. The system is based on single-board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other AT&T's 32100; both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing 'nodes' sit are connected via a high-speed 'Branch Bus' to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use and has been tested error-free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere

  12. Study of nuclear computer code maintenance and management system

    International Nuclear Information System (INIS)

    Ryu, Chang Mo; Kim, Yeon Seung; Eom, Heung Seop; Lee, Jong Bok; Kim, Ho Joon; Choi, Young Gil; Kim, Ko Ryeo

    1989-01-01

    Software maintenance has been one of the most important problems since the late 1970s. We wish to develop a nuclear computer code system to maintain and manage KAERI's nuclear software. As part of this system, we have developed three code management programs for use on CYBER and PC systems; they are used in the systematic management of computer codes at KAERI. The first program is implemented on the CYBER system to rapidly provide information on nuclear codes to users. The second and third programs were implemented on the PC system for the code manager and for the management of data in the Korean language, respectively. In the requirement analysis, we defined each code, magnetic tape, manual and abstract information data. In the conceptual design, we designed retrieval, update, and output functions. In the implementation design, we described the technical considerations of database programs, utilities, and directions for the use of databases. As a result of this research, we compiled the status of the nuclear computer codes held by KAERI as of September 1988. Thus, by using these three database programs, we could provide nuclear computer code information to users more rapidly. (Author)

  13. ROCK-CAD - computer aided geological modelling system

    International Nuclear Information System (INIS)

    Saksa, P.

    1995-12-01

    The study discusses surface and solid modelling methods, their use, and their interfacing with geodata. Application software named ROCK-CAD, suitable for geological bedrock modelling, has been developed with support from Teollisuuden Voima Oy (TVO). It has been utilized in the Finnish site characterization programme for spent nuclear fuel waste disposal during the 1980s and 1990s. The system is based on the solid modelling technique and also comprises rich functionality for the particular geological modelling scheme. The ROCK-CAD system provides, among other things, varying graphical vertical and horizontal intersections and perspective illustrations. The specially developed features are the application of the boundary representation modelling method, a parametric object generation language, and the discipline approach. The ROCK-CAD system has been utilized in modelling the spatial distribution of rock types and fracturing structures in TVO's site characterization; the Olkiluoto site at Eurajoki serves as an example case. The study comprises a description of the modelling process, the models, and illustration examples. The utilization of bedrock models in site characterization, in tentative repository siting, as well as in groundwater flow simulation is depicted. The application software has improved the assessment of the sites studied, given a new basis for the documentation of interpretation and modelling work, replaced hand-drawing, and enabled digital transfer to numerical analysis. Finally, aspects of presentation graphics in geological modelling are considered. (84 refs., 30 figs., 11 tabs.)

  14. A computer-controlled conformal radiotherapy system. IV: Electronic chart

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; McShan, Daniel L.; Matrone, Gwynne M.; Weaver, Tamar A.; Lewis, James D.; Kessler, Marc L.

    1995-01-01

    Purpose: The design and implementation of a system for electronically tracking relevant plan, prescription, and treatment data for computer-controlled conformal radiation therapy is described. Methods and Materials: The electronic charting system is implemented on a computer cluster coupled by high-speed networks to computer-controlled therapy machines. A methodical approach to the specification and design of an integrated solution has been used in developing the system. The electronic chart system is designed to allow identification and access of patient-specific data including treatment-planning data, treatment prescription information, and charting of doses. An in-house developed database system is used to provide an integrated approach to the database requirements of the design. A hierarchy of databases is used for both centralization and distribution of the treatment data for specific treatment machines. Results: The basic electronic database system has been implemented and has been in use since July 1993. The system has been used to download and manage treatment data on all patients treated on our first fully computer-controlled treatment machine. To date, electronic dose charting functions have not been fully implemented clinically, requiring the continued use of paper charting for dose tracking. Conclusions: The routine clinical application of complex computer-controlled conformal treatment procedures requires the management of large quantities of information for describing and tracking treatments. An integrated and comprehensive approach to this problem has led to a full electronic chart for conformal radiation therapy treatments

  15. Computer Application Systems at the University.

    Science.gov (United States)

    Bazewicz, Mieczyslaw

    1979-01-01

    The results of the WASC Project at the Technical University of Wroclaw have confirmed the possibility of constructing informatic systems based on the recognized size and specifics of user's needs (needs of the university) and provided some solutions to the problem of collaboration of computer systems at remote universities. (Author/CMV)

  16. Computing for Decentralized Systems (lecture 1)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    With the rise of Bitcoin, Ethereum, and other cryptocurrencies, the paradigm shift towards decentralized computing is becoming apparent. Computer engineers will need to understand this shift when developing systems in the coming years. Transferring value over the Internet is just one of the first working use cases of decentralized systems, but it is expected they will be used for a number of different services such as general-purpose computing, data storage, or even new forms of governance. Decentralized systems, however, pose a series of challenges that cannot be addressed with traditional approaches in computing. Not having a central authority implies truth must be agreed upon rather than simply trusted and, so, consensus protocols, cryptographic data structures like the blockchain, and incentive models like mining rewards become critical for the correct behavior of decentralized systems. This series of lectures will be a fast track to introduce these fundamental concepts through working examples and pra...
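
    The mention of cryptographic data structures can be made concrete with a toy hash chain, the core structural idea behind a blockchain: each block commits to its predecessor's hash, so altering any earlier block invalidates every later one. This is an illustration of the concept only, not code from the lectures.

```python
import hashlib
import json

# A toy hash chain: each block stores the hash of its predecessor, so
# tampering with any block breaks verification of all later blocks.
# Real blockchains add consensus, proof-of-work, signatures, and more.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev": prev_hash}

chain = [make_block("genesis", "0" * 64)]
for payload in ["alice pays bob 5", "bob pays carol 2"]:
    chain.append(make_block(payload, block_hash(chain[-1])))

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))                        # True
chain[1]["data"] = "alice pays bob 5000"    # tamper with history
print(verify(chain))                        # False: later links break
```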

  17. Computing for Decentralized Systems (lecture 2)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    With the rise of Bitcoin, Ethereum, and other cryptocurrencies, the paradigm shift towards decentralized computing is becoming apparent. Computer engineers will need to understand this shift when developing systems in the coming years. Transferring value over the Internet is just one of the first working use cases of decentralized systems, but it is expected they will be used for a number of different services such as general-purpose computing, data storage, or even new forms of governance. Decentralized systems, however, pose a series of challenges that cannot be addressed with traditional approaches in computing. Not having a central authority implies truth must be agreed upon rather than simply trusted and, so, consensus protocols, cryptographic data structures like the blockchain, and incentive models like mining rewards become critical for the correct behavior of decentralized systems. This series of lectures will be a fast track to introduce these fundamental concepts through working examples and pra...

  18. Selection and implementation of a laboratory computer system.

    Science.gov (United States)

    Moritz, V A; McMaster, R; Dillon, T; Mayall, B

    1995-07-01

    The process of selecting a pathology computer system has become increasingly complex: an increasing number of facilities must be provided, and stringent performance requirements must be met under heavy computing loads from both human users and machine inputs. Furthermore, continuing advances in software and hardware technology provide more options and innovative new ways of tackling problems. Taken together, these factors pose a difficult and complex set of decisions and choices for the system analyst and designer. The selection process followed by the Microbiology Department at Heidelberg Repatriation Hospital included examination of existing systems and development of a functional specification, followed by a formal tender process. The successful tenderer, selected using predefined evaluation criteria, was a software development company that developed and supplied a system based on a distributed network using a SUN computer as the main processor. The software was written in Informix running on the UNIX operating system; this represents one of the first microbiology systems developed using a commercial relational database and fourth-generation language. The advantages of this approach are discussed.

  19. Intelligent systems and soft computing for nuclear science and industry

    International Nuclear Information System (INIS)

    Ruan, D.; D'hondt, P.; Govaerts, P.; Kerre, E.E.

    1996-01-01

    The second international workshop on Fuzzy Logic and Intelligent Technologies in Nuclear Science (FLINS) addresses topics related to intelligent systems and soft computing for nuclear science and industry. The proceedings contain 52 papers in different fields such as radiation protection, nuclear safety (human factors and reliability), safeguards, nuclear reactor control, production processes in the fuel cycle, dismantling, waste and disposal, and decision making. A clear link is made between the theory and applications of fuzzy logic, such as neural networks, expert systems, robotics, man-machine interfaces, and decision-support techniques, by using modern and advanced technologies and tools. The papers are grouped in three sections. The first section (Soft computing techniques) deals with basic tools to treat fuzzy logic, neural networks, genetic algorithms, decision-making, and software used for general soft-computing aspects. The second section (Intelligent engineering systems) includes contributions on engineering problems such as knowledge-based engineering, expert systems, process control integration, diagnosis, measurements, and interpretation by soft computing. The third section (Nuclear applications) focuses on the application of soft computing and intelligent systems in nuclear science and industry

  20. 75 FR 76487 - Haldex Brake Corporation, Commercial Vehicle Systems, Including On-Site Leased Workers of...

    Science.gov (United States)

    2010-12-08

    ..., Commercial Vehicle Systems, Including On-Site Leased Workers of Johnston Integration Technologies, a... Adjustment Assistance In accordance with Section 223 of the Trade Act of 1974, as amended (``Act''), 19 U.S.C... system components. The company reports that workers leased from Johnston Integration Technologies, a...

  1. Computer-aided instruction system

    International Nuclear Information System (INIS)

    Teneze, Jean Claude

    1968-01-01

    This research thesis addresses the use of teleprocessing and time sharing via the RAX IBM system and the possibility of introducing a dialog with the machine, in order to develop an application in which the computer plays the role of a teacher for several pupils at the same time. Two operating modes are thus exploited: a teacher-mode and a pupil-mode. The developed CAI (computer-aided instruction) system comprises a checker that checks the course syntax in teacher-mode, a translator that transcodes the course written in teacher-mode into a form that can be processed by the execution programme, and the execution programme itself, which presents the course in pupil-mode

  2. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate each site's capability to successfully execute user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site...

  3. Specialized computer system to diagnose critical lined equipment

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Morozova, O. A.; Nedelkin, A. A.

    2018-05-01

    The paper presents data on the problem of diagnosing the lining condition at iron and steel works. The authors propose and describe the structure of a specialized computer system to diagnose critical lined equipment. Comparative results of diagnosing the lining condition with the basic system and with the proposed specialized computer system are presented. To automate the evaluation of the lining condition and to support decision-making regarding the operation mode of the lined equipment, specialized software has been developed.

  4. Massively parallel computation of PARASOL code on the Origin 3800 system

    International Nuclear Information System (INIS)

    Hosokawa, Masanari; Takizuka, Tomonori

    2001-10-01

    The divertor particle simulation code named PARASOL simulates open-field plasmas between divertor walls self-consistently by using an electrostatic PIC method and a binary collision Monte Carlo model. PARASOL, parallelized with MPI-1.1 for scalar parallel computers, ran on the Intel Paragon XP/S system. An SGI Origin 3800 system was newly installed in May 2001, and the parallel programming was improved at this switchover. As a result of the high-performance new hardware and this improvement, PARASOL is sped up by about 60 times with the same number of processors. (author)

  5. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  6. Fail-safe design criteria for computer-based reactor protection systems

    International Nuclear Information System (INIS)

    Keats, A.B.

    1980-01-01

    The increasing quantity and complexity of the instrumentation required in nuclear power plants provides a strong incentive for using on-line computers as the basis of the control and protection systems. On-line computers using multiplexed sampled data are already well established, but their application to nuclear reactor protection systems requires special measures to satisfy the very high reliability which is demanded in the interests of safety and availability. Some existing codes of practice relating to segregation of replicated subsystems continue to be applicable and lead to division of the computer functions into two distinct parts. The first computer, referred to as the Trip Algorithm Computer, may also control the multiplexer. Voting on each group of status inputs yielded by the trip algorithm computers is performed by the Vote Algorithm Computer. The conceptual disparities between hardwired reactor-protection systems and those employing computers also give rise to a need for some new criteria. An important objective of these criteria is to minimise the need for a failure-mode-and-effect analysis of the computer software; this is achieved almost entirely by 'hardware' properties of the system: the systematic use of hardwired test inputs which cause excursions of the trip algorithms into the tripped state in a uniquely ordered but easily recognisable sequence, and the use of hardwired 'pattern recognition logic' which generates a dynamic 'healthy' stimulus for the shutdown actuators only in response to the unique sequence generated by the hardwired input signal pattern. The adoption of the proposed design criteria ensures not only failure-to-safety in the hardware but also the elimination, or at least minimisation, of the dependence of the safety system on the correct functioning of the computer software. (auth)
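
    The voting stage described above is conceptually simple; the sketch below shows a generic two-out-of-three software voter over replicated trip channels. It illustrates the voting principle only; the paper's own scheme deliberately keeps this function in hardwired pattern-recognition logic rather than software.

```python
# Generic two-out-of-three (2oo3) voting over replicated trip channels,
# illustrating the voting principle behind the Vote Algorithm Computer.
# The paper's actual design realizes this in hardwired logic, not code.

TRIP, HEALTHY = True, False

def vote_2oo3(channels):
    """Demand a trip if at least two of three replicated channels do."""
    assert len(channels) == 3
    return sum(channels) >= 2

# A single failed channel can neither cause a spurious trip on its own
# nor block a genuine trip demanded by the other two channels.
print(vote_2oo3([TRIP, HEALTHY, HEALTHY]))  # False: lone spurious trip
print(vote_2oo3([TRIP, TRIP, HEALTHY]))     # True: genuine demand
print(vote_2oo3([TRIP, TRIP, TRIP]))        # True
```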

  7. Compact, open-architecture computed radiography system

    International Nuclear Information System (INIS)

    Huang, H.K.; Lim, A.; Kangarloo, H.; Eldredge, S.; Loloyan, M.; Chuang, K.S.

    1990-01-01

    Computed radiography (CR) was introduced in 1982, and its basic system design has not changed. Current CR systems have certain limitations: spatial resolution and signal-to-noise ratios are lower than those of screen-film systems, they are complicated and expensive to build, and they have a closed architecture. The authors of this paper designed and implemented a simpler, lower-cost, compact, open-architecture CR system to overcome some of these limitations. The open-architecture system is a manual-load, single-plate reader that can fit on a desk top. Phosphor images are stored on a local disk and can be sent to any other computer through standard interfaces. Any manufacturer's plate can be read, with a scanning time of 90 seconds for a 35 x 43-cm plate. The standard pixel size is 174 μm and can be adjusted for higher spatial resolution. The data resolution is 12 bits/pixel over an x-ray exposure range of 0.01-100 mR

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the availability of sites so that they can participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a fourfold increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  9. Some interactive factors affecting trench-cover integrity on low-level waste sites

    International Nuclear Information System (INIS)

    Hakonson, T.E.; Lane, L.J.; Steger, J.G.; DePoorter, G.L.

    1982-01-01

    This paper describes important mechanisms by which radionuclides can be transported from low-level waste disposal sites into biological pathways, discusses interactions of abiotic and biotic processes, and recommends environmental characteristics that should be measured to design sites that minimize this transport. Past experience at shallow land burial sites for low-level radioactive wastes suggests that occurrences of waste exposure and radionuclide transport are often related to inadequate trench cover designs. Meeting performance standards at low-level waste sites can only be achieved by recognizing that the physical, chemical, and biological processes operating on and in a trench cover profile are highly interactive. Failure to do so can lead to improper design criteria and subsequent remedial action procedures that can adversely affect site stability. Based upon field experiments and computer modeling, recommendations are made on site characteristics that require measurement in order to design systems that reduce surface runoff and erosion, manage soil moisture and biota in the cover profile to maximize evapotranspiration and minimize percolation, and place bounds on the intrusion potential of plants and animals into the waste material. Major unresolved problems include developing probabilistic approaches that include climatic variability, improving knowledge of soil-water-plant-erosion relationships, developing practical vegetation establishment and maintenance procedures, predicting and quantifying site potential and plant succession, and understanding the interaction of processes occurring on and in the cover profile with deeper subsurface processes

  10. ZIVIS: A City Computing Platform Based on Volunteer Computing

    International Nuclear Information System (INIS)

    Antoli, B.; Castejon, F.; Giner, A.; Losilla, G.; Reynolds, J. M.; Rivero, A.; Sangiao, S.; Serrano, F.; Tarancon, A.; Valles, R.; Velasco, J. L.

    2007-01-01

    Volunteer computing has come up as a new form of distributed computing. Unlike other computing paradigms like Grids, which tend to be based on complex architectures, volunteer computing has demonstrated a great ability to integrate dispersed, heterogeneous computing resources with ease. This article presents ZIVIS, a project which aims to deploy a city-wide computing platform in Zaragoza (Spain). ZIVIS is based on BOINC (Berkeley Open Infrastructure for Network Computing), a popular open-source framework to deploy volunteer and desktop grid computing systems. A scientific code which simulates the trajectories of particles moving inside a stellarator fusion device has been chosen as the pilot application of the project. In this paper we describe the approach followed to port the code to the BOINC framework as well as some novel techniques, based on standard Grid protocols, we have used to access the output data present on the BOINC server from a remote visualizer. (Author)

  11. 1st International Conference on Computational Advancement in Communication Circuits and Systems

    CERN Document Server

    Dalapati, Goutam; Banerjee, P; Mallick, Amiya; Mukherjee, Moumita

    2015-01-01

    This book comprises the proceedings of the 1st International Conference on Computational Advancement in Communication Circuits and Systems (ICCACCS 2014), organized by Narula Institute of Technology under the patronage of the JIS group, affiliated to West Bengal University of Technology. The conference was supported by the Technical Education Quality Improvement Program (TEQIP), New Delhi, India, and was held in technical collaboration with the IEEE Kolkata Section, with Springer as publication partner. The book contains 62 refereed papers that aim to highlight new theoretical and experimental findings in the field of electronics and communication engineering, including interdisciplinary fields like advanced computing, pattern recognition and analysis, and signal and image processing. The proceedings cover the principles, techniques and applications in microwave & devices, communication & networking, signal & image processing, and computations & mathematics & control. The proceedings reflect the conference's emp...

  12. On Spoken English Phoneme Evaluation Method Based on Sphinx-4 Computer System

    Directory of Open Access Journals (Sweden)

    Li Qin

    2017-12-01

    In oral English learning, HDPs (phonemes that are hard to distinguish) are areas where Chinese students frequently make pronunciation mistakes. This paper studies a speech phoneme evaluation method for HDPs, hoping to improve the ability of individualized evaluation of HDPs and help provide a personalized learning platform for English learners. First, this paper briefly introduces relevant speech recognition technologies and pronunciation evaluation algorithms, and describes the phonetic retrieval, phonetic decoding and phonetic knowledge base in the Sphinx-4 computer system, which constitute the technological foundation for phoneme evaluation. It then proposes an HDP evaluation model, which integrates the reliability of the speech processing system and the individualization of spoken English learners into the evaluation system. After collecting HDPs of spoken English learners and sorting them into different sets, it uses the evaluation system to recognize these HDP sets and finally analyzes the experimental results of HDP evaluation, which prove the effectiveness of the HDP evaluation model.
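
    The evaluation step lends itself to a compact illustration. In the sketch below, the Sphinx-4 recognizer is assumed to have already produced a phoneme hypothesis for each recorded utterance; the HDP set names and record layout are illustrative, not taken from the paper.

        from collections import defaultdict

        # Each entry: (HDP set, phoneme the learner was asked to produce,
        #              phoneme the recognizer decoded from the recording)
        results = [
            ("th/s", "th", "s"),
            ("th/s", "th", "th"),
            ("l/r",  "r",  "l"),
            ("l/r",  "l",  "l"),
            ("v/w",  "v",  "v"),
        ]

        def evaluate_hdp_sets(results):
            """Per-set accuracy and confusion counts for hard-to-distinguish phonemes."""
            stats = defaultdict(lambda: {"correct": 0, "total": 0,
                                         "confusions": defaultdict(int)})
            for hdp_set, target, decoded in results:
                s = stats[hdp_set]
                s["total"] += 1
                if decoded == target:
                    s["correct"] += 1
                else:
                    s["confusions"][(target, decoded)] += 1
            return {k: {"accuracy": v["correct"] / v["total"],
                        "confusions": dict(v["confusions"])}
                    for k, v in stats.items()}

        for hdp_set, report in evaluate_hdp_sets(results).items():
            print(hdp_set, report)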

  13. Computation On dP Type power System Stabilizer Using Fuzzy Logic

    International Nuclear Information System (INIS)

    Iskandar, M.A.; Irwan, R.; Husdi; Riza; Mardhana, E.; Triputranto, A.

    1997-01-01

    Power system stabilizers (PSS) are widely applied in power generators to damp the power oscillations caused by certain disturbances, in order to increase the power supply capacity. PSS design often suffers from the difficulty of periodically setting its parameters, the gain and compensators, to obtain an optimal damping characteristic. This paper proposes a method to determine the parameters of a dP-type PSS by implementing fuzzy logic rules in a computer program, so as to obtain appropriate synchronous torque and damping torque characteristics. A PSS with the calculated parameters is investigated in a simulation using a non-linear model of a thermal generator connected to an infinite bus system. Simulation results show that great improvement in the damping characteristic and enhancement of the stability margin of the electric power system are obtained by using the proposed PSS.
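
    As a rough illustration of the approach, a minimal single-input Mamdani-style sketch is given below: fuzzy rules map an observed damping error to a correction of the stabilizer gain. The membership functions, rule base and target damping ratio are assumptions for illustration, not the parameters of the paper.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with feet at a, c and peak at b."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def fuzzy_gain_correction(damping_error):
            """Map damping-ratio error (target - measured) to a gain increment."""
            y = np.linspace(-1.0, 1.0, 201)          # candidate gain corrections
            w1 = tri(damping_error, -0.4, -0.2, 0.0) # error negative -> decrease gain
            w2 = tri(damping_error, -0.1,  0.0, 0.1) # error near zero -> no change
            w3 = tri(damping_error,  0.0,  0.2, 0.4) # error positive -> increase gain
            # Clip each output set by its rule strength, aggregate, defuzzify (centroid)
            agg = np.maximum.reduce([
                np.minimum(w1, tri(y, -1.0, -0.5, 0.0)),
                np.minimum(w2, tri(y, -0.2,  0.0, 0.2)),
                np.minimum(w3, tri(y,  0.0,  0.5, 1.0)),
            ])
            return float(np.sum(y * agg) / np.sum(agg)) if agg.sum() > 0 else 0.0

        measured_damping, target_damping = 0.05, 0.30
        new_gain = 1.0 + fuzzy_gain_correction(target_damping - measured_damping)
        print(f"adjusted PSS gain factor: {new_gain:.3f}")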

  14. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    International Nuclear Information System (INIS)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-01-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi–Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources. (paper)
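
    The structural idea is easy to demonstrate numerically. The sketch below builds a number-conserving two-site hopping Hamiltonian (Jordan-Wigner encoded in two qubits), groups computational basis states by particle number, and extracts the resulting blocks. The parameters t and U are illustrative, and this shows only the block-diagonality argument, not the paper's full reduction procedure.

        import numpy as np

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]])
        Z = np.diag([1.0, -1.0]).astype(complex)
        n = (np.eye(2) - Z) / 2                      # occupation of one site

        t, U = 1.0, 4.0
        # H = -t (c1^dag c2 + h.c.) + U n1 n2  ->  -t/2 (XX + YY) + U n (x) n
        H = -t / 2 * (np.kron(X, X) + np.kron(Y, Y)) + U * np.kron(n, n)

        # Group computational basis states by total particle number (bit count)
        sectors = {}
        for b in range(4):
            sectors.setdefault(bin(b).count("1"), []).append(b)

        for num, idx in sorted(sectors.items()):
            block = H[np.ix_(idx, idx)]
            print(f"N = {num}: {len(idx)}x{len(idx)} block, eigenvalues "
                  f"{np.linalg.eigvalsh(block).round(3)}")

        # Off-diagonal blocks between sectors vanish because H conserves N
        assert abs(H[np.ix_(sectors[1], sectors[2])]).max() < 1e-12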

  15. Quantum computing on encrypted data.

    Science.gov (United States)

    Fisher, K A G; Broadbent, A; Shalm, L K; Yan, Z; Lavoie, J; Prevedel, R; Jennewein, T; Resch, K J

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes, it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.
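
    The encryption principle underlying such schemes, the quantum one-time pad, can be sketched with small matrices: the client hides a qubit behind random Pauli operators, the server applies a gate to the ciphertext, and the client decrypts with an updated key. The sketch below (gate H, key update (a, b) -> (b, a)) illustrates that principle only, not the paper's full linear-optics protocol.

        import numpy as np

        rng = np.random.default_rng(7)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.diag([1.0, -1.0]).astype(complex)
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

        psi = rng.normal(size=2) + 1j * rng.normal(size=2)
        psi /= np.linalg.norm(psi)                   # random input qubit state

        a, b = rng.integers(0, 2, size=2)            # secret one-time-pad key bits
        encrypt = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
        enc = encrypt @ psi                          # client -> server

        served = H @ enc                             # server computes on ciphertext

        # Key update for H: since HX = ZH and HZ = XH, the new key is (b, a);
        # decrypting with it recovers H|psi>.
        dec = np.linalg.matrix_power(X, b) @ np.linalg.matrix_power(Z, a) @ served

        overlap = abs(np.vdot(H @ psi, dec))
        print(f"fidelity with H|psi>: {overlap:.6f}")  # ~1.0 up to global phase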

  16. Site enforcement tracking system (SETS): PRP listing by site for region 9

    International Nuclear Information System (INIS)

    1993-04-01

    When expending Superfund monies at a CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) site, EPA must conduct a search to identify parties with potential financial responsibility for remediation of uncontrolled hazardous waste sites. EPA regional Superfund Waste Management Staff issue a notice letter to the potentially responsible party (PRP). Data from the notice letter is used to form the Site Enforcement Tracking System (SETS). The data includes PRP name and address, a company contact person, the date the notice was issued, and the related CERCLA site name and identification number
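
    The notice-letter fields enumerated in these SETS records map naturally onto a flat data record; a minimal sketch follows, with field names inferred from the abstract rather than taken from the actual SETS schema, and with made-up example data.

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class SetsRecord:
            prp_name: str            # potentially responsible party
            prp_address: str
            company_contact: str
            notice_issued: date      # date the notice letter was issued
            cercla_site_name: str
            cercla_site_id: str      # CERCLA site identification number

        record = SetsRecord(
            prp_name="Example Manufacturing Co.",
            prp_address="100 Industrial Way, Anytown, USA",
            company_contact="J. Smith",
            notice_issued=date(1992, 11, 3),
            cercla_site_name="Anytown Landfill",
            cercla_site_id="EXD000000000",
        )
        print(record)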

  17. Site enforcement tracking system (SETS): PRP listing by site for region 8

    International Nuclear Information System (INIS)

    1993-04-01

    When expending Superfund monies at a CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) site, EPA must conduct a search to identify parties with potential financial responsibility for remediation of uncontrolled hazardous waste sites. EPA regional Superfund Waste Management Staff issue a notice letter to the potentially responsible party (PRP). Data from the notice letter is used to form the Site Enforcement Tracking System (SETS). The data includes PRP name and address, a company contact person, the date the notice was issued, and the related CERCLA site name and identification number

  18. Site enforcement tracking system (SETS): PRP listing by site for region 10

    International Nuclear Information System (INIS)

    1993-04-01

    When expending Superfund monies at a CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) site, EPA must conduct a search to identify parties with potential financial responsibility for remediation of uncontrolled hazardous waste sites. EPA regional Superfund Waste Management Staff issue a notice letter to the potentially responsible party (PRP). Data from the notice letter is used to form the Site Enforcement Tracking System (SETS). The data includes PRP name and address, a company contact person, the date the notice was issued, and the related CERCLA site name and identification number

  19. Site enforcement tracking system (SETS): PRP listing by site for region 3

    International Nuclear Information System (INIS)

    1993-04-01

    When expending Superfund monies at a CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) site, EPA must conduct a search to identify parties with potential financial responsibility for remediation of uncontrolled hazardous waste sites. EPA regional Superfund Waste Management Staff issue a notice letter to the potentially responsible party (PRP). Data from the notice letter is used to form the Site Enforcement Tracking System (SETS). The data includes PRP name and address, a company contact person, the date the notice was issued, and the related CERCLA site name and identification number

  20. Site enforcement tracking system (SETS): PRP listing by site for region 2

    International Nuclear Information System (INIS)

    1993-04-01

    When expending Superfund monies at a CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) site, EPA must conduct a search to identify parties with potential financial responsibility for remediation of uncontrolled hazardous waste sites. EPA regional Superfund Waste Management Staff issue a notice letter to the potentially responsible party (PRP). Data from the notice letter is used to form the Site Enforcement Tracking System (SETS). The data includes PRP name and address, a company contact person, the date the notice was issued, and the related CERCLA site name and identification number

  1. Site enforcement tracking system (SETS): PRP listing by site for region 5

    International Nuclear Information System (INIS)

    1993-04-01

    When expending Superfund monies at a CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) site, EPA must conduct a search to identify parties with potential financial responsibility for remediation of uncontrolled hazardous waste sites. EPA regional Superfund Waste Management Staff issue a notice letter to the potentially responsible party (PRP). Data from the notice letter is used to form the Site Enforcement Tracking System (SETS). The data includes PRP name and address, a company contact person, the date the notice was issued, and the related CERCLA site name and identification number

  2. Site enforcement tracking system (SETS): PRP listing by site for region 6

    International Nuclear Information System (INIS)

    1993-04-01

    When expending Superfund monies at a CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) site, EPA must conduct a search to identify parties with potential financial responsibility for remediation of uncontrolled hazardous waste sites. EPA regional Superfund Waste Management Staff issue a notice letter to the potentially responsible party (PRP). Data from the notice letter is used to form the Site Enforcement Tracking System (SETS). The data includes PRP name and address, a company contact person, the date the notice was issued, and the related CERCLA site name and identification number

  3. Evaluation of reliability of on-site A.C. power systems based on maintenance records

    Energy Technology Data Exchange (ETDEWEB)

    Basso, G.; Pia, S. [ENEA/TERM/VAOEC, C.R.E. Casaccia, via Anguillarese, 00100 Roma/Rome (Italy); Fusari, W. [ENEL, Rome (Italy); Soressi, G.; Vaccari, G. [ENEL, Centro di Ricerca Termica e Nucl., Via Rubattino, 54, I-20134 Milano/Milan (Italy)

    1986-02-15

    To ascertain to what extent the evaluation of the reliability of emergency diesel generators (D.G.) can be improved by means of a deeper knowledge of their operating history, a study has been carried out on 21 D.G. sets: 4 D.G. of the Caorso nuclear plant (BWR, 870 MWe) and 17 D.G. in service at 6 steam-electric fossil-fuelled plants. The major points of interest resulting from this study are: 1) reliability assessments of A.C. on-site power systems, made on the basis of the outcomes of surveillance tests, may lead to results which overestimate the real performance; 2) the unreliability of a redundant system of stand-by components is determined to a large extent by unavailabilities due to scheduled and unscheduled maintenance, latent failures, and tests. (authors)

  4. Evaluation of reliability of on-site A.C. power systems based on maintenance records

    International Nuclear Information System (INIS)

    Basso, G.; Pia, S.; Fusari, W.; Soressi, G.; Vaccari, G.

    1986-01-01

    To ascertain to what extent the evaluation of the reliability of emergency diesel generators (D.G.) can be improved by means of a deeper knowledge of their operating history, a study has been carried out on 21 D.G. sets: 4 D.G. of the Caorso nuclear plant (BWR, 870 MWe) and 17 D.G. in service at 6 steam-electric fossil-fuelled plants. The major points of interest resulting from this study are: 1) reliability assessments of A.C. on-site power systems, made on the basis of the outcomes of surveillance tests, may lead to results which overestimate the real performance; 2) the unreliability of a redundant system of stand-by components is determined to a large extent by unavailabilities due to scheduled and unscheduled maintenance, latent failures, and tests. (authors)
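
    The study's two findings can be illustrated with a small calculation: a test-only estimate counts failed starts per test demand, while the maintenance records add the time each train spends out of service. The sketch below uses made-up figures and assumes independent trains with no common-cause failures.

        def train_unavailability(failures, demands, downtime_h, period_h):
            """Combine per-demand failure probability with maintenance downtime."""
            q_demand = failures / demands        # per-demand failure probability
            u_maint = downtime_h / period_h      # fraction of time out of service
            return q_demand + u_maint - q_demand * u_maint

        # One standby diesel generator train over a year of records (made-up figures)
        q_test_only = 2 / 100                    # 2 failed starts in 100 test demands
        q_full = train_unavailability(failures=2, demands=100,
                                      downtime_h=350, period_h=8760)

        # Two-train redundant system, assuming independent trains
        print(f"test-only estimate, 2 trains: {q_test_only**2:.2e}")
        print(f"with maintenance records:     {q_full**2:.2e}")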

  5. Using OSG Computing Resources with (iLC)Dirac

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Petric, Marko

    2017-01-01

    CPU cycles for small experiments and projects can be scarce, so making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them from within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were develo...

  6. PEP computer control system

    International Nuclear Information System (INIS)

    1979-03-01

    This paper describes the design and performance of the computer system that will be used to control and monitor the PEP storage ring. Since the design is essentially complete and much of the system is operational, the system is described as it is expected to be in 1979. Section 1 of the paper describes the system hardware, which includes the computer network, the CAMAC data I/O system, and the operator control consoles. Section 2 describes a collection of routines that provide general services to applications programs. These services include a graphics package, database and data I/O programs, and a director program for use in operator communication. Section 3 describes a collection of automatic and semi-automatic control programs, known as SCORE, that contain mathematical models of the ring lattice and are used to determine, in real time, stable paths for changing beam configuration and energy and for orbit correction. Section 4 describes a collection of programs, known as CALI, that are used for the calibration of ring elements.

  7. System of common usage on the base of external memory devices and the SM-3 computer

    International Nuclear Information System (INIS)

    Baluka, G.; Vasin, A.Yu.; Ermakov, V.A.; Zhukov, G.P.; Zimin, G.N.; Namsraj, Yu.; Ostrovnoj, A.I.; Savvateev, A.S.; Salamatin, I.M.; Yanovskij, G.Ya.

    1980-01-01

    An easily modified system of common usage, based on external memory devices and an SM-3 minicomputer, that replaces some pulse analysers is described. The system has the merits of pulse analysers and is more advantageous with regard to effective use of the equipment, the possibility of changing configuration and functions, protection of data against losses due to user errors and some failures, cost per registration channel, and floor space occupied. The system of common usage is intended for the IBR-2 pulse reactor computing centre. It is designed using the facilities of the SANPO system for the SM-3 computer.

  8. Computational Approach to Profit Optimization of a Loss-Queueing System

    Directory of Open Access Journals (Sweden)

    Dinesh Kumar Yadav

    2010-01-01

    The objective of this paper is the profit optimization of a loss queueing system with finite capacity. Here, we define and compute the total expected cost (TEC), the total expected revenue (TER) and, consequently, the total optimal profit (TOP) of the system. In order to compute the total optimal profit of the system, a computing algorithm has been developed and a fast-converging Newton-Raphson method has been employed, which requires less computing time and memory space than other methods. Sensitivity analysis and its graphical observations add significant value to this model.
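
    A minimal sketch of the TEC/TER/TOP computation for an M/M/1/N loss queue is given below. The revenue and cost coefficients are illustrative assumptions, and a simple search over the service rate stands in for the paper's Newton-Raphson step.

        import numpy as np

        def mm1n_metrics(lam, mu, N):
            """Steady-state blocking probability and mean number in system, M/M/1/N."""
            rho = lam / mu
            p = rho ** np.arange(N + 1)
            p /= p.sum()                             # normalized state probabilities
            return p[N], np.arange(N + 1) @ p        # (blocking prob, mean in system)

        def total_optimal_profit(lam, N, r, c_mu, c_h, mus):
            """TOP = max over the service rate of TER - TEC."""
            return max(
                ((r * lam * (1 - mm1n_metrics(lam, mu, N)[0])  # TER: admitted jobs
                  - c_mu * mu                                  # TEC: capacity cost
                  - c_h * mm1n_metrics(lam, mu, N)[1]), mu)    # TEC: holding cost
                for mu in mus
            )

        lam, N = 5.0, 10                             # arrival rate, system capacity
        top, mu_star = total_optimal_profit(lam, N, r=10.0, c_mu=2.0, c_h=1.0,
                                            mus=np.linspace(1.0, 15.0, 141))
        print(f"optimal service rate {mu_star:.2f}, total optimal profit {top:.2f}")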

  9. A teleoperated system for remote site characterization

    International Nuclear Information System (INIS)

    Sandness, G.A.; Richardson, B.S.; Pence, J.

    1993-08-01

    The detection and characterization of buried objects and materials is an important first step in the restoration of burial sites containing chemical and radioactive waste materials at Department of Energy (DOE) and Department of Defense (DOD) facilities. To address the need to minimize the exposure of on-site personnel to the hazards associated with such sites, the DOE Office of Technology Development and the US Army Environmental Center have jointly supported the development of the Remote Characterization System (RCS). One of the main components of the RCS is a small remotely driven survey vehicle that can transport various combinations of geophysical and radiological sensors. Currently implemented sensors include ground-penetrating radar, magnetometers, an electromagnetic induction sensor, and a sodium iodide radiation detector. The survey vehicle was constructed predominantly of non-metallic materials to minimize its effect on the operation of its geophysical sensors. The system operator controls the vehicle from a remote, truck-mounted base station. Video images are transmitted to the base station by a radio link to give the operator necessary visual information. Vehicle control commands, tracking information, and sensor data are transmitted between the survey vehicle and the base station by means of a radio ethernet link. Precise vehicle tracking coordinates are provided by a differential Global Positioning System (GPS). The sensors are environmentally protected, internally cooled, and interchangeable based on mission requirements. To date, the RCS has been successfully tested at the Oak Ridge National Laboratory and the Idaho National Engineering Laboratory.
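
    In an architecture like this, every sensor reading is paired with a differential GPS fix before crossing the radio link. A minimal sketch of such a georeferenced sample follows; the field names and values are illustrative, not the actual RCS data format.

        from dataclasses import dataclass

        @dataclass
        class SurveySample:
            utc_seconds: float       # acquisition time
            lat_deg: float           # differential GPS fix
            lon_deg: float
            sensor: str              # e.g. "GPR", "magnetometer", "EM", "NaI"
            value: float             # raw or calibrated reading
            units: str

        sample = SurveySample(utc_seconds=744424034.2, lat_deg=43.52,
                              lon_deg=-112.05, sensor="magnetometer",
                              value=52103.7, units="nT")
        print(sample)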

  10. Computer control system of TARN-2

    International Nuclear Information System (INIS)

    Watanabe, S.

    1989-01-01

    A CAMAC interface system is employed to regulate the power supplies, beam diagnostics and so on. Five CAMAC stations are located in the TARN-2 area and are linked by a serial highway system. The CAMAC serial highway is driven by a serial highway driver, a Kinetic 3992, which is housed in the CAMAC powered crate and can be controlled by either of two methods: by the minicomputer through the standard branch-highway crate controller, named Type-A2, or by the microcomputer through the auxiliary crate controller. The CAMAC serial highway comprises two-way optical cables with a total length of 300 m. Each CAMAC station has serial and auxiliary crate controllers so as to allow alternative control by the local computer system. The INSBASIC interpreter is used in the main control computer and provides many kinds of 'device control functions'. Because a 'device control function' encapsulates the physical operating procedure of a device, only knowledge of the logical operating procedure is required. A touch panel system is employed to handle the complicated control flow without detailed knowledge of device usage. A rotary encoder system, analogous to potentiometer operation, is also available for smooth adjustment of setting parameters. (author)
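
    The 'device control function' idea can be sketched as a thin wrapper: the function encapsulates the physical CAMAC operating procedure, so the operator's program states only the logical intent. The camac() call, addresses and scaling below are hypothetical stand-ins, not the actual INSBASIC or CAMAC interface.

        def camac(crate, station, subaddr, function, data=None):
            """Stand-in for one CAMAC dataway cycle (would talk to the serial highway)."""
            print(f"CAMAC C{crate} N{station} A{subaddr} F{function} data={data}")

        def set_magnet_current(amps):
            """Logical operation; the physical procedure is hidden inside."""
            dac_counts = int(amps / 500.0 * 4095)  # assumed 500 A full scale, 12-bit DAC
            camac(crate=2, station=5, subaddr=0, function=16, data=dac_counts)  # write DAC
            camac(crate=2, station=5, subaddr=1, function=26)                   # enable output

        set_magnet_current(amps=120.0)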

  11. 21 CFR 892.1200 - Emission computed tomography system.

    Science.gov (United States)

    2010-04-01

    Title 21 (Food and Drugs), Part 892 (Radiology Devices), Diagnostic Devices, § 892.1200 Emission computed tomography system. (a) Identification. An emission computed tomography system is a device intended to detect the...

  12. Computer Education with "Retired" Industrial Systems.

    Science.gov (United States)

    Nesin, Dan; And Others

    1980-01-01

    Describes a student-directed computer system revival project in the Electrical and Computer Engineering department at California State Polytechnic University, which originated when an obsolete computer was donated to the department. Discusses resulting effects in undergraduate course offerings, in student extracurricular activities, and in…

  13. A programming system for bubble chamber photographs measuring tables on-line to a computer

    International Nuclear Information System (INIS)

    Miche, Roger.

    1975-06-01

    A programming system was developed on an industrial computer, a PDP 15/20, to exploit bubble chamber photographs with projection tables on-line to the computer. The system must suit the particular conditions met in the analysis of photographs from different bubble chambers and the different stages of dealing with the views (scanning, premeasurement, measurement), adapting to different strategies in the handling of measurements. The exploitation of photographs takes place in a conversational mode, given concrete form by sending messages to the operators at the tables and receiving coded answers. In this framework, the aims of the operating system are: to guide the operators' work at the tables while allowing them to interrupt the normal sequence of events, to carry out some elementary logical checks, and to write the checked data on magnetic tape with the appropriate labels as required.

  14. Integrated experiment activity monitoring for wLCG sites based on GWT

    International Nuclear Information System (INIS)

    Feijóo, Alejandro Guinó; Espinal, Xavier

    2011-01-01

    The goal of this work is to develop a High Level Monitoring (HLM) system in which to merge the distributed computing activities of an LHC experiment (ATLAS). ATLAS distributed computing is organized in clouds, where the Tier-1s (primary centers) provide services to the associated Tier-2 centers (secondary centers), so each group is seen as a cloud by the experiment. Computing activity and site stability monitoring services are numerous and decentralized. It would be very useful for a cloud manager to have a single place in which to aggregate the available monitoring information. The idea presented in this paper is to develop a set of collectors to gather information regarding site status and performance on data distribution, data processing and Worldwide LHC Computing Grid (WLCG) tests (Service Availability Monitoring), store it in specific databases, process the results and show them on a single HLM page. One can then investigate further by interacting with the front-end, which is fed by the statistics stored in the databases.

  15. Configuration system development of site and environmental information for radwaste disposal facility

    International Nuclear Information System (INIS)

    Park, Se-Moon; Yoon, Bong-Yo; Kim, Chang-Lak

    2005-01-01

    Licensing of nuclear facilities such as radioactive waste repositories requires documentation of site characterization, environmental assessment and safety assessment, and these activities produce a large volume of relevant data. For the safe management of a radioactive waste repository, data on the site and environment have to be collected and managed systematically. Particularly for the radwaste repository, which has to be institutionally controlled for a long period after closure, the data will be collected and maintained through the monitoring programme. To meet this requirement, a new programme called the 'Site Information and Total Environmental data management System (SITES)' has been developed. The scope and functions of SITES cover the database, safety assessment and monitoring system. Accordingly, SITES is designed with two modules: the SITES Database Module (SDM) and the Monitoring and Assessment (M and A) module. The SDM module is composed of three sub-modules. One is the Site Information Management System (SIMS), which manages site characterization data such as topography, geology, hydrogeology, engineering geology, etc. The others are the ENVironmental Information management System (ENVIS) and the Radioactive ENVironmental Information management System (RENVIS), which manage the environmental data required for environmental assessment. ENVIS and RENVIS cover almost all items of the environmental assessment report required by the Korean government. The SDM was constructed based on an Entity Relationship Diagram produced from each item. Using ArcGIS with the spatial characteristics of the data, it also enables groundwater and water-property monitoring networks, etc., to be analyzed with respect to every theme. The sub-modules of M and A, called the Site and Environment Monitoring System (SEMS) and the Safety Assessment System (SAS), were developed. SEMS was designed to manage the inspection records of the individual measuring instruments and facilities, and the on

  16. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost-effective for many high energy physics problems. The system is based on single-board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100. Both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high-speed "Branch Bus" to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use and has been tested error-free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  17. On several computer-oriented studies

    International Nuclear Information System (INIS)

    Takahashi, Ryoichi

    1982-01-01

    To utilize fully digital techniques for solving various difficult problems, nuclear engineers have recourse to computer-oriented approaches. Current trends in such fields as optimization theory, control system theory and computational fluid dynamics reflect the ability to use computers to obtain numerical solutions to complex problems. Special-purpose computers will be used as integral parts of problem-solving systems to process large amounts of data, to implement a control law, and even to support decision-making. Many problem-solving systems designed in the future will incorporate special-purpose computers as system components. The optimum use of computer systems is discussed: why an energy model, an energy data base and a big computer are used; why the economic process-computer will be allocated to nuclear plants in the future; and why the super-computer should be demonstrated at once. (Mori, K.)

  18. A portable grid-enabled computing system for a nuclear material study

    International Nuclear Information System (INIS)

    Tsujita, Yuichi; Arima, Tatsumi; Takekawa, Takayuki; Suzuki, Yoshio

    2010-01-01

    We have built a portable grid-enabled computing system specialized for our molecular dynamics (MD) simulation program to study Pu materials easily. Experimental approaches to revealing the properties of Pu materials are often accompanied by difficulties such as the radiotoxicity of actinides. Since a computational approach reveals new aspects to researchers without such radioactive facilities, we address an MD computation. In order to obtain more realistic results about, e.g., melting point or thermal conductivity, we need large-scale parallel computations. Most application users, who do not have supercomputers at their own institutes, must use a remote supercomputer. For such users, we have developed a portable and secure grid-enabled computing system that utilizes the grid computing infrastructure provided by the Information Technology Based Laboratory (ITBL). This system enables us to access remote supercomputers in the ITBL system seamlessly from a client PC through its graphical user interface (GUI). In particular, it enables seamless file access through the GUI. Furthermore, standard output and standard error can be monitored to follow the progress of an executing program. Since the system provides functionality useful for parallel computing on a remote supercomputer, application users can concentrate on their research. (author)

  19. A computer-aided system for the E.D.F. 1400 MW nuclear power plants control

    International Nuclear Information System (INIS)

    Beltranda, G.; Philipps, C.

    1988-01-01

    The future E.D.F. 1400 MW nuclear power plants (due to be commissioned in 1991 at CHOOZ) are provided with a control and instrumentation system including the following levels: - sensors and actuators (LEVEL 0): the interface for the elementary acquisition and control signals; - the programmable logic and numerical controllers (LEVEL 1) for the logical control sequences and analog adjustment sequences of all plant equipment; - the control room (LEVEL 2), including the computer-aided operation system as well as the wall mimic diagram and the auxiliary panel directly connected to the controllers; this is the conversational processing and control level; - the maintenance and site computer-aided systems (LEVEL 3). This paper aims at describing the computer-aided operation system (called KIC N4), its main functions, its architecture and the solutions retained as regards its software and the high quality of data required. The development of this system has been entrusted by EDF to the SEMA.METRA/CIMSA-SINTRA grouping, of which SEMA.METRA is the leading company.

  20. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office, which manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as those performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.