WorldWideScience

Sample records for computer systems betterment

  1. The Guide to Better Hospital Computer Decisions

    Science.gov (United States)

    Dorenfest, Sheldon I.

    1981-01-01

    A soon-to-be-published major study of hospital computer use entitled “The Guide to Better Hospital Computer Decisions” was conducted by my firm over the past 2½ years. The study required over twenty (20) man-years of effort at a cost of over $300,000, and the six (6) volume final report provides more than 1,000 pages of data about how hospitals are and will be using computerized medical and business information systems. It describes the current status and future expectations for computer use in major application areas, such as, but not limited to, finance, admitting, pharmacy, laboratory, data collection and hospital or medical information systems. It also includes profiles of over 100 companies and other types of organizations providing data processing products and services to hospitals. In this paper, we discuss the need for the study, its specific objectives, the methodology and approach taken to complete it, and a few major conclusions.

  2. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  3. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds when utilizing a cluster of Intel's many-integrated-core (MIC) processors, or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  4. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds when utilizing a cluster of Intel's many-integrated-core (MIC) processors, or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
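
    To make the idea of elastically matching workloads to heterogeneous back ends concrete, the sketch below shows a toy routing policy. It is purely illustrative: the task descriptor, thresholds, and backend names are invented here and are not part of the architecture proposed in the record above.

```python
from dataclasses import dataclass

# Hypothetical task descriptor: the abstract does not define one, so this is
# purely illustrative of how an elastic scheduler might pick a backend.
@dataclass
class Task:
    name: str
    data_gb: float        # size of the spatiotemporal dataset
    parallelism: str      # "fine" (per-pixel), "coarse" (per-tile), or "stream"

def choose_backend(task: Task) -> str:
    """Toy routing policy: fine-grained data-parallel work to GPUs, coarse
    tile-level work to MIC/Xeon Phi, streaming pipelines to an FPGA, and
    anything too large for a single node to a Hadoop/Spark cluster."""
    if task.data_gb > 512:
        return "spark-cluster"
    if task.parallelism == "fine":
        return "gpu-cluster"
    if task.parallelism == "coarse":
        return "mic-cluster"
    return "fpga-node"

if __name__ == "__main__":
    jobs = [Task("viewshed", 40, "fine"),
            Task("kriging-tiles", 120, "coarse"),
            Task("sensor-stream-filter", 5, "stream"),
            Task("global-trajectory-join", 900, "coarse")]
    for job in jobs:
        print(f"{job.name:>24} -> {choose_backend(job)}")
```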

  5. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer Systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute x86-64 machine code, and recommends th...

  6. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    Science.gov (United States)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  7. Quantum Computing's Classical Problem, Classical Computing's Quantum Problem

    OpenAIRE

    Van Meter, Rodney

    2013-01-01

    Tasked with the challenge to build better and better computers, quantum computing and classical computing face the same conundrum: the success of classical computing systems. Small quantum computing systems have been demonstrated, and intermediate-scale systems are on the horizon, capable of calculating numeric results or simulating physical systems far beyond what humans can do by hand. However, to be commercially viable, they must surpass what our wildly successful, highly advanced classica...

  8. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary designs of aerospace vehicles: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by directly coupled low-cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  9. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    Energy Technology Data Exchange (ETDEWEB)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V. [Institute of Informatics Problems, Russian Academy of Sciences (Russian Federation); Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S. [Telecommunication Systems Department, Peoples’ Friendship University of Russia (Russian Federation)

    2015-03-10

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources in order to deliver various computing and IT services. For the purpose of energy saving, there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. Another fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing the steady-state probabilities that allows a number of performance measures to be estimated.

  10. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    International Nuclear Information System (INIS)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S.

    2015-01-01

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources in order to deliver various computing and IT services. For the purpose of energy saving, there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. Another fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing the steady-state probabilities that allows a number of performance measures to be estimated.
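
    A rough feel for the hysteresis mechanism the two records above describe can be had from a small simulation. This is a minimal sketch, not the authors' analytical model: the arrival/service rates, thresholds, and fixed setup delay are all assumed values, and performance is estimated by simulation rather than by exact steady-state probabilities.

```python
import random

# A minimal sketch, not the authors' exact model: an M/M/c-style queue whose
# number of powered-on servers is scaled with hysteresis thresholds (scale up
# when the queue reaches `up`, scale down when it falls to `down`), and where
# a newly started server needs a fixed `setup` time before it can serve jobs.
def simulate(lam=8.0, mu=1.0, max_srv=12, up=10, down=2, setup=5.0,
             horizon=50_000.0, seed=1):
    random.seed(seed)
    t, q, on, setups = 0.0, 0, 1, []      # time, jobs in system, active servers
    area_q = 0.0                          # time-integral of the queue length
    while t < horizon:
        rate = lam + mu * min(q, on)
        t_next = t + random.expovariate(rate)
        if setups and setups[0] <= t_next:            # a server finishes setup
            area_q += q * (setups[0] - t)
            t, on = setups.pop(0), on + 1
            continue
        area_q += q * (t_next - t)
        t = t_next
        if random.random() < lam / rate:              # arrival
            q += 1
            if q >= up and on + len(setups) < max_srv:
                setups.append(t + setup)              # start warming a server
        else:                                         # departure
            q -= 1
            if q <= down and on > 1:
                on -= 1                               # switch one server off
    return area_q / horizon                           # mean number in system

if __name__ == "__main__":
    print("mean jobs in system ~", round(simulate(), 2))
```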

  11. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, the general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  12. Terahertz Computed Tomography of NASA Thermal Protection System Materials

    Science.gov (United States)

    Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.

    2011-01-01

    A terahertz axial computed tomography system has been developed that uses time domain measurements in order to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 cubic meters (1 cubic foot) with none of the safety concerns associated with x-ray computed tomography. In this study, the system is evaluated for its ability to detect and characterize flat bottom holes, drilled holes, and embedded voids in foam materials utilized as thermal protection on the external fuel tanks for the Space Shuttle. X-ray micro-computed tomography was also performed on the samples to compare against the terahertz computed tomography results and better define embedded voids. Limits of detectability based on depth and size for the samples used in this study are loosely defined. Image sharpness and morphology characterization ability for terahertz computed tomography are qualitatively described.

  13. UCN 3 and 4 plant computer system I/O point summary

    International Nuclear Information System (INIS)

    Sohn, Kwang Young; Lee, Tae Hoon; Lee, Soon Sung; Lee, Byung Chae; Yoon, Jong Keon; Park, Jeong Suk; Baek, Seung Min; Shin, Hyun Kook

    1996-05-01

    This technical report summarizes the UCN 3 and 4 I/O database points and is expected to be an important reference for many disciplines. There are several kinds of plant tests before commercial operation, such as the Preoperational Test, Cold Hydro Test (CHT), Hot Functional Test (HFT), and Power Ascension Test (PAT). These are performed in such a manner that the validity of the sensor inputs provided to the Plant Computer System (PCS) and the operational integrity of the plant are determined by monitoring the addressable I/O point identification (PID) on the Plant Computer System operator console. For better performance of activities like Emergency Operating Procedure (EOP) computerization, Safety Parameter Display System (SPDS) development, and organizing an integrated database for the NSSS, referencing past plant information about the I/O database is highly desirable. What's more, it is indispensable material for plant system research and general design document work to be done in the future. We therefore present this report, based on the UCN database, for a better understanding of the plant computer system. 5 refs. (Author)

  14. UCN 3 and 4 plant computer system I/O point summary

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, Kwang Young; Lee, Tae Hoon; Lee, Soon Sung; Lee, Byung Chae; Yoon, Jong Keon; Park, Jeong Suk; Baek, Seung Min; Shin, Hyun Kook [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-05-01

    This technical report summarizes the UCN 3 and 4 I/O database points and is expected to be an important reference for many disciplines. There are several kinds of plant tests before commercial operation, such as the Preoperational Test, Cold Hydro Test (CHT), Hot Functional Test (HFT), and Power Ascension Test (PAT). These are performed in such a manner that the validity of the sensor inputs provided to the Plant Computer System (PCS) and the operational integrity of the plant are determined by monitoring the addressable I/O point identification (PID) on the Plant Computer System operator console. For better performance of activities like Emergency Operating Procedure (EOP) computerization, Safety Parameter Display System (SPDS) development, and organizing an integrated database for the NSSS, referencing past plant information about the I/O database is highly desirable. What's more, it is indispensable material for plant system research and general design document work to be done in the future. We therefore present this report, based on the UCN database, for a better understanding of the plant computer system. 5 refs. (Author)

  15. Darwinian Spacecraft: Soft Computing Strategies Breeding Better, Faster Cheaper

    Science.gov (United States)

    Noever, David A.; Baskaran, Subbiah

    1999-01-01

    Computers can create infinite lists of combinations to try to solve a particular problem, a process called "soft computing." This process uses statistical comparables, neural networks, genetic algorithms, fuzzy variables in uncertain environments, and flexible machine learning to create systems that increase spacecraft robustness and improve metric evaluation. These concepts will allow the development of spacecraft that can perform missions at lower cost.
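
    As a flavor of the evolutionary "breeding" the abstract alludes to, the following is a minimal genetic-algorithm sketch. The target string, population size, and mutation rate are arbitrary illustrative choices and have no connection to the NASA work itself.

```python
import random

# A toy genetic algorithm: evolve a bit string toward an arbitrary target
# through selection, one-point crossover, and mutation. Illustrative only.
TARGET = [1] * 32

def fitness(bits):
    # Number of positions that already match the target.
    return sum(b == t for b, t in zip(bits, TARGET))

def evolve(pop_size=40, generations=60, mut_rate=0.02, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                     # selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut_rate) for bit in child]  # mutation
            children.append(child)
        pop = children
    best = max(pop, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best, score = evolve()
    print("best fitness:", score, "/", len(TARGET))
```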

  16. Optical character recognition systems for different languages with soft computing

    CERN Document Server

    Chaudhuri, Arindam; Badelia, Pratixa; K Ghosh, Soumya

    2017-01-01

    The book offers a comprehensive survey of soft-computing models for optical character recognition systems. The various techniques, including fuzzy and rough sets, artificial neural networks and genetic algorithms, are tested using real texts written in different languages, such as English, French, German, Latin, Hindi and Gujarati, which have been extracted from publicly available datasets. The simulation studies, which are reported in detail here, show that soft-computing based modeling of OCR systems performs consistently better than traditional models. Mainly intended as a state-of-the-art survey for postgraduates and researchers in pattern recognition, optical character recognition and soft computing, this book will also be useful for professionals in computer vision and image processing dealing with different issues related to optical character recognition.

  17. A study on the nuclear computer code maintenance and management system

    International Nuclear Information System (INIS)

    Kim, Yeon Seung; Huh, Young Hwan; Lee, Jong Bok; Choi, Young Gil; Suh, Soong Hyok; Kang, Byong Heon; Kim, Hee Kyung; Kim, Ko Ryeo; Park, Soo Jin

    1990-12-01

    According to current software development and quality assurance trends, it is necessary to develop a computer code management system for nuclear programs. For this reason, the project started in 1987. The main objectives of the project are to establish a nuclear computer code management system, to secure software reliability, and to develop nuclear computer code packages. The work performed on the project this year was to operate and maintain the computer code information system for KAERI computer codes, to develop the application tool AUTO-i for solving the 1st and 2nd moments of inertia of a polygon or circle, and to research nuclear computer code conversion between different machines. To better support nuclear code availability and reliability, assistance from the users of the codes is required. Lastly, for easy reference to the code information, we present a list of code names and information on the codes which were introduced or developed during this year. (Author)

  18. Active system area networks for data intensive computations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-04-01

    The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks (ASANs). The use of the term "active" refers to the ability of the network interfaces to perform application-specific as well as system-level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model where computations are dynamically placed within the host CPUs or the NIs depending upon the quality-of-service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data-intensive network-based applications and that the advent of low-cost powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  19. Converting differential-equation models of biological systems to membrane computing.

    Science.gov (United States)

    Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W

    2013-12-01

    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. That has the advantage of applying accurately to small systems, and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
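
    The general flavor of the conversion, replacing a deterministic rate equation with a discrete rewrite rule applied stochastically, can be illustrated with a single binding reaction. This is a generic Gillespie-style sketch under assumed rate constants and molecule counts, not the paper's actual conversion procedure or its TGF-β case study.

```python
import random

# Illustrative only: the generic idea of replacing a deterministic ODE for a
# binding reaction L + R -> LR (rate k) with a discrete rewrite rule applied
# stochastically (Gillespie-style). Counts and the rate constant are made up.
def gillespie_binding(L=100, R=80, LR=0, k=0.005, t_end=10.0, seed=0):
    random.seed(seed)
    t, trace = 0.0, [(0.0, L, R, LR)]
    while t < t_end:
        propensity = k * L * R           # firing rate of the single rewrite rule
        if propensity == 0:
            break
        t += random.expovariate(propensity)
        L, R, LR = L - 1, R - 1, LR + 1  # apply the rule: consume L and R, produce LR
        trace.append((t, L, R, LR))
    return trace

if __name__ == "__main__":
    final = gillespie_binding()[-1]
    print("t=%.2f  L=%d  R=%d  LR=%d" % final)
```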

  20. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  1. Hardening cookies in web-based systems for better system integrity

    International Nuclear Information System (INIS)

    Mohamad Safuan Sulaiman; Mohd Dzul Aiman Aslan; Saaidi Ismail; Abdul Aziz Mohd Ramli; Abdul Muin Abdul Rahman; Siti Nurbahyah Hamdan; Norlelawati Hashimuddin; Sufian Norazam Mohamed Aris

    2012-01-01

    The IT Center (ITC), as technical support and provider for most web-based systems in Nuclear Malaysia, has conducted a study to investigate cookie vulnerability in a system for better integrity. Part of the results found that cookies in a web-based system in Nuclear Malaysia could be easily manipulated. The main objective of the study is to harden the vulnerability of the cookies. Two levels of security procedures have been used and enforced, consisting of 1) a penetration test (pen test) and 2) a hardening procedure. In one of the systems, the study found that 121 attempted threats were detected after the hardening enforcement, from 23 March until 20 September 2012. At this stage, it can be concluded that the cookie vulnerability in the system has been hardened and integrity has been assured after the enforcement. This paper describes in detail the penetration testing and hardening process for cookie vulnerability in better support of web-based systems in Nuclear Malaysia. (author)
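
    One widely used hardening step against the kind of cookie manipulation described above is to sign the cookie payload server-side so tampering is detectable. The sketch below is illustrative only; the key, cookie format and helper names are assumptions, not the procedure used at Nuclear Malaysia.

```python
import hashlib
import hmac
from typing import Optional

# Assumed placeholder: in a real deployment this key lives only on the server.
SECRET_KEY = b"replace-with-a-long-random-server-side-secret"

def make_cookie(payload: str) -> str:
    """Append an HMAC-SHA256 signature so the server can detect tampering."""
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_cookie(cookie: str) -> Optional[str]:
    """Return the payload if the signature checks out, otherwise None."""
    payload, _, sig = cookie.rpartition("|")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload if hmac.compare_digest(sig, expected) else None

if __name__ == "__main__":
    cookie = make_cookie("user=alice;role=viewer")
    print(verify_cookie(cookie))                             # valid: payload returned
    print(verify_cookie(cookie.replace("viewer", "admin")))  # tampered: None
```

    In practice such signing is combined with the Secure and HttpOnly cookie flags and with server-side session storage, so that sensitive state never leaves the server at all.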

  2. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers with the Linux operating system, and application servers. The web application is intended for administrators of these systems, as an aid to better understanding the current state, load and operation of the individual components of the server systems.

  3. Better than $1/Mflops sustained: a scalable PC-based parallel computer for lattice QCD

    International Nuclear Information System (INIS)

    Fodor, Z.; Papp, G.

    2002-02-01

    We study the feasibility of a PC-based parallel computer for medium- to large-scale lattice QCD simulations. Our cluster, built at the Eoetvoes Univ., Inst. Theor. Phys., consists of 137 Intel P4-1.7 GHz nodes with 512 MB RDRAM. The 32-bit, single-precision sustained performance for dynamical QCD without communication is 1510 Mflops/node with Wilson and 970 Mflops/node with staggered fermions. This gives a total performance of 208 Gflops for Wilson and 133 Gflops for staggered QCD, respectively (for 64-bit applications the performance is approximately halved). The novel feature of our system is its communication architecture. In order to have a scalable, cost-effective machine we use Gigabit Ethernet cards for nearest-neighbor communications in a two-dimensional mesh. This type of communication is cost effective (only 30% of the hardware cost is spent on communication). According to our benchmark measurements, this type of communication results in around a 40% communication time fraction for lattices up to 48³·96 in full QCD simulations. The price/sustained-performance ratio for full QCD is better than $1/Mflops for Wilson (and around $1.5/Mflops for staggered) quarks for practically any lattice size which can fit in our parallel computer. (orig.)
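
    The aggregate figures quoted above follow from simple arithmetic on the per-node rates, as the short check below shows. The dollar figure at the end is only implied by the "better than $1/Mflops" claim; the cluster's actual cost is not stated in the abstract.

```python
# Quick arithmetic check of the aggregate figures quoted in the abstract.
# The per-node rates and node count come from the text; the total hardware
# cost is not given, so the dollar figure below is only what the quoted
# price/performance ratio would imply, not a stated number.
nodes = 137
wilson_mflops_per_node = 1510
staggered_mflops_per_node = 970

wilson_total = nodes * wilson_mflops_per_node / 1000       # in Gflops
staggered_total = nodes * staggered_mflops_per_node / 1000

print(f"Wilson:    {wilson_total:.0f} Gflops")    # ~207 Gflops (abstract: 208)
print(f"Staggered: {staggered_total:.0f} Gflops")  # ~133 Gflops

# "$1/Mflops sustained" for Wilson would correspond to a budget near:
implied_cost = wilson_total * 1000 * 1.0           # dollars, at $1 per Mflops
print(f"Implied budget at $1/Mflops: ~${implied_cost:,.0f}")
```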

  4. Applications of Raspberry Pi computers in the education system

    OpenAIRE

    STARICHENKO EVGENIY BORISOVICH

    2015-01-01

    The article puts forward a thesis about the possibility and necessity of using the single-board computer Raspberry Pi in education. The main advantages of the Raspberry Pi are its small size, low power consumption, complete operating system, free software and low price. Due to the presence of general-purpose input/output ports, it allows one to program real devices, physical systems and objects, which allows a better visual representation of the results of the work than creation of virtual images on the...

  5. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  6. Optimistic Selection Rule Better Than Majority Voting System

    Science.gov (United States)

    Sugiyama, Takuya; Obata, Takuya; Hoki, Kunihito; Ito, Takeshi

    A recently proposed ensemble approach to game-tree search has attracted a great deal of attention. The ensemble system consists of M computer players, where each player uses a different series of pseudo-random numbers. A combination of multiple players under the majority voting system would improve the performance of a Shogi-playing computer. We present a new strategy of move selection based on the search values of a number of players. The move decision is made by selecting one player from all M players. Each move is selected by referring to the evaluation value of the tree search of each player. The performance and mechanism of the strategy are examined. We show that the optimistic selection rule, which selects the player that yields the highest evaluation value, outperforms the majority voting system. By grouping 16 or more computer players straightforwardly, the winning rates of the strongest Shogi programs increase from 50 to 60% or even higher.
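
    The two selection rules being compared are easy to state in code. The sketch below uses synthetic (move, evaluation) pairs standing in for the M players' search results; the actual game-tree search is not modelled.

```python
from collections import Counter
from typing import List, Tuple

# Sketch of the two selection rules compared in the abstract, applied to
# synthetic (move, evaluation) results from M independent players.
def majority_vote(results: List[Tuple[str, float]]) -> str:
    """Each player casts one vote for its best move; the most-voted move wins."""
    votes = Counter(move for move, _ in results)
    return votes.most_common(1)[0][0]

def optimistic(results: List[Tuple[str, float]]) -> str:
    """Pick the move of the single player reporting the highest evaluation."""
    return max(results, key=lambda r: r[1])[0]

if __name__ == "__main__":
    # Three players mildly prefer move A; one player finds a line it
    # evaluates much higher for move B.
    results = [("A", 0.12), ("A", 0.10), ("A", 0.15), ("B", 0.90)]
    print("majority voting :", majority_vote(results))   # -> A
    print("optimistic rule :", optimistic(results))      # -> B
```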

  7. Computer Operating System Maintenance.

    Science.gov (United States)

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  8. Health Information System in a Cloud Computing Context.

    Science.gov (United States)

    Sadoughi, Farahnaz; Erfannia, Leila

    2017-01-01

    Healthcare as a worldwide industry is experiencing a period of growth based on health information technology. The capabilities of cloud systems make them an option for pursuing eHealth goals. The main objective of the present study was to evaluate the advantages and limitations of health information system implementation in a cloud-computing context; the study was conducted as a systematic review in 2016. ScienceDirect, Scopus, Web of Science, IEEE, PubMed and Google Scholar were searched according to the study criteria. Among 308 articles initially found, 21 articles were entered into the final analysis. All the studies had considered cloud computing as a positive tool to help advance health technology, but none had dwelt much on its limitations and threats. Electronic health record systems have been mostly studied in the fields of implementation, design, and presentation of models and prototypes. According to this research, the main advantages of cloud-based health information systems could be categorized into the following groups: economic benefits and advantages of information management. The main limitations of the implementation of cloud-based health information systems could be categorized into the 4 groups of security, legal, technical, and human restrictions. Compared to earlier studies, the present research had the advantage of dealing with the issue of health information systems on a cloud platform. The high frequency of studies conducted on the implementation of cloud-based health information systems revealed the health industry's interest in the application of this technology. Security was a subject discussed in most studies due to the sensitivity of health information. In this investigation, some mechanisms and solutions were discussed concerning the mentioned systems, which would provide a suitable area for future scientific research on this issue. The limitations and solutions discussed in this systematic study would help healthcare managers and decision

  9. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and performance characteristics of such a hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics, using the identified objectives of computing, which can be used on any platform and any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  10. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today:  the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud Computing will give the possibility for millions of users to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  11. Computer-assisted design/computer-assisted manufacturing systems: A revolution in restorative dentistry

    Directory of Open Access Journals (Sweden)

    Arbaz Sajjad

    2016-01-01

    For the better part of the past 20 years, dentistry has seen the development of many new all-ceramic materials and restorative techniques, fueled by the desire to capture the ever elusive esthetic perfection. This has resulted in the fusion of the latest in material science with the penultimate in computer-assisted design/computer-assisted manufacturing (CAD/CAM) technology. This case report describes the procedure for restoring the esthetic appearance of both the left and right maxillary peg-shaped lateral incisors with a metal-free, sintered, finely structured feldspar ceramic material using the latest laboratory CAD/CAM system. The use of CAD/CAM technology makes it possible to produce restorations faster, with precision fit and good esthetics, overcoming the errors associated with traditional ceramo-metal technology. The incorporation of this treatment modality would mean that the dentist's working procedures will have to be adapted to the methods of CAD/CAM technology.

  12. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  13. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.   §  Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models §  Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  14. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Computer systems are a critical component of human society in the 21st century. The economic sector, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, their vulnerability and exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  15. New computer systems

    International Nuclear Information System (INIS)

    Faerber, G.

    1975-01-01

    Process computers have already become indispensable technical aids for monitoring and automation tasks in nuclear power stations. Yet there are still some problems connected with their use whose elimination should be the main objective in the development of new computer systems. In the paper, some of these problems are summarized, new tendencies in hardware development are outlined, and finally some new system concepts made possible by the hardware development are explained. (orig./AK)

  16. Belle computing system

    International Nuclear Information System (INIS)

    Adachi, Ichiro; Hibino, Taisuke; Hinz, Luc; Itoh, Ryosuke; Katayama, Nobu; Nishida, Shohei; Ronga, Frederic; Tsukamoto, Toshifumi; Yokoyama, Masahiko

    2004-01-01

    We describe the present status of the computing system of the Belle experiment at the KEKB e+e- asymmetric-energy collider. So far, we have logged more than 160 fb⁻¹ of data, corresponding to the world's largest data sample of 170M BB-bar pairs at the Υ(4S) energy region. A large amount of event data has to be processed to produce an analysis event sample in a timely fashion. In addition, Monte Carlo events have to be created to control systematic errors accurately. This requires stable and efficient usage of computing resources. Here, we review our computing model and then describe how we efficiently carry out DST/MC production in our system.

  17. Petascale Computational Systems

    OpenAIRE

    Bell, Gordon; Gray, Jim; Szalay, Alex

    2007-01-01

    Computational science is changing to be data intensive. Supercomputers must be balanced systems: not just CPU farms but also petascale I/O and networking arrays. Anyone building cyberinfrastructure should allocate resources to support a balanced Tier-1 through Tier-3 design.

  18. Expert-systems and computer-based industrial systems

    International Nuclear Information System (INIS)

    Terrien, J.F.

    1987-01-01

    Framatome makes wide use of expert systems, computer-assisted engineering, production management and personnel training. It has set up separate business units and subsidiaries, and also participates in other companies which have the relevant expertise. Five examples of the products and services available in these areas are discussed. These are in the fields of applied artificial intelligence and expert systems, integrated computer-aided design and engineering, structural analysis, computer-related products and services, and document management systems. The structure of the companies involved and the work they are doing is discussed. (UK)

  19. Use of digital computing devices in systems important to safety

    International Nuclear Information System (INIS)

    1986-01-01

    The incorporation of digital computing devices in systems important to safety is now progressing fast in several countries, including Canada, France, the Federal Republic of Germany, Japan and the USA. There are now reactors with microprocessors in some trip systems. The major functions of those systems are: reactor trip initiation, display, monitoring, testing, and re-calibration of detectors. The benefits of moving to a fully computerized shut-down system should be improved reliability, greater flexibility, a better man-machine interface, improved testing, higher reactor output and lower overall cost. With the introduction of computer devices in systems important to safety, plant availability and safety are improved because disturbances are treated before they lead to safety action, thereby helping the operator to avoid errors. The Meeting presentations were divided into sessions devoted to the following topics: Needs for the use of digital computing devices (DCD) in safety important systems (SIS) (5 papers); Problems raised by the integration of SIS in the NPP control (7 papers); Description and presentation of DCD of SIS (6 papers); Results of experience in engineering, manufacture, qualification and operation of DCD hardware and software (5 papers). A separate abstract was prepared for each of these papers

  20. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  1. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  2. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  3. Computer system operation

    International Nuclear Information System (INIS)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A.

    1993-12-01

    The report described the operation and troubleshooting of the main computer and KAERINet. The results of the project are as follows: 1. The operation and troubleshooting of the main computer system (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. The operation and troubleshooting of KAERINet (PC to host connection, host to host connection, file transfer, electronic mail, X.25, CATV etc.). 3. The development of applications: the Electronic Document Approval and Delivery System, and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  4. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

    The report described the operation and troubleshooting of the main computer and KAERINet. The results of the project are as follows: 1. The operation and troubleshooting of the main computer system (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. The operation and troubleshooting of KAERINet (PC to host connection, host to host connection, file transfer, electronic mail, X.25, CATV etc.). 3. The development of applications: the Electronic Document Approval and Delivery System, and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  5. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  6. Electronic security systems better ways to crime prevention

    CERN Document Server

    Walker, Philip

    2013-01-01

    Electronic Security Systems: Better Ways to Crime Prevention teaches the reader about the application of electronics for security purposes through the use of case histories, analogies, anecdotes, and other related materials. The book is divided into three parts. Part 1 covers the concepts behind security systems - their objectives, limitations, and components; the fundamentals of space detection; detection of intruder movement indoors and outdoors; surveillance; and alarm communication and control. Part 2 discusses the equipment involved in security systems, such as the different types of sensors,

  7. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    KAUST Repository

    Southern, J.A.

    2009-10-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time while still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study, the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counterintuitive, as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks, it is shown that the coupled method is up to 80% faster than the conventional uncoupled method, and that parallel performance is better for the larger coupled problem.

  8. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    KAUST Repository

    Southern, J.A.; Plank, G.; Vigmond, E.J.; Whiteley, J.P.

    2009-01-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time while still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study, the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counterintuitive, as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks, it is shown that the coupled method is up to 80% faster than the conventional uncoupled method, and that parallel performance is better for the larger coupled problem.
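
    For reference, the parabolic/elliptic split discussed in the two records above refers to the standard bidomain formulation, written here in a common textbook form (the notation and sign conventions may differ from the paper's):

```latex
% Standard bidomain formulation (notation may differ from the paper's):
% V_m  transmembrane potential, \phi_e extracellular potential,
% \sigma_i, \sigma_e intra-/extracellular conductivity tensors,
% \chi membrane surface-to-volume ratio, C_m membrane capacitance,
% I_ion ionic current from the cell model (with state variables s).
\begin{align}
  \chi\left( C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m, \mathbf{s}) \right)
    &= \nabla \cdot \bigl( \sigma_i \nabla (V_m + \phi_e) \bigr), \\
  \nabla \cdot \bigl( (\sigma_i + \sigma_e) \nabla \phi_e \bigr)
    &= -\, \nabla \cdot \bigl( \sigma_i \nabla V_m \bigr).
\end{align}
```

    The uncoupled approach time-steps the first (parabolic) equation and then solves the second (elliptic) equation for the extracellular potential; the coupled approach assembles both into a single larger linear system at each time step.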

  9. Systems analysis and the computer

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, A S

    1983-08-01

    The words "systems analysis" are used in at least two senses. Whilst the general nature of the topic is well understood in the OR community, the nature of the term as used by computer scientists is less familiar. In this paper, the nature of systems analysis as it relates to computer-based systems is examined from the point of view that the computer system is an automaton embedded in a human system, and some facets of this are explored. It is concluded that OR analysts and computer analysts have things to learn from each other and that this ought to be reflected in their education. The important role played by change in the design of systems is also highlighted, and it is concluded that, whilst the application of techniques developed in the artificial intelligence field has considerable relevance to constructing automata able to adapt to change in the environment, study of the human factors affecting the overall systems within which the automata are embedded has an even more important role. 19 references.

  10. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by examining its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
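
    The following micro-benchmark illustrates the basic mechanism of generic checkpoint compression that the article evaluates. It is a toy: the synthetic state vector, buffer size and compression levels are arbitrary assumptions, and real checkpoint data will compress differently.

```python
import math
import struct
import time
import zlib

# Toy illustration of generic checkpoint compression (not the paper's
# benchmark suite): serialize a synthetic state vector, compress it with
# zlib at a few levels, and report compression ratio and throughput.
def make_state(n: int = 500_000) -> bytes:
    # A smooth field gives the compressor some redundancy to exploit,
    # unlike purely random bytes.
    values = [math.sin(i / 1000.0) for i in range(n)]
    return struct.pack(f"{n}d", *values)

def benchmark(level: int, raw: bytes) -> None:
    t0 = time.perf_counter()
    packed = zlib.compress(raw, level)
    dt = time.perf_counter() - t0
    print(f"level {level}: ratio {len(raw) / len(packed):.2f}x, "
          f"{len(raw) / dt / 1e6:.0f} MB/s")

if __name__ == "__main__":
    raw = make_state()
    for lvl in (1, 6, 9):
        benchmark(lvl, raw)
```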

  11. Computer control system of TRISTAN

    International Nuclear Information System (INIS)

    Kurokawa, Shin-ichi; Shinomoto, Manabu; Kurihara, Michio; Sakai, Hiroshi.

    1984-01-01

    For the operation of a large accelerator, it is necessary to connect an enormous quantity of electro-magnets, power sources, vacuum equipment, high-frequency acceleration equipment and so on, and to control them harmoniously. For this purpose, a number of computers are adopted and connected with a network; in this way, a large computer system for laboratory automation which integrates and controls the whole system is constructed. As a distributed system of large scale, functions such as electro-magnet control, file processing and operation control are assigned to respective computers, and total control is made feasible by the network connection. At the same time, as the interface with the controlled equipment, CAMAC (computer-aided measurement and control) is adopted to ensure the flexibility and the possibility of expansion of the system. Moreover, the language "NODAL", which has a network support function, was developed so that software can easily be written without considering the composition of the more complex distributed system. The accelerator in the TRISTAN project is composed of an electron linear accelerator, an accumulation ring of 6 GeV and a main ring of 30 GeV. The two ring-type accelerators must be operated synchronously as one body, and are controlled with one computer system. The hardware and software are outlined. (Kako, I.)

  12. Computer-enhanced laparoscopic training system (CELTS): bridging the gap.

    Science.gov (United States)

    Stylopoulos, N; Cotin, S; Maithel, S K; Ottensmeye, M; Jackson, P G; Bardsley, R S; Neumann, P F; Rattner, D W; Dawson, S L

    2004-05-01

    There is a large and growing gap between the need for better surgical training methodologies and the systems currently available for such training. In an effort to bridge this gap and overcome the disadvantages of the training simulators now in use, we developed the Computer-Enhanced Laparoscopic Training System (CELTS). CELTS is a computer-based system capable of tracking the motion of laparoscopic instruments and providing feedback about performance in real time. CELTS consists of a mechanical interface, a customizable set of tasks, and an Internet-based software interface. The special cognitive and psychomotor skills a laparoscopic surgeon should master were explicitly defined and transformed into quantitative metrics based on kinematics analysis theory. A single global standardized and task-independent scoring system utilizing a z-score statistic was developed. Validation exercises were performed. The scoring system clearly revealed a gap between experts and trainees, irrespective of the task performed; none of the trainees obtained a score above the threshold that distinguishes the two groups. Moreover, CELTS provided educational feedback by identifying the key factors that contributed to the overall score. Among the defined metrics, depth perception, smoothness of motion, instrument orientation, and the outcome of the task are major indicators of performance and key parameters that distinguish experts from trainees. Time and path length alone, which are the most commonly used metrics in currently available systems, are not considered good indicators of performance. CELTS is a novel and standardized skills trainer that combines the advantages of computer simulation with the features of the traditional and popular training boxes. CELTS can easily be used with a wide array of tasks and ensures comparability across different training conditions. This report further shows that a set of appropriate and clinically relevant performance metrics can be defined and a
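
    A z-score-based, task-independent composite score of the kind described above can be sketched as follows. The metric names, expert reference values, and sign conventions are invented for illustration and are not CELTS's actual metrics.

```python
import statistics
from typing import Dict, List

# Illustrative sketch of a z-score style composite score: each trainee metric
# is standardized against an expert reference population, oriented so that
# higher is better, and the standardized values are averaged. All data below
# are made up for the example.
EXPERT_REFERENCE: Dict[str, List[float]] = {
    "path_length_cm": [92, 88, 95, 90, 85],              # lower is better
    "smoothness":     [0.80, 0.85, 0.78, 0.82, 0.84],    # higher is better
    "task_time_s":    [45, 50, 42, 48, 44],              # lower is better
}
HIGHER_IS_BETTER = {"path_length_cm": False, "smoothness": True, "task_time_s": False}

def composite_z(trial: Dict[str, float]) -> float:
    zs = []
    for metric, value in trial.items():
        ref = EXPERT_REFERENCE[metric]
        z = (value - statistics.mean(ref)) / statistics.stdev(ref)
        zs.append(z if HIGHER_IS_BETTER[metric] else -z)  # orient so higher = better
    return sum(zs) / len(zs)

if __name__ == "__main__":
    trainee = {"path_length_cm": 160, "smoothness": 0.55, "task_time_s": 95}
    expert  = {"path_length_cm": 90,  "smoothness": 0.83, "task_time_s": 46}
    print("trainee score:", round(composite_z(trainee), 2))   # well below zero
    print("expert score: ", round(composite_z(expert), 2))    # near zero
```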

  13. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real-world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  14. Computer-aided system design

    Science.gov (United States)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  15. Advanced topics in security computer system design

    International Nuclear Information System (INIS)

    Stachniak, D.E.; Lamb, W.R.

    1989-01-01

    The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features

  16. Secure computing on reconfigurable systems

    OpenAIRE

    Fernandes Chaves, R.J.

    2007-01-01

    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. The SCM provides a protected and reliable computational environment, where data security and protection against malicious attacks on the system are assured. The SCM is strongly based on encryption algorithms and on the attestation of the executed functions. The use of the SCM on reconfigurable devices has the advantage of being highly adaptable to the application and the user requirements, while providing high performa...

  17. Impact of new computing systems on finite element computations

    International Nuclear Information System (INIS)

    Noor, A.K.; Fulton, R.E.; Storaasli, O.O.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified

  18. Retrofitting of NPP Computer systems

    International Nuclear Information System (INIS)

    Pettersen, G.

    1994-01-01

    Retrofitting of nuclear power plant control rooms is a continuing process for most utilities. This involves introducing and/or extending computer-based solutions for surveillance and control as well as improving the human-computer interface. The paper describes typical requirements when retrofitting NPP process computer systems, and focuses on the activities of the Institutt for energiteknikk, OECD Halden Reactor Project, with respect to such retrofitting, using examples from actual delivery projects. In particular, a project carried out for Forsmarksverket in Sweden, comprising an upgrade of the operator system in the control rooms of units 1 and 2, is described. As many of the problems of retrofitting NPP process computer systems are similar to those encountered in other kinds of process industries, an example from a non-nuclear application area is also given

  19. A Pharmacy Computer System

    OpenAIRE

    Claudia CIULCA-VLADAIA; Călin MUNTEAN

    2009-01-01

    Objective: To describe an evaluation model, seen from the customer's point of view, for the pharmacy computer system currently needed. Data Sources: literature research, ATTOFARM, WINFARM P.N.S., NETFARM, Info World - PHARMACY MANAGER and HIPOCRATE FARMACIE. Study Selection: Five Pharmacy Computer Systems were selected due to their high rate of implementation at a national level. We used the new criteria recommended by the EUROREC Institute in EHR that modifies the model of data exchanges between the E...

  20. Computable Types for Dynamic Systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter); K. Ambos-Spies; B. Loewe; W. Merkle

    2009-01-01

    In this paper, we develop a theory of computable types suitable for the study of dynamic systems in discrete and continuous time. The theory uses type-two effectivity as the underlying computational model, but we quickly develop a type system which can be manipulated abstractly, but for

  1. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, thereby protecting the group of the virtual machines from actions performed by the adversary.

  2. Container-code recognition system based on computer vision and deep neural networks

    Science.gov (United States)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules, a detection module and a recognition module. The detection module applies both computer-vision-based and neural-network-based algorithms, and generates a better detection result by combining them so as to avoid the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism can improve the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.
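
    The combination step could, for example, keep only regions where the two detectors agree and pass the rest on as lower-confidence candidates; the sketch below is one such fusion rule and is an assumption of this note, not the paper's actual algorithm.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def combine(cv_dets, nn_dets, iou_thresh=0.5):
    """Keep boxes the two detectors agree on as high-confidence detections;
    unmatched boxes from either detector are returned as extra candidates."""
    agreed, extra, matched_cv = [], [], set()
    for nb in nn_dets:
        hit = next((i for i, cb in enumerate(cv_dets)
                    if i not in matched_cv and iou(nb, cb) >= iou_thresh), None)
        if hit is not None:
            matched_cv.add(hit)
            agreed.append(nb)
        else:
            extra.append(nb)
    extra += [cb for i, cb in enumerate(cv_dets) if i not in matched_cv]
    return agreed, extra

print(combine([(10, 10, 200, 60)], [(12, 8, 198, 58), (300, 300, 400, 340)]))
```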

  3. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.
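
    The offload pattern described above might look roughly like the sketch below on the host side. Every interface here (the preprocessing step, the accelerator call, the result format) is a hypothetical stand-in; the actual framework's API is not given in the abstract.

```python
import time

def classical_preprocess(problem):
    """Reduce the full problem to the sub-task worth offloading (hypothetical)."""
    return {"qubits": 5, "circuit": f"ansatz for {problem}"}

def quantum_offload(kernel):
    """Stand-in for the accelerator call; a real framework would hand the
    kernel to a device driver or remote QPU and return measured bitstrings."""
    time.sleep(0.01)                       # pretend queueing + execution time
    return {"counts": {"00000": 512, "11111": 512}}

def solve(problem):
    t0 = time.perf_counter()
    kernel = classical_preprocess(problem)     # classical part stays on the host
    result = quantum_offload(kernel)           # selected workload goes to the QPU
    answer = max(result["counts"], key=result["counts"].get)
    return answer, time.perf_counter() - t0    # time-to-solution bookkeeping

print(solve("toy optimisation instance"))
```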

  4. Computer based training for NPP personnel (interactive communication systems and functional trainers)

    International Nuclear Information System (INIS)

    Martin, H.D.

    1987-01-01

    KWU, as a manufacturer of thermal and nuclear power plants, has extensive customer training obligations within its power plant contracts. In this respect KWU has gained large experience in the training of personnel, in the production of training material including video tapes, and in the design of simulators. KWU developed interactive communication systems (ICS) for training and retraining purposes, with a personal computer operating a video disc player on which the video instruction is stored. The training program is edited with the help of a self-developed editing system which enables the author to easily enter his instructions into the computer. ICS enables the plant management to better monitor the performance of its personnel through computerized training results and helps to save training manpower. German NPPs differ very much from other designs with respect to a more complex and integrated reactor control system and an additional reactor limitation system. Simulators for such plants therefore have also to simulate these systems. KWU developed a Functional Trainer (FT), which is a replica of the primary system, the auxiliary systems linked to it and the associated control, limitation and protection systems, including the influences of turbine operation and control

  5. Intelligent computational systems for space applications

    Science.gov (United States)

    Lum, Henry; Lau, Sonie

    Intelligent computational systems can be described as an adaptive computational system integrating both traditional computational approaches and artificial intelligence (AI) methodologies to meet the science and engineering data processing requirements imposed by specific mission objectives. These systems will be capable of integrating, interpreting, and understanding sensor input information; correlating that information to the "world model" stored within its data base and understanding the differences, if any; defining, verifying, and validating a command sequence to merge the "external world" with the "internal world model"; and, controlling the vehicle and/or platform to meet the scientific and engineering mission objectives. Performance and simulation data obtained to date indicate that the current flight processors baselined for many missions such as Space Station Freedom do not have the computational power to meet the challenges of advanced automation and robotics systems envisioned for the year 2000 era. Research issues which must be addressed to achieve greater than giga-flop performance for on-board intelligent computational systems have been identified, and a technology development program has been initiated to achieve the desired long-term system performance objectives.

  6. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  7. Development of a computer-aided digital reactivity computer system for PWRs

    International Nuclear Information System (INIS)

    Chung, S.-K.; Sung, K.-Y.; Kim, D.; Cho, D.-Y.

    1993-01-01

    Reactor physics tests at initial startup and after reloading are performed to verify the nuclear design and to ensure safe operation. Two kinds of reactivity computers, analog and digital, have been widely used in pressurized water reactor (PWR) core physics tests. The test data of both reactivity computers are displayed only on a strip chart recorder, and these data are managed by hand, so that the accuracy of the test results depends on operator expertise and experience. This paper describes the development of the computer-aided digital reactivity computer system (DRCS), which is enhanced by system management software and an improved system for the application of the PWR core physics test
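
    A digital reactivity computer of this kind typically infers reactivity from a measured flux trace by inverse point kinetics. The sketch below shows that standard calculation; the delayed-neutron constants and the integration scheme are textbook values assumed here, not the DRCS's actual implementation.

```python
import numpy as np

# Six-group delayed-neutron constants (typical U-235 values, illustrative only).
BETA_I   = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])
LAMBDA_I = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
BETA     = BETA_I.sum()
GEN_TIME = 2.0e-5   # prompt-neutron generation time, s

def reactivity_history(flux, dt):
    """Inverse point kinetics: rho = beta + (Lambda/n)*(dn/dt - sum(lambda_i*C_i))."""
    c = BETA_I * flux[0] / (GEN_TIME * LAMBDA_I)   # steady-state precursors at t=0
    rho = np.zeros_like(flux)
    for k in range(1, len(flux)):
        n = flux[k]
        dndt = (flux[k] - flux[k - 1]) / dt
        c += dt * (BETA_I * n / GEN_TIME - LAMBDA_I * c)   # explicit Euler step
        rho[k] = BETA + (GEN_TIME / n) * (dndt - np.dot(LAMBDA_I, c))
    return rho   # absolute reactivity; divide by BETA for dollars

t = np.arange(0.0, 10.0, 0.01)
flux = np.exp(t / 50.0)            # synthetic, slowly rising flux signal
print(f"final reactivity: {reactivity_history(flux, 0.01)[-1] / BETA:.3f} $")
```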

  8. Computer information systems framework

    International Nuclear Information System (INIS)

    Shahabuddin, S.

    1989-01-01

    Management information systems (MIS) is a commonly used term in the computer profession. The new information technology has caused management to expect more from computers. The process of supplying information follows a well defined procedure. An MIS should be capable of providing usable information to the various areas and levels of an organization. MIS is different from data processing. MIS and the business hierarchy provide a good framework for many organizations which are using computers. (A.B.)

  9. Computer-aided protective system (CAPS)

    International Nuclear Information System (INIS)

    Squire, R.K.

    1988-01-01

    A method of improving the security of materials in transit is described. The system provides a continuously monitored position location system for the transport vehicle, an internal computer-based geographic delimiter that makes continuous comparisons of actual positions with the preplanned routing and schedule, and a tamper detection/reaction system. The position comparison is utilized to institute preprogrammed reactive measures if the carrier is taken off course or schedule, penetrated, or otherwise interfered with. The geographic locator could be an independent internal platform or an external signal-dependent system utilizing GPS, Loran or a similar source of geographic information; a small (micro) computer could provide adequate memory and computational capacity; the assurance of the integrity of the system indicates the need for a tamper-proof container and built-in intrusion sensors. A variant of the system could provide real-time transmission of the vehicle position and condition to a central control point; such transmission could be encrypted to preclude spoofing
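
    The position-comparison logic amounts to checking each reported position against a corridor around the preplanned route and triggering the preprogrammed reaction on deviation. The sketch below illustrates that check; the corridor width, waypoint representation and reaction hook are assumptions of this note.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    r = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def react_to_deviation(position, deviation_m):
    """Placeholder for the preprogrammed reaction (alarm, disable, notify control)."""
    print(f"ALERT: off-route by {deviation_m:.0f} m at {position}")

def check_position(actual, planned_waypoints, corridor_m=500.0):
    """Return True if the vehicle is within the allowed corridor of the route."""
    deviation = min(haversine_m(actual, wp) for wp in planned_waypoints)
    if deviation > corridor_m:
        react_to_deviation(actual, deviation)
        return False
    return True

route = [(35.0, -106.6), (35.1, -106.5), (35.2, -106.4)]
print(check_position((35.1, -106.5), route))     # on route
print(check_position((35.5, -107.0), route))     # off route -> reaction triggered
```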

  10. Configurating computer-controlled bar system

    OpenAIRE

    Šuštaršič, Nejc

    2010-01-01

    The principal goal of my diploma thesis is creating an application for configuring computer-controlled beverage dispensing systems. In the preamble of my thesis I present the theoretical platform for point-of-sale systems and beverage dispensing systems, which is required for understanding the target problematics. As with many other fields, computer technologies entered the field of managing bars and restaurants quite some time ago. Basic components of every bar or restaurant a...

  11. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  12. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  13. Computer systems and nuclear industry

    International Nuclear Information System (INIS)

    Nkaoua, Th.; Poizat, F.; Augueres, M.J.

    1999-01-01

    This article deals with computer systems in the nuclear industry. In most nuclear facilities it is necessary to handle a great deal of data and a great many actions in order to help the plant operator drive the plant, control physical processes and assure safety. The design of reactors requires reliable computer codes able to simulate neutronic, mechanical or thermo-hydraulic behaviour. Calculations and simulations play an important role in safety analysis. In each of these domains, computer systems have progressively appeared as efficient tools to challenge and master complexity. (A.C.)

  14. Context-aware computing and self-managing systems

    CERN Document Server

    Dargie, Waltenegus

    2009-01-01

    Bringing together an extensively researched area with an emerging research issue, Context-Aware Computing and Self-Managing Systems presents the core contributions of context-aware computing in the development of self-managing systems, including devices, applications, middleware, and networks. The expert contributors reveal the usefulness of context-aware computing in developing autonomous systems that have practical application in the real world.The first chapter of the book identifies features that are common to both context-aware computing and autonomous computing. It offers a basic definit

  15. Systems approaches to computational modeling of the oral microbiome

    Directory of Open Access Journals (Sweden)

    Dimiter V. Dimitrov

    2013-07-01

    Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain about diet – oral microbiome – host mucosal transcriptome interactions. In particular we focus on the systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. Those approaches have applications both for basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and for human disease, where we can validate new mechanisms and biomarkers for the prevention and treatment of chronic disorders

  16. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  17. Indirection and computer security.

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Michael J.

    2011-09-01

    The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.
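
    As a toy illustration of the point, the dispatch table below adds one layer of indirection between a command name and the code that runs; the table solves an extensibility problem, but it is also exactly where a vulnerability would live if untrusted input could influence it. The example is this note's own, not one from the report.

```python
# One layer of indirection: behaviour is looked up, not hard-coded.
HANDLERS = {
    "status": lambda: "system nominal",
    "report": lambda: "daily report generated",
}

def handle(command: str) -> str:
    # The indirection layer itself is the new attack surface: if untrusted
    # input could add or rebind entries in HANDLERS, or name keys that map
    # to privileged functions, the flaw would sit here rather than in the
    # individual handlers.
    try:
        return HANDLERS[command]()
    except KeyError:
        return f"unknown command: {command!r}"

print(handle("status"))
print(handle("format_disk"))   # rejected: not present in the indirection table
```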

  18. 'Micro-8' micro-computer system

    International Nuclear Information System (INIS)

    Yagi, Hideyuki; Nakahara, Yoshinori; Yamada, Takayuki; Takeuchi, Norio; Koyama, Kinji

    1978-08-01

    The micro-computer Micro-8 system has been developed to organize a data exchange network between various instruments and a computer group including a large computer system. Used for packet exchangers and terminal controllers, the system consists of ten kinds of standard boards including a CPU board with INTEL-8080 one-chip-processor. CPU architecture, BUS architecture, interrupt control, and standard-boards function are explained in circuit block diagrams. Operations of the basic I/O device, digital I/O board and communication adapter are described with definitions of the interrupt ramp status, I/O command, I/O mask, data register, etc. In the appendixes are circuit drawings, INTEL-8080 micro-processor specifications, BUS connections, I/O address mappings, jumper connections of address selection, and interface connections. (author)

  19. Cyber Security on Nuclear Power Plant's Computer Systems

    International Nuclear Information System (INIS)

    Shin, Ick Hyun

    2010-01-01

    Computer systems are used in many different fields of industry, and most of us take great advantage of them. Because of the effectiveness and great performance of computer systems, we have become very dependent on the computer. But the more dependent we are on the computer system, the greater the risk we face when the computer system becomes unavailable, inaccessible or uncontrollable. SCADA (Supervisory Control And Data Acquisition) systems are broadly used for critical infrastructure such as transportation, electricity and water management, and if a SCADA system is vulnerable to cyber attack, the result can be a national disaster. Especially if a nuclear power plant's main control systems are attacked by cyber terrorists, the consequences may be huge; leaking radioactive material, without using physical force, would be the terrorists' main purpose. In this paper, different types of cyber attacks are described, and a possible structure of an NPP's computer network system is presented. The paper also provides possible ways of destroying the NPP's computer system, along with some suggestions for protection against cyber attacks

  20. Introduction of library administration system using an office computer in the smallscale library

    International Nuclear Information System (INIS)

    Itabashi, Keizo; Ishikawa, Masashi

    1984-01-01

    Research Information Center was established in the Fusion Research Center at the Naka site as a new section of the Department of Technical Information of the Japan Atomic Energy Research Institute. A library materials management system utilizing an office computer was introduced to provide good services. The system is a total system centered on services at the counter, except for purchasing, and the serviced materials are books, reports, journals and pamphlets. The system has produced good effects in many respects, e.g. a significantly easier inventory of library materials, and the complete removal of users' handwriting when borrowing materials, by using an optical character recognition handscanner. Those improvements have resulted in a better image of the library. (author)

  1. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in a cloud style, similar to IBM's Quantum Experience today. Clients can remotely access the quantum servers using some simple devices. In such a situation, it is of prime importance to keep the client's information secure. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step to construct a framework of blind quantum computation for the hybrid system, which provides a more feasible way for scalable blind quantum computation.

  2. Torness computer system turns round data

    International Nuclear Information System (INIS)

    Dowler, E.; Hamilton, J.

    1989-01-01

    The Torness nuclear power station has two advanced gas-cooled reactors. A key feature is the distributed computer system, which covers both data processing and automatic control. The complete computer system has over 80 processors with 45000 digital and 22000 analogue input signals. The on-line control and monitoring systems include operating systems, plant data acquisition and processing, alarm and event detection, communications software, process management systems and database management software. Some features of the system are described. (UK)

  3. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been introduced since the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator
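
    A compute farm of the kind mentioned is naturally modelled as a discrete-event simulation. The minimal sequential sketch below shows the event-list mechanics that PDES then partitions across logical processes; the arrival and service distributions are arbitrary choices for illustration.

```python
import heapq
import random

def simulate_farm(n_jobs=1000, n_nodes=50, seed=1):
    """Sequential discrete-event model of a compute farm: jobs arrive, queue
    for a free node, run, and depart. PDES would split this single event
    list across logical processes synchronized conservatively or optimistically."""
    rng = random.Random(seed)
    events, t = [], 0.0                    # (time, kind, job_id) min-heap
    for job in range(n_jobs):
        t += rng.expovariate(1.0)          # mean inter-arrival time: 1 s
        heapq.heappush(events, (t, "arrival", job))
    free_nodes, queue, finished, now = n_nodes, [], 0, 0.0
    while events:
        now, kind, job = heapq.heappop(events)
        if kind == "arrival":
            queue.append(job)
        else:                              # "departure"
            free_nodes += 1
            finished += 1
        while queue and free_nodes > 0:    # dispatch waiting jobs to free nodes
            free_nodes -= 1
            heapq.heappush(events,
                           (now + rng.expovariate(1 / 30.0), "departure", queue.pop(0)))
    return finished, now

print(simulate_farm())   # (jobs completed, simulated end time in seconds)
```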

  4. Computer-based control systems of nuclear power plants

    International Nuclear Information System (INIS)

    Kalashnikov, V.K.; Shugam, R.A.; Ol'shevsky, Yu.N.

    1975-01-01

    Computer-based control systems of nuclear power plants may be classified into those using computers for data acquisition only, those using computers for data acquisition and data processing, and those using computers for process control. In the present paper a brief review is given of the functions the systems mentioned above perform, their applications in different nuclear power plants, and some of their characteristics. The trend towards hierarchical systems using control computers with reserve machines already becomes clear when considering the control systems applied in the Canadian nuclear power plants, which were among the first to be equipped with process computers. The control system now under development for the large Soviet reactors of the WWER type will also be based on the use of control computers. That part of the system concerned with controlling the reactor assembly is described in detail

  5. Integration of process computer systems to Cofrentes NPP

    International Nuclear Information System (INIS)

    Saettone Justo, A.; Pindado Andres, R.; Buedo Jimenez, J.L.; Jimenez Fernandez-Sesma, A.; Delgado Muelas, J.A.

    1997-01-01

    The existence of three different process computer systems in Cofrentes NPP and the ageing of two of them have led to the need for their integration into a single real time computer system, known as Integrated ERIS-Computer System (SIEC), which covers the functionality of the three systems: Process Computer (PC), Emergency Response Information System (ERIS) and Nuclear Calculation Computer (OCN). The paper describes the integration project developed, which has essentially consisted in the integration of PC, ERIS and OCN databases into a single database, the migration of programs from the old process computer into the new SIEC hardware-software platform and the installation of a communications programme to transmit all necessary data for OCN programs from the SIEC computer, which in the new configuration is responsible for managing the databases of the whole system. (Author)

  6. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  7. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  8. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  9. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Automated validation of a computer operating system

    Science.gov (United States)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  11. The Northeast Utilities generic plant computer system

    International Nuclear Information System (INIS)

    Spitzner, K.J.

    1980-01-01

    A variety of computer manufacturers' equipment monitors plant systems in Northeast Utilities' (NU) nuclear and fossil power plants. The hardware configuration and the application software in each of these systems are essentially one of a kind. Over the next few years these computer systems will be replaced by the NU Generic System, whose prototype is under development now for Millstone III, an 1150 MWe Pressurized Water Reactor plant being constructed in Waterford, Connecticut. This paper discusses the Millstone III computer system design, concentrating on the special problems inherent in a distributed system configuration such as this. (auth)

  12. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  13. Computing for Decentralized Systems (lecture 1)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    With the rise of Bitcoin, Ethereum, and other cryptocurrencies, the paradigm shift towards decentralized computing is becoming apparent. Computer engineers will need to understand this shift when developing systems in the coming years. Transferring value over the Internet is just one of the first working use cases of decentralized systems, but it is expected they will be used for a number of different services such as general-purpose computing, data storage, or even new forms of governance. Decentralized systems, however, pose a series of challenges that cannot be addressed with traditional approaches in computing. Not having a central authority implies truth must be agreed upon rather than simply trusted and, so, consensus protocols, cryptographic data structures like the blockchain, and incentive models like mining rewards become critical for the correct behavior of decentralized systems. This series of lectures will be a fast track to introduce these fundamental concepts through working examples and pra...
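
    As a concrete illustration of the 'cryptographic data structures like the blockchain' mentioned above, the sketch below builds a minimal hash chain; it is a teaching toy, not the lecture's own material, and leaves out consensus and incentives entirely.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash of everything in the block except its own stored hash."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """A block's identity covers its contents plus the previous block's hash,
    so altering history forces recomputing every later block."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):                     # contents altered
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:   # link broken
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("alice pays bob 5", genesis["hash"])]
print(verify(chain))          # True
chain[0]["data"] = "tampered"
print(verify(chain))          # False: the tampering breaks the chain
```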

  14. Computing for Decentralized Systems (lecture 2)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    With the rise of Bitcoin, Ethereum, and other cryptocurrencies, the paradigm shift towards decentralized computing is becoming apparent. Computer engineers will need to understand this shift when developing systems in the coming years. Transferring value over the Internet is just one of the first working use cases of decentralized systems, but it is expected they will be used for a number of different services such as general-purpose computing, data storage, or even new forms of governance. Decentralized systems, however, pose a series of challenges that cannot be addressed with traditional approaches in computing. Not having a central authority implies truth must be agreed upon rather than simply trusted and, so, consensus protocols, cryptographic data structures like the blockchain, and incentive models like mining rewards become critical for the correct behavior of decentralized systems. This series of lectures will be a fast track to introduce these fundamental concepts through working examples and pra...

  15. Computer-aided instruction system

    International Nuclear Information System (INIS)

    Teneze, Jean Claude

    1968-01-01

    This research thesis addresses the use of teleprocessing and time sharing by the RAX IBM system and the possibility of introducing a dialogue with the machine to develop an application in which the computer plays the role of a teacher for different pupils at the same time. Two operating modes are thus exploited: a teacher mode and a pupil mode. The developed CAI (computer-aided instruction) system comprises a checker to check the course syntax in teacher mode, a translator to trans-code the course written in teacher mode into a form which can be processed by the execution programme, and the execution programme which presents the course in pupil mode

  16. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  17. Introduction of library administration system using an office computer in the smallscale library

    Energy Technology Data Exchange (ETDEWEB)

    Itabashi, Keizo; Ishikawa, Masashi

    1984-01-01

    Research Information Center was established in the Fusion Research Center at the Naka site as a new section of the Department of Technical Information of the Japan Atomic Energy Research Institute. A library materials management system utilizing an office computer was introduced to provide good services. The system is a total system centered on services at the counter, except for purchasing, and the serviced materials are books, reports, journals and pamphlets. The system has produced good effects in many respects, e.g. a significantly easier inventory of library materials, and the complete removal of users' handwriting when borrowing materials, by using an optical character recognition handscanner. Those improvements have resulted in a better image of the library.

  18. Functional programming for computer vision

    Science.gov (United States)

    Breuel, Thomas M.

    1992-04-01

    Functional programming is a style of programming that avoids the use of side effects (like assignment) and uses functions as first class data objects. Compared with imperative programs, functional programs can be parallelized better, and provide better encapsulation, type checking, and abstractions. This is important for building and integrating large vision software systems. In the past, efficiency has been an obstacle to the application of functional programming techniques in computationally intensive areas such as computer vision. We discuss and evaluate several 'functional' data structures for representing efficiently data structures and objects common in computer vision. In particular, we will address: automatic storage allocation and reclamation issues; abstraction of control structures; efficient sequential update of large data structures; representing images as functions; and object-oriented programming. Our experience suggests that functional techniques are feasible for high- performance vision systems, and that a functional approach simplifies the implementation and integration of vision systems greatly. Examples in C++ and SML are given.
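
    'Representing images as functions', as mentioned above, can be illustrated in a few lines: an image is a function from coordinates to intensity, and transformations are just function composition. The sketch below is this note's illustration (in Python rather than the paper's C++/SML).

```python
# An image is a function (x, y) -> intensity; no pixels exist until we sample.
def constant(v):
    return lambda x, y: v

def disk(cx, cy, r):
    return lambda x, y: 1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0.0

def shift(image, dx, dy):
    """Translation is composition with a coordinate change, not a pixel copy."""
    return lambda x, y: image(x - dx, y - dy)

def blend(a, b, alpha=0.5):
    return lambda x, y: alpha * a(x, y) + (1 - alpha) * b(x, y)

# Intermediate "images" are closures; only sampling does any work.
img = blend(shift(disk(0, 0, 5), 10, 10), constant(0.2))
print(img(10, 10))   # inside the shifted disk: 0.5*1.0 + 0.5*0.2 = 0.6
print(img(0, 0))     # outside the disk:        0.5*0.0 + 0.5*0.2 = 0.1
```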

  19. Operating experience of the TPA-1001 mini-computer in experimental control systems of main synchrophasotron parameters

    International Nuclear Information System (INIS)

    Kazanskij, G.S.; Khoshenko, A.A.

    1978-01-01

    The experience of applying a TPA-1001 mini-computer to control the basic parameters of a synchrophasotron is discussed. The available data have shown that the efficiency of a computer management and measurement system (CMMS) for an accelerator can be determined as a trade-off between the accelerator and the system reliability, and between the system mobility and its software. At present, the system employs two VT-340 display units, an arithmetic unit and an accelerating frequency measurement loop. In addition, the system memory is expanded up to 12 K. A new interactive program has been developed which enables the user to interact with the system via three units (a teletype and two display units). An accelerating frequency measuring and control flowchart has been implemented and covers the whole duty cycle, while its measuring accuracy is better than 4x10^-4

  20. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  1. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of simulators of computer systems are software-based running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  2. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by the current Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  3. PEP computer control system

    International Nuclear Information System (INIS)

    1979-03-01

    This paper describes the design and performance of the computer system that will be used to control and monitor the PEP storage ring. Since the design is essentially complete and much of the system is operational, the system is described as it is expected to be in 1979. Section 1 of the paper describes the system hardware, which includes the computer network, the CAMAC data I/O system, and the operator control consoles. Section 2 describes a collection of routines that provide general services to applications programs. These services include a graphics package, database and data I/O programs, and a director program for use in operator communication. Section 3 describes a collection of automatic and semi-automatic control programs, known as SCORE, that contain mathematical models of the ring lattice and are used to determine, in real time, stable paths for changing beam configuration and energy and for orbit correction. Section 4 describes a collection of programs, known as CALI, that are used for calibration of ring elements

  4. 21 CFR 892.1200 - Emission computed tomography system.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Emission computed tomography system. 892.1200... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1200 Emission computed tomography system. (a) Identification. An emission computed tomography system is a device intended to detect the...

  5. Computer Education with "Retired" Industrial Systems.

    Science.gov (United States)

    Nesin, Dan; And Others

    1980-01-01

    Describes a student-directed computer system revival project in the Electrical and Computer Engineering department at California State Polytechnic University, which originated when an obsolete computer was donated to the department. Discusses resulting effects in undergraduate course offerings, in student extracurricular activities, and in…

  6. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network, and the widespread use of software for design and pre-production in mechanical engineering have led to the fact that, at the present time, large industrial enterprises and small engineering companies implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of the compute nodes, and the selection of a node to which the user's request is transferred in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
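
    One predetermined selection rule of the kind referred to above is the 'power of two choices' policy: probe a small random sample of nodes and dispatch the request to the least-loaded one. The sketch below illustrates that rule; it is a generic example, not the algorithm studied in the paper.

```python
import random

class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.active = name, capacity, 0

    @property
    def load(self):
        return self.active / self.capacity

def pick_node(nodes, sample_size=2):
    """Sample a few nodes and send the request to the least-loaded one."""
    candidates = random.sample(nodes, min(sample_size, len(nodes)))
    return min(candidates, key=lambda n: n.load)

random.seed(0)
nodes = [Node(f"n{i}", capacity=random.choice([4, 8, 16])) for i in range(10)]
for _ in range(200):
    pick_node(nodes).active += 1   # dispatch; a real balancer also tracks completions
print(sorted((n.name, round(n.load, 2)) for n in nodes))
```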

  7. Development of real-time visualization system for Computational Fluid Dynamics on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    1998-03-01

    A real-time visualization system for computational fluid dynamics over a network connecting a parallel computing server and a client terminal was developed. Using the system, a user at a client terminal can visualize the results of a CFD (Computational Fluid Dynamics) simulation running on the parallel computer while the computation is actually in progress on the server. Using a GUI (Graphical User Interface) on the client terminal, the user is also able to change parameters of the analysis and visualization in real time during the calculation. The system carries out both the CFD simulation and the generation of pixel image data on the parallel computer, and compresses the data. Therefore, the amount of data sent from the parallel computer to the client is so small, compared with the uncompressed case, that the user can comfortably enjoy swift image updates. Parallelization of the image data generation is based on the Owner Computation Rule. The GUI on the client is built as a Java applet. Real-time visualization is thus possible on the client PC provided only that a Web browser is installed on it. (author)
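
    The effect of compressing the server-side pixel image before transmission can be sketched as below; the synthetic frame, the zlib codec and the packet layout are stand-ins chosen for this note, since the abstract does not specify the actual compression scheme.

```python
import zlib

WIDTH, HEIGHT = 640, 480

def render_frame(step):
    """Stand-in for the greyscale pixel image generated on the server each
    time step (the real system renders the current CFD field in parallel)."""
    return bytes((x + y + step) % 256 for y in range(HEIGHT) for x in range(WIDTH))

def frame_packet(step):
    raw = render_frame(step)
    compressed = zlib.compress(raw, level=6)
    header = step.to_bytes(4, "big") + len(compressed).to_bytes(4, "big")
    return header + compressed, len(raw), len(compressed)

packet, raw_len, sent_len = frame_packet(0)
print(f"frame 0: {raw_len} bytes raw -> {sent_len} bytes sent "
      f"({raw_len / sent_len:.1f}x reduction)")
```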

  8. Framework for computer-aided systems design

    International Nuclear Information System (INIS)

    Esselman, W.H.

    1992-01-01

    Advanced computer technology, analytical methods, graphics capabilities, and expert systems contribute to significant changes in the design process. Continued progress is expected. Achieving the ultimate benefits of these computer-based design tools depends on successful research and development on a number of key issues. A fundamental understanding of the design process is a prerequisite to developing these computer-based tools. In this paper a hierarchical systems design approach is described, and methods by which computers can assist the designer are examined. A framework is presented for developing computer-based design tools for power plant design. These tools include expert experience bases, tutorials, aids in decision making, and tools to develop the requirements, constraints, and interactions among subsystems and components. Early consideration of the functional tasks is encouraged. How to acquire an expert's experience base is a fundamental research problem. Computer-based guidance should be provided in a manner that supports the creativity, heuristic approaches, decision making, and meticulousness of a good designer

  9. Philosophy of a computer-automated counting system

    International Nuclear Information System (INIS)

    Perry, D.G.; Giesler, G.C.

    1979-01-01

    The LAMPF Nuclear Chemistry computer system is designed to provide both real-time control of data acquisition and facilities for data processing for a large variety of users. It is a PDP-11/34 connected to a parallel CAMAC branch highway as well as a large variety of peripherals. The philosophy for the design of this system is discussed; such points as use of the computer for control only versus direct data acquisition by the computer, why a CAMAC system was chosen, and the advantages and disadvantages of this system are covered. Also discussed are future expansion of the system and what might be done differently if the system were redesigned. 3 figures

  10. 14 CFR 415.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  11. EBR-II Cover Gas Cleanup System upgrade distributed control and front end computer systems

    International Nuclear Information System (INIS)

    Carlson, R.B.

    1992-01-01

    The Experimental Breeder Reactor II (EBR-II) Cover Gas Cleanup System (CGCS) control system was upgraded in 1991 to improve control and provide a graphical operator interface. The upgrade consisted of a main control computer, a distributed control computer, a front end input/output computer, a main graphics interface terminal, and a remote graphics interface terminal. This paper briefly describes the Cover Gas Cleanup System and the overall control system; gives reasons behind the computer system structure; and then gives a detailed description of the distributed control computer, the front end computer, and how these computers interact with the main control computer. The descriptions cover both hardware and software

  12. Computer-Mediated Communications Systems: Will They Catch On?

    Science.gov (United States)

    Cook, Dave; Ridley, Michael

    1990-01-01

    Describes the use of CoSy, a computer conferencing system, by academic librarians at McMaster University in Ontario. Computer-mediated communications systems (CMCS) are discussed, the use of the system for electronic mail and computer conferencing is described, the perceived usefulness of CMCS is examined, and a sidebar explains details of the…

  13. Computer system for nuclear power plant parameter display

    International Nuclear Information System (INIS)

    Stritar, A.; Klobuchar, M.

    1990-01-01

    The computer system for efficient, cheap and simple presentation of data on the screen of the personal computer is described. The display is in alphanumerical or graphical form. The system can be used for the man-machine interface in the process monitoring system of the nuclear power plant. It represents the third level of the new process computer system of the Nuclear Power Plant Krsko. (author)

  14. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  15. Kajian dan Implementasi Real Time Operating System pada Single Board Computer Berbasis Arm

    Directory of Open Access Journals (Sweden)

    Wiedjaja A

    2014-06-01

    Full Text Available An operating system is an important piece of software in a computer system. For personal and office use, a general-purpose operating system is sufficient. However, mission-critical applications such as nuclear power plants and automatic braking systems in cars, which need a high level of reliability, require an operating system that operates in real time. This study assesses the implementation of a Linux-based operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with a dual-core ARM Cortex-A9 (TI OMAP 4460). The research was conducted by installing the general-purpose OS Ubuntu 12.04 OMAP4-armhf and the real-time Linux 3.4.0-rt17+ kernel on the PandaBoard ES, and then comparing the latency of each OS under no-load and full-load conditions. The results show that the maximum latency of the RTOS under full load is 45 µs, much smaller than the maximum value for the GPOS under full load of 17,712 µs. The lower latency demonstrates that the RTOS can run processes within a given period of time much better than the GPOS.
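
    A minimal user-space sketch of the kind of latency measurement compared above: a task asks to wake up periodically and records how late each wake-up arrives. The 1 ms period and sample count are arbitrary assumptions, not the study's benchmark setup.

    ```python
    import time

    PERIOD_NS = 1_000_000  # request a wake-up every 1 ms (assumed test period)
    SAMPLES = 10_000

    def measure_latency():
        """Record how late each periodic wake-up arrives relative to its deadline."""
        worst = 0
        next_wake = time.monotonic_ns() + PERIOD_NS
        for _ in range(SAMPLES):
            # Sleep until the intended wake-up time.
            delay = next_wake - time.monotonic_ns()
            if delay > 0:
                time.sleep(delay / 1e9)
            # Latency = how far past the deadline we actually woke up.
            latency = time.monotonic_ns() - next_wake
            worst = max(worst, latency)
            next_wake += PERIOD_NS
        return worst / 1000  # microseconds

    if __name__ == "__main__":
        print(f"worst-case wake-up latency: {measure_latency():.1f} us")
    ```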

  16. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda.

  17. Replacement of the JRR-3 computer system

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Tomoaki; Kobayashi, Kenichi; Suwa, Masayuki; Mineshima, Hiromi; Sato, Mitsugu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2000-10-01

    The JRR-3 computer system has contributed to the stable operation of JRR-3 since 1990. However, about 10 years have passed since it was designed and some problems have occurred. Under these circumstances, the old computer system should be replaced to maintain safe and stable operation. In this replacement, the system is improved with regard to the man-machine interface and maintenance efficiency. The new system consists of three functions: 'the function of management for operation information' (renewed function), 'the function of management for facility information' (new function) and 'the function of management for information publication' (new function). Through this replacement, the new JRR-3 computer system can contribute to safe and stable operation. (author)

  18. Replacement of the JRR-3 computer system

    International Nuclear Information System (INIS)

    Kato, Tomoaki; Kobayashi, Kenichi; Suwa, Masayuki; Mineshima, Hiromi; Sato, Mitsugu

    2000-01-01

    The JRR-3 computer system has contributed to the stable operation of JRR-3 since 1990. However, about 10 years have passed since it was designed and some problems have occurred. Under these circumstances, the old computer system should be replaced to maintain safe and stable operation. In this replacement, the system is improved with regard to the man-machine interface and maintenance efficiency. The new system consists of three functions: 'the function of management for operation information' (renewed function), 'the function of management for facility information' (new function) and 'the function of management for information publication' (new function). Through this replacement, the new JRR-3 computer system can contribute to safe and stable operation. (author)

  19. Computational Design of Batteries from Materials to Systems

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Kandler A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Santhanagopalan, Shriram [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Yang, Chuanbo [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Graf, Peter A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Usseglio Viretta, Francois L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Li, Qibo [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Finegan, Donal [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Pesaran, Ahmad A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Yao, Koffi (Pierre) [Argonne National Laboratory; Abraham, Daniel [Argonne National Laboratory; Dees, Dennis [Argonne National Laboratory; Jansen, Andy [Argonne National Laboratory; Mukherjee, Partha [Texas A&M University; Mistry, Aashutosh [Texas A&M University; Verma, Ankit [Texas A&M University; Lamb, Josh [Sandia National Laboratories; Darcy, Eric [NASA

    2017-09-01

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  20. Industrial applications of formal methods to model, design and analyze computer systems

    CERN Document Server

    Craigen, Dan

    1995-01-01

    Formal methods are mathematically-based techniques, often supported by reasoning tools, that can offer a rigorous and effective way to model, design and analyze computer systems. The purpose of this study is to evaluate international industrial experience in using formal methods. The cases selected are representative of industrial-grade projects and span a variety of application domains. The study had three main objectives: · To better inform deliberations within industry and government on standards and regulations; · To provide an authoritative record on the practical experience of formal m

  1. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
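
    As a rough illustration of the 'fill in pre-defined model routines' style described above, the sketch below drives two user-supplied routines from a trivial engine; the routine names and the random-walk update are invented for illustration and are not Biocellion's actual API.

    ```python
    import random

    # Hypothetical framework-style callbacks: the modeler fills in the bodies,
    # while the (here, trivial) engine decides when and where they are called.

    def init_agents(n, grid_size):
        """Model routine: place n cells at random grid positions."""
        return [[random.randrange(grid_size), random.randrange(grid_size)] for _ in range(n)]

    def update_agent(agent, grid_size):
        """Model routine: one cell takes a random step, clamped to the grid."""
        axis = random.randrange(2)
        agent[axis] = min(grid_size - 1, max(0, agent[axis] + random.choice((-1, 1))))

    def run(steps=100, n=50, grid_size=32):
        """Minimal 'engine' that repeatedly invokes the user-supplied routines."""
        agents = init_agents(n, grid_size)
        for _ in range(steps):
            for agent in agents:
                update_agent(agent, grid_size)
        return agents

    if __name__ == "__main__":
        print(run()[:5])  # final positions of the first few cells
    ```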

  2. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita

    2011-01-01

    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  3. DYMAC computer system

    International Nuclear Information System (INIS)

    Hagen, J.; Ford, R.F.

    1979-01-01

    The DYnamic Materials ACcountability program (DYMAC) has been monitoring nuclear material at the Los Alamos Scientific Laboratory plutonium processing facility since January 1978. This paper presents DYMAC's features and philosophy, especially as reflected in its computer system design. Early decisions and tradeoffs are evaluated through the benefit of a year's operating experience

  4. Better Assessments Require Better Assessment Literacy

    Science.gov (United States)

    Stiggins, Rick

    2018-01-01

    Stiggins says that, to build better assessment systems, educators and education leaders need more opportunities to learn the basic principles of sound assessment practice. He lays out what he views as the fundamental elements of assessment literacy.

  5. Wind power systems. Applications of computational intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lingfeng [Toledo Univ., OH (United States). Dept. of Electrical Engineering and Computer Science; Singh, Chanan [Texas A and M Univ., College Station, TX (United States). Electrical and Computer Engineering Dept.; Kusiak, Andrew (eds.) [Iowa Univ., Iowa City, IA (United States). Mechanical and Industrial Engineering Dept.

    2010-07-01

    Renewable energy sources such as wind power have attracted much attention because they are environmentally friendly, do not produce carbon dioxide and other emissions, and can enhance a nation's energy security. For example, recently more significant amounts of wind power are being integrated into conventional power grids. Therefore, it is necessary to address various important and challenging issues related to wind power systems, which are significantly different from the traditional generation systems. This book is a resource for engineers, practitioners, and decision-makers interested in studying or using the power of computational intelligence based algorithms in handling various important problems in wind power systems at the levels of power generation, transmission, and distribution. Researchers have been developing biologically-inspired algorithms in a wide variety of complex large-scale engineering domains. Distinguished from the traditional analytical methods, the new methods usually accomplish the task through their computationally efficient mechanisms. Computational intelligence methods such as evolutionary computation, neural networks, and fuzzy systems have attracted much attention in electric power systems. Meanwhile, modern electric power systems are becoming more and more complex in order to meet the growing electricity market. In particular, the grid complexity is continuously enhanced by the integration of intermittent wind power as well as the current restructuring efforts in electricity industry. Quite often, the traditional analytical methods become less efficient or even unable to handle this increased complexity. As a result, it is natural to apply computational intelligence as a powerful tool to deal with various important and pressing problems in the current wind power systems. This book presents the state-of-the-art development in the field of computational intelligence applied to wind power systems by reviewing the most up

  6. Organization of the secure distributed computing based on multi-agent system

    Science.gov (United States)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, the development of methods for distributed computing receives much attention. One such method is the use of multi-agent systems. Distributed computing organized on conventional networked computers can be exposed to security threats against the computational processes. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as the computing nodes. The proposed multi-agent control system for distributed computing makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve a large task by creating a distributed computation. Agents deployed on a computer network can: configure a distributed computing system; distribute the computational load among the computers they operate; and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting new computers to the system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of the computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects cases of falsification of the results in the distributed system, which could otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.
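
    A toy sketch of the redundancy idea behind detecting falsified results: the coordinating agent hands each task to two workers and accepts only matching answers. The task shape and worker behaviour are assumptions for illustration, not the authors' algorithm.

    ```python
    import random

    def worker(task, dishonest=False):
        """Compute a partial sum; a dishonest node may falsify the result."""
        result = sum(task)
        return result + random.randint(1, 10) if dishonest else result

    def coordinator(tasks, dishonest_rate=0.1):
        """Assign every task to two workers and keep only agreeing results."""
        accepted, rejected = [], []
        for task in tasks:
            r1 = worker(task, dishonest=random.random() < dishonest_rate)
            r2 = worker(task, dishonest=random.random() < dishonest_rate)
            (accepted if r1 == r2 else rejected).append(task)
        return accepted, rejected

    if __name__ == "__main__":
        tasks = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
        ok, redo = coordinator(tasks)
        print(f"{len(ok)} results accepted, {len(redo)} tasks re-scheduled")
    ```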

  7. Creating Better Library Information Systems: The Road to FRBR-Land

    Science.gov (United States)

    Mercun, Tanja; Švab, Katarina; Harej, Viktor; Žumer, Maja

    2013-01-01

    Introduction: To provide valuable services in the future, libraries will need to create better information systems and set up an infrastructure more in line with the current technologies. The "Functional Requirements for Bibliographic Records" conceptual model provides a basis for this transformation, but there are still a number of…

  8. A computer-controlled conformal radiotherapy system I: overview

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; McShan, Daniel L.; Kessler, Marc L.; Matrone, Gwynne M.; Lewis, James D.; Weaver, Tamar A.

    1995-01-01

    Purpose: Equipment developed for use with computer-controlled conformal radiotherapy (CCRT) treatment techniques, including multileaf collimators and/or computer-control systems for treatment machines, are now available. The purpose of this work is to develop a system that will allow the safe, efficient, and accurate delivery of CCRT treatments as routine clinical treatments, and permit modifications of the system so that the delivery process can be optimized. Methods and Materials: The needs and requirements for a system that can fully support modern computer-controlled treatment machines equipped with multileaf collimators and segmental or dynamic conformal therapy capabilities have been analyzed and evaluated. This analysis has been used to design and then implement a complete approach to the delivery of CCRT treatments. Results: The computer-controlled conformal radiotherapy system (CCRS) described here consists of a process for the delivery of CCRT treatments, and a complex software system that implements the treatment process. The CCRS system described here includes systems for plan transfer, treatment delivery planning, sequencing of the actual treatment delivery process, graphical simulation and verification tools, as well as an electronic chart that is an integral part of the system. The CCRS system has been implemented for use with a number of different treatment machines. The system has been used clinically for more than 2 years to perform CCRT treatments for more than 200 patients. Conclusions: A comprehensive system for the implementation and delivery of computer-controlled conformal radiation therapy (CCRT) plans has been designed and implemented for routine clinical use with multisegment, computer-controlled, multileaf-collimated conformal therapy. The CCRS system has been successfully implemented to perform these complex treatments, and is considered quite important to the clinical use of modern computer-controlled treatment techniques

  9. Cobit system in the audit processes of the systems of computer systems

    Directory of Open Access Journals (Sweden)

    Julio Jhovany Santacruz Espinoza

    2017-12-01

    Full Text Available The present research work has been carried out to show the benefits of using the COBIT framework in the auditing of computer systems. The problem addressed is: how does the use of the COBIT framework affect the audit process in institutions? The main objective is to identify the impact of the use of COBIT on the auditing of computer systems within both public and private organizations. To achieve the stated objectives, the research first develops the conceptualization of key terms for an easy understanding of the subject. As a conclusion, we can say that COBIT makes it possible to define a methodology that uses information from IT departments to determine the Information Technology (IT) resources specified in COBIT, such as files, programs, computer networks, and the personnel that use or manipulate the information, with the purpose of providing the information that the organization or company requires to achieve its objectives.

  10. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author)
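
    None of the codes named above is reproduced here, but the basic fault-tree arithmetic that such reliability codes automate can be illustrated with a minimal sketch; the gate structure and component unavailabilities below are invented for illustration.

    ```python
    def and_gate(*p):
        """Top event needs all inputs to fail (independent components)."""
        out = 1.0
        for q in p:
            out *= q
        return out

    def or_gate(*p):
        """Top event occurs if any input fails: 1 minus the product of survival probabilities."""
        out = 1.0
        for q in p:
            out *= (1.0 - q)
        return 1.0 - out

    if __name__ == "__main__":
        # Assumed unavailabilities for two redundant pump trains and a common valve.
        pump_a, pump_b, valve = 1e-2, 1e-2, 1e-3
        top = or_gate(and_gate(pump_a, pump_b), valve)
        print(f"top event probability ~ {top:.2e}")
    ```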

  11. Use of computer codes for system reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sabek, M.; Gaafar, M. (Nuclear Regulatory and Safety Centre, Atomic Energy Authority, Cairo (Egypt)); Poucet, A. (Commission of the European Communities, Ispra (Italy). Joint Research Centre)

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author).

  12. Realization of the computation process in the M-6000 computer for physical process automatization systems basing on CAMAC system

    International Nuclear Information System (INIS)

    Antonichev, G.M.; Vesenev, V.A.; Volkov, A.S.; Maslov, V.V.; Shilkin, I.P.; Bespalova, T.V.; Golutvin, I.A.; Nevskaya, N.A.

    1977-01-01

    Software for physical experiments using the CAMAC devices and the M-6000 computer is further developed. The construction principles and operation of the data acquisition system and the system generator are described. Using the generator for the data acquisition system, the experimenter implements the logic for data exchange between the CAMAC devices and the computer

  13. Computer Security Systems Enable Access.

    Science.gov (United States)

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  14. Promoting better integration of health information systems: best practices and challenges

    NARCIS (Netherlands)

    Michelsen, K.; Brand, H.; Achterberg, P.; Wilkinson, J.

    2015-01-01

    Health Evidence Network Synthesis Report: Promoting better integration of health information systems: best practices and challenges. K. Michelsen, H. Brand, P. Achterberg, J. Wilkinson, 2015. Authors: Kai Michelsen, Department of International Health, Maastricht University, Maastricht,

  15. Designing an Assistant System Encouraging Ergonomic Computer Usage

    Directory of Open Access Journals (Sweden)

    Hüseyin GÜRÜLER

    2017-12-01

    Full Text Available Today, people of almost every age group are users of computers and computer-aided systems. Technology makes our life easier, but it can also threaten our health. In recent years, one of the main causes of the proliferation of complaints such as lower back pain, neck pain, hernia, arthritis, visual disturbances and obesity has been incorrect computer usage, and the widespread use of computers amplifies these findings. The purpose of this study is to guide computer users towards more ergonomic computer use. The user-interactive system developed for this purpose monitors the user's distance to the screen, calculates the viewing angle and the time spent looking at the screen, and provides an audio or text warning when necessary. It is expected that this system will reduce the health problems caused by frequent computer usage by encouraging individuals to use computers ergonomically.
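
    A simplified sketch of the kind of rule such an assistant can apply; the distance, viewing-angle and session-time thresholds are assumptions for illustration, not the values used in the study.

    ```python
    import math

    MIN_DISTANCE_CM = 50      # assumed minimum comfortable eye-to-screen distance
    MAX_SESSION_MIN = 45      # assumed continuous-use limit before a break warning
    MAX_LOOK_ANGLE_DEG = 20   # assumed limit on vertical gaze angle

    def look_angle(eye_height_cm, screen_height_cm, distance_cm):
        """Vertical gaze angle between the eyes and the screen centre."""
        return math.degrees(math.atan2(abs(eye_height_cm - screen_height_cm), distance_cm))

    def check_posture(distance_cm, minutes_on_screen, eye_height_cm, screen_height_cm):
        """Return the list of warnings the assistant would raise for this posture."""
        warnings = []
        if distance_cm < MIN_DISTANCE_CM:
            warnings.append("Move back from the screen.")
        if look_angle(eye_height_cm, screen_height_cm, distance_cm) > MAX_LOOK_ANGLE_DEG:
            warnings.append("Adjust the screen height to reduce neck strain.")
        if minutes_on_screen > MAX_SESSION_MIN:
            warnings.append("Take a short break.")
        return warnings

    if __name__ == "__main__":
        print(check_posture(distance_cm=40, minutes_on_screen=60,
                            eye_height_cm=120, screen_height_cm=95))
    ```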

  16. Computer-Mediated Communication Systems

    Directory of Open Access Journals (Sweden)

    Bin Yu

    2011-10-01

    Full Text Available The essence of communication is to exchange and share information. Computers provide a new medium for human communication. A CMC system, composed of humans and computers, absorbs and then extends the advantages of all earlier forms of communication, embracing the instant interaction of oral communication, the abstract logic of print, and the vivid images of film and television. It also creates a series of new communication formats, such as hypertext and multimedia, which are new ways of organizing information and delivering messages across space. Benefiting from the continuous development of technology and mechanisms, computer-mediated communication makes the dream of transmitting information across space and time come true, which will certainly have a great impact on our social lives.

  17. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  18. Reliability and protection against failure in computer systems

    International Nuclear Information System (INIS)

    Daniels, B.K.

    1979-01-01

    Computers are being increasingly integrated into the control and safety systems of large and potentially hazardous industrial processes. This development introduces problems which are particular to computer systems and opens the way to new techniques of solving conventional reliability and availability problems. References to the developing fields of software reliability, human factors and software design are given, and these subjects are related, where possible, to the quantified assessment of reliability. Original material is presented in the areas of reliability growth and computer hardware failure data. The report draws on the experience of the National Centre of Systems Reliability in assessing the capability and reliability of computer systems both within the nuclear industry, and from the work carried out in other industries by the Systems Reliability Service. (author)

  19. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balancing of loads. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communication between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, and wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
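
    A loose data-structure sketch of the split-token idea described above: a small moving portion travels between nodes and points back to a resident portion held in one node's memory. All names and fields are invented for illustration and are not taken from the patent.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ResidentPortion:
        """The large, stationary half of a split token, kept in one node's memory."""
        payload: dict = field(default_factory=dict)

    @dataclass
    class MovingPortion:
        """The small half that is passed from computer to computer."""
        function: str           # which function the token asks the receiver to run
        home_node: str          # where the resident portion lives
        resident_key: int       # how to find it there

    class Node:
        def __init__(self, name):
            self.name = name
            self.store = {}      # resident portions held by this node

        def deposit(self, key, data):
            self.store[key] = ResidentPortion(payload=data)

        def execute(self, token, nodes):
            """Fetch the resident data (possibly from another node) and 'run' the function."""
            data = nodes[token.home_node].store[token.resident_key].payload
            print(f"{self.name}: running {token.function} on {data}")

    if __name__ == "__main__":
        nodes = {"A": Node("A"), "B": Node("B")}
        nodes["A"].deposit(1, {"samples": [3, 1, 4]})
        token = MovingPortion(function="average", home_node="A", resident_key=1)
        nodes["B"].execute(token, nodes)   # the moving portion has travelled to B
    ```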

  20. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, and then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
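
    On a Linux board configured as a USB HID gadget, forwarding a keystroke to the target machine reduces to writing an 8-byte boot-keyboard report to the gadget device. The sketch below assumes the standard /dev/hidg0 gadget setup and is not the paper's actual implementation.

    ```python
    import time

    HIDG = "/dev/hidg0"   # USB HID gadget device (assumes the gadget is already configured)
    KEYCODES = {"a": 0x04, "b": 0x05, "c": 0x06}  # a few USB HID keyboard usage IDs

    def send_key(keycode):
        """Send key-down followed by key-up as standard 8-byte boot keyboard reports."""
        down = bytes([0, 0, keycode, 0, 0, 0, 0, 0])
        up = bytes(8)
        with open(HIDG, "wb", buffering=0) as dev:
            dev.write(down)
            dev.write(up)

    if __name__ == "__main__":
        # Type "abc" on the target machine controlled through the gadget port.
        for ch in "abc":
            send_key(KEYCODES[ch])
            time.sleep(0.05)
    ```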

  1. Soft computing in green and renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, Kasthurirangan [Iowa State Univ., Ames, IA (United States). Iowa Bioeconomy Inst.; US Department of Energy, Ames, IA (United States). Ames Lab; Kalogirou, Soteris [Cyprus Univ. of Technology, Limassol (Cyprus). Dept. of Mechanical Engineering and Materials Sciences and Engineering; Khaitan, Siddhartha Kumar (eds.) [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Electrical Engineering and Computer Engineering

    2011-07-01

    Soft Computing in Green and Renewable Energy Systems provides a practical introduction to the application of soft computing techniques and hybrid intelligent systems for designing, modeling, characterizing, optimizing, forecasting, and performance prediction of green and renewable energy systems. Research is proceeding at jet speed on renewable energy (energy derived from natural resources such as sunlight, wind, tides, rain, geothermal heat, biomass, hydrogen, etc.) as policy makers, researchers, economists, and world agencies have joined forces in finding alternative sustainable energy solutions to current critical environmental, economic, and social issues. The innovative models, environmentally benign processes, data analytics, etc. employed in renewable energy systems are computationally-intensive, non-linear and complex as well as involve a high degree of uncertainty. Soft computing technologies, such as fuzzy sets and systems, neural science and systems, evolutionary algorithms and genetic programming, and machine learning, are ideal in handling the noise, imprecision, and uncertainty in the data, and yet achieve robust, low-cost solutions. As a result, intelligent and soft computing paradigms are finding increasing applications in the study of renewable energy systems. Researchers, practitioners, undergraduate and graduate students engaged in the study of renewable energy systems will find this book very useful. (orig.)

  2. Quantum analogue computing.

    Science.gov (United States)

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.

  3. 21 CFR 892.1750 - Computed tomography x-ray system.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Computed tomography x-ray system. 892.1750 Section... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1750 Computed tomography x-ray system. (a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to...

  4. Design and Study of a Next-Generation Computer-Assisted System for Transoral Laser Microsurgery

    Directory of Open Access Journals (Sweden)

    Nikhil Deshpande PhD

    2018-05-01

    Full Text Available Objective To present a new computer-assisted system for improved usability, intuitiveness, efficiency, and controllability in transoral laser microsurgery (TLM). Study Design Pilot technology feasibility study. Setting A dedicated room with a simulated TLM surgical setup: surgical microscope, surgical laser system, instruments, ex vivo pig larynxes, and computer-assisted system. Subjects and Methods The computer-assisted laser microsurgery (CALM) system consists of a novel motorized laser micromanipulator and a tablet- and stylus-based control interface. The system setup includes the Leica 2 surgical microscope and the DEKA HiScan Surgical laser system. The system was validated through a first-of-its-kind observational study with 57 international surgeons with varied experience in TLM. The subjects performed real surgical tasks on ex vivo pig larynxes in a simulated TLM scenario. The qualitative aspects were established with a newly devised questionnaire assessing the usability, efficiency, and suitability of the system. Results The surgeons evaluated the CALM system with an average score of 6.29 (out of 7) in ease of use and ease of learning, while an average score of 5.96 was assigned for controllability and safety. A score of 1.51 indicated reduced workload for the subjects. Of 57 subjects, 41 stated that the CALM system allows better surgical quality than the existing TLM systems. Conclusions The CALM system augments the usability, controllability, and efficiency in TLM. It enhances the ergonomics and accuracy beyond the current state of the art, potentially improving the surgical safety and quality. The system offers the intraoperative automated scanning of customized long incisions achieving uniform resections at the surgical site.

  5. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, first introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. We also propose an interesting application to the formalisation of hybrid systems. We obtain a class of hybrid systems whose trajectories are computable in the sense of computable analysis. This research was supported in part by the RFBR (grants N 99-01-00485, N 00-01-00810) and by the Siberian Branch of RAS (a grant for young researchers, 2000)

  6. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  7. Development of a computer design system for HVAC

    International Nuclear Information System (INIS)

    Miyazaki, Y.; Yotsuya, M.; Hasegawa, M.

    1993-01-01

    The development of a computer design system for HVAC (Heating, Ventilating and Air Conditioning) systems is presented in this paper. It supports the air conditioning design for a nuclear power plant and a reprocessing plant. This system integrates various computer design systems which were developed separately for the various design phases of HVAC. The purposes include centralizing the HVAC data, optimizing the design, and reducing design time. The centralized HVAC data are managed by a DBMS (Data Base Management System). The DBMS separates the computer design system into a calculation module and the data. The design system can thus be expanded easily in the future. 2 figs

  8. Interactive Computer-Enhanced Remote Viewing System (ICERVS)

    International Nuclear Information System (INIS)

    1993-08-01

    The Interactive Computer-Enhanced Remote Viewing System (ICERVS) supports the robotic remediation of hazardous environments such as underground storage tanks, buried waste sites, and contaminated production facilities. The success of these remediation missions will depend on reliable geometric descriptions of the work environment in order to achieve effective task planning, path planning, and collision avoidance. ICERVS provides a means for deriving a reliable geometric description more effectively and efficiently than current systems by combining a number of technologies: sensing of the environment to acquire dimensional and material property data; integration of acquired data into a common data structure (based on octree technology); presentation of data to robotic task planners for analysis and visualization; interactive synthesis of geometric/surface models to denote features of interest in the environment and transfer of this information to robot control and collision avoidance systems. A key feature of ICERVS is that it will enable an operator to match xyz data from a sensor with surface models of the same region in space. This capability will help operators to better manage the complexities of task and path planning in three-dimensional (3D) space, thereby leading to safer and more effective remediation. The Phase 1 work performed by MTI has brought the ICERVS design to Maturity Level 3, Subscale Major Subsystem, and met the established success criteria
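
    As a rough illustration of the octree-based common data structure mentioned above, the minimal point octree below stores sensor points in a recursively subdivided cube; the depth limit and bounding box are assumptions for illustration, not the ICERVS implementation.

    ```python
    class Octree:
        """Minimal point octree: each node covers a cube and splits into 8 children."""
        def __init__(self, center, half, depth=5):
            self.center, self.half, self.depth = center, half, depth
            self.points, self.children = [], None

        def _child_index(self, p):
            cx, cy, cz = self.center
            return (p[0] > cx) | ((p[1] > cy) << 1) | ((p[2] > cz) << 2)

        def insert(self, p):
            if self.depth == 0:          # store points only at the finest level
                self.points.append(p)
                return
            if self.children is None:    # subdivide lazily on first insertion
                h = self.half / 2
                cx, cy, cz = self.center
                self.children = [
                    Octree((cx + (h if i & 1 else -h),
                            cy + (h if i & 2 else -h),
                            cz + (h if i & 4 else -h)), h, self.depth - 1)
                    for i in range(8)
                ]
            self.children[self._child_index(p)].insert(p)

    if __name__ == "__main__":
        tree = Octree(center=(0.0, 0.0, 0.0), half=10.0)
        for p in [(1, 2, 3), (-4, 5, -6), (7, -8, 9)]:   # pretend range-sensor points
            tree.insert(p)
        print("inserted 3 sensor points")
    ```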

  9. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

    In this paper we describe a new method for proving lower bounds on the complexity of VLSI - computations and more generally distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac...

  10. A computable type theory for control systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter); L. Guo; J. Baillieul

    2009-01-01

    In this paper, we develop a theory of computable types suitable for the study of control systems. The theory uses type-two effectivity as the underlying computational model, but we quickly develop a type system which can be manipulated abstractly, but for which all allowable operations

  11. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B.; Georgiev, G.; Dimitrov, L. [and others]

    1996-12-31

    A multichannel computer-controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage: 100-3000 V; output current: 0-3 mA; maximum number of channels in one crate: 78. 3 refs.

  12. Plant computer system in nuclear power station

    International Nuclear Information System (INIS)

    Kato, Shinji; Fukuchi, Hiroshi

    1991-01-01

    In nuclear power stations, a centrally concentrated monitoring system has been adopted, and a large amount of information and operational equipment is concentrated in the central control room, which is therefore the important point of communication between the plant and the operators. Recently, due to the increase in unit capacity, the strengthening of safety, problems of the man-machine interface and so on, it has become important to concentrate information and to automate and simplify machinery and equipment in order to improve the operational environment, reliability and so on. As an example of the relation between nuclear power stations and computer systems, to which attention has recently been paid as the man-machine interface, the system in the Tsuruga Power Station of the Japan Atomic Power Co. is presented. The No. 2 plant in the Tsuruga Power Station is a PWR plant with 1160 MWe output and a domestically built standardized plant; accordingly, the computer system adopted there is explained. The fundamental concept of the central control board, the process computer system, the design policy, basic system configuration, reliability and maintenance, CRT displays, and the computer system for the No. 1 BWR 357 MW plant are reported. (K.I.)

  13. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    International Nuclear Information System (INIS)

    Kazakov, Artem; Furukawa, Kazuro

    2010-01-01

    Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability for control system components. Recently the telecom industry came up with an open hardware specification - the Advanced Telecom Computing Architecture (ATCA). This specification aims at better reliability, availability and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth and proved to be stable and well represented by a number of vendors. ATCA is an industry standard for highly available systems. On the other hand, the Service Availability Forum, a consortium of leading communications and computing companies, describes the interaction between hardware and software. SAF defines a set of specifications such as the Hardware Platform Interface and the Application Interface Specification. The SAF specifications provide an extensive description of highly available systems, services and their interfaces. Originally aimed at telecom applications, these specifications can be used for accelerator control software as well. This study describes the benefits of using these specifications and their possible adoption for accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to utilize the benefits of the ATCA platform.

  14. Analysis of control room computers at nuclear power plants

    International Nuclear Information System (INIS)

    Leijonhufvud, S.; Lindholm, L.

    1984-03-01

    The following problems are analyzed:
    - the development of a system - hardware and software
    - data
    - the acquisition of the system
    - operation and service.
    The findings are:
    - most reliability problems can be solved by doubling critical units
    - reliability in software has a quality that can only be created through development
    - reliability of computer systems in extremely unusual situations cannot be quantified or verified, except possibly for very small and functionally simple systems
    - to attain the highest possible reliability, such simple systems have to: contain one or very few functions; be functionally simple; be application-transparent, viz. the internal function of the system should be independent of the status of the process
    - a computer system will compete successfully with other possible systems regarding reliability for the following reasons: if the function is simple enough for other systems, the computer system would be small; if the functions cannot be realized by other systems, the computer system would complement the human effort, and the man-machine system would be a better solution than no system, possibly better than the human function alone. (Aa)

  15. Large computer systems and new architectures

    International Nuclear Information System (INIS)

    Bloch, T.

    1978-01-01

    The super-computers of today are becoming quite specialized and one can no longer expect to get all the state-of-the-art software and hardware facilities in one package. In order to achieve faster and faster computing it is necessary to experiment with new architectures, and the cost of developing each experimental architecture into a general-purpose computer system is too high when one considers the relatively small market for these computers. The result is that such computers are becoming 'back-ends' either to special systems (BSP, DAP) or to anything (CRAY-1). Architecturally the CRAY-1 is the most attractive today since it guarantees a speed gain of a factor of two over a CDC 7600 thus allowing us to regard any speed up resulting from vectorization as a bonus. It looks, however, as if it will be very difficult to make substantially faster computers using only pipe-lining techniques and that it will be necessary to explore multiple processors working on the same problem. The experience which will be gained with the BSP and the DAP over the next few years will certainly be most valuable in this respect. (Auth.)

  16. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
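
    A minimal sketch of the manager-worker pattern mentioned in task (2), with a process pool standing in for the PVM worker processes; the per-block work is an invented stand-in for the real flow-solver sub-domain sweeps.

    ```python
    from multiprocessing import Pool

    def solve_block(block_id):
        """Stand-in for one sub-domain sweep of the flow solver."""
        return block_id, sum(i * i for i in range(100_000 + block_id))

    def manager(num_blocks=16, num_workers=4):
        """The manager hands blocks to idle workers and gathers the partial results."""
        with Pool(num_workers) as pool:
            results = dict(pool.imap_unordered(solve_block, range(num_blocks)))
        return results

    if __name__ == "__main__":
        out = manager()
        print(f"collected {len(out)} block results")
    ```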

  17. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  18. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  19. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  20. ADAM: analysis of discrete models of biological systems using computer algebra.

    Science.gov (United States)

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web
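
    For intuition, the attractors of a very small Boolean network can be found by exhaustive state enumeration, as in the sketch below; ADAM itself uses algebraic methods over finite fields rather than brute force, and this toy network is invented for illustration.

    ```python
    from itertools import product

    def next_state(state):
        """Toy 3-node Boolean network: returns the next value of each node."""
        x, y, z = state
        return (y and z, not x, x or y)

    def attractors():
        """Follow every start state until it revisits a state; collect the resulting cycles."""
        found = set()
        for start in product((False, True), repeat=3):
            seen = []
            s = start
            while s not in seen:
                seen.append(s)
                s = next_state(s)
            cycle = tuple(seen[seen.index(s):])   # the periodic part reached from this start
            # Normalise the cycle so the same attractor is counted only once.
            rotations = [cycle[i:] + cycle[:i] for i in range(len(cycle))]
            found.add(min(rotations))
        return found

    if __name__ == "__main__":
        for a in attractors():
            print("attractor:", a)
    ```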

  1. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  2. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  3. Evaluation of computer-based ultrasonic inservice inspection systems

    International Nuclear Information System (INIS)

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T.

    1994-03-01

    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems

  4. Computer Security: better code, fewer problems

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2016-01-01

    The origin of many security incidents is negligence or unintentional mistakes made by web developers or programmers. In the rush to complete the work, due to skewed priorities, or simply out of ignorance, basic security principles can be omitted or forgotten. The resulting vulnerabilities lie dormant until the evil side spots them and decides to hit hard. Computer security incidents in the past have put CERN’s reputation at risk due to websites being defaced with negative messages about the Organization, hash files of passwords being extracted, restricted data exposed… And it all started with a little bit of negligence! If you check out the Top 10 web development blunders, you will see that the most prevalent mistakes are: Not filtering input, e.g. accepting “<“ or “>” in input fields even if only a number is expected.  Not validating that input: you expect a birth date? So why accept letters? &...
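
    As a minimal illustration of the "filter and validate your input" advice above, the sketch below rejects anything that is not a plausible birth date before it reaches the application; the function name and limits are hypothetical, not CERN code.

        import re
        from datetime import datetime

        def parse_birth_date(raw: str) -> datetime:
            # Hypothetical field: reject anything that is not strictly YYYY-MM-DD,
            # so letters, '<', '>' and other markup never reach the application.
            if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", raw):
                raise ValueError("birth date must look like YYYY-MM-DD")
            date = datetime.strptime(raw, "%Y-%m-%d")   # also rejects 2024-13-45
            if not 1900 <= date.year <= datetime.now().year:
                raise ValueError("birth date out of plausible range")
            return date

        print(parse_birth_date("1984-06-02"))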

  5. Perceptually-Inspired Computing

    Directory of Open Access Journals (Sweden)

    Ming Lin

    2015-08-01

    Full Text Available Human sensory systems allow individuals to see, hear, touch, and interact with the surrounding physical environment. Understanding human perception and its limits enables us to better exploit the psychophysics of human perceptual systems to design more efficient, adaptive algorithms and develop perceptually-inspired computational models. In this talk, I will survey some recent efforts on perceptually-inspired computing with applications to crowd simulation and multimodal interaction. In particular, I will present data-driven personality modeling based on the results of user studies, example-guided physics-based sound synthesis using auditory perception, as well as perceptually-inspired simplification for multimodal interaction. These perceptually guided principles can be used to accelerate multi-modal interaction and visual computing, thereby creating more natural human-computer interaction and providing more immersive experiences. I will also present their use in interactive applications for entertainment, such as video games, computer animation, and shared social experience. I will conclude by discussing possible future research directions.

  6. The computational challenges of Earth-system science.

    Science.gov (United States)

    O'Neill, Alan; Steenman-Clark, Lois

    2002-06-15

    The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.

  7. Computer security of NPP instrumentation and control systems: categorization

    International Nuclear Information System (INIS)

    Klevtsov, A.L.; Simonov, A.A.; Trubchaninov, S.A.

    2016-01-01

    The paper is devoted to studying the categorization of NPP instrumentation and control (I&C) systems from the point of view of computer security and to consideration of the computer security levels and zones used by the International Atomic Energy Agency (IAEA). The paper also describes the computer security degrees and zones regulated by the International Electrotechnical Commission (IEC) standard. The computer security categorization of the systems used by the U.S. Nuclear Regulatory Commission (NRC) is presented. The experts analyzed the main differences in the I&C system computer security categorizations accepted by the IAEA, IEC and U.S. NRC. Approaches to categorization that are advisable for use in Ukraine during the development of regulations on NPP I&C system computer security are proposed in the paper.

  8. Real time computer system with distributed microprocessors

    International Nuclear Information System (INIS)

    Heger, D.; Steusloff, H.; Syrbe, M.

    1979-01-01

    The usual centralized structure of computer systems, especially of process computer systems, cannot sufficiently exploit the progress of very large-scale integrated semiconductor technology with respect to increasing the reliability and performance and to decreasing the expenses, especially of the external periphery. This and the increasing demands on process control systems have led the authors to generally examine the structure of such systems and to adapt it to the new surroundings. Computer systems with distributed, optical fibre-coupled microprocessors allow very favourable problem-solving with decentrally controlled buslines and functional redundancy with automatic fault diagnosis and reconfiguration. A suitable programming system supports these hardware properties: PEARL for multicomputer systems, dynamic loader, processor and network operating system. The necessary design principles for this are proved mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was established, supported by results of 2 PDV projects (modular operating systems, input/output colour screen system as control panel), for the purpose of testing by applying the system to the control of 28 pit furnaces of a steelworks. (orig.) [de

  9. Application of computational intelligence in emerging power systems

    African Journals Online (AJOL)

    ... in the electrical engineering applications. This paper highlights the application of computational intelligence methods in power system problems. Various types of CI methods, which are widely used in power systems, are also discussed in brief. Keywords: Power systems, computational intelligence, artificial intelligence.

  10. System-level tools and reconfigurable computing for next-generation HWIL systems

    Science.gov (United States)

    Stark, Derek; McAulay, Derek; Cantle, Allan J.; Devlin, Malachy

    2001-08-01

    Previous work has been presented on the creation of computing architectures called DIME, which addressed the particular computing demands of hardware-in-the-loop systems. These demands include low latency, high data rates and interfacing. While it is essential to have a capable platform for handling and processing of the data streams, the tools must also complement this so that a systems engineer is able to construct their final system. The paper will present the work in the area of integration of system level design tools, such as MATLAB and SIMULINK, with a reconfigurable computing platform. This will demonstrate how algorithms can be implemented and simulated in a familiar rapid application development environment before they are automatically transposed for downloading directly to the computing platform. This complements the established control tools, which handle the configuration and control of the processing systems, leading to a tool suite for system development and implementation. As the development tools have evolved, the core-processing platform has also been enhanced. These improved platforms are based on dynamically reconfigurable computing, utilizing FPGA technologies, and parallel processing methods that more than double the performance and data bandwidth capabilities. This offers support for the processing of images in Infrared Scene Projectors with 1024 x 1024 resolutions at 400 Hz frame rates. The processing elements will be using the latest generation of FPGAs, which implies that the presented systems will be rated in terms of Tera (10^12) operations per second.

  11. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  12. Applications of membrane computing in systems and synthetic biology

    CERN Document Server

    Gheorghe, Marian; Pérez-Jiménez, Mario

    2014-01-01

    Membrane Computing was introduced as a computational paradigm in Natural Computing. The models introduced, called Membrane (or P) Systems, provide a coherent platform to describe and study living cells as computational systems. Membrane Systems have been investigated for their computational aspects and employed to model problems in other fields, like: Computer Science, Linguistics, Biology, Economy, Computer Graphics, Robotics, etc. Their inherent parallelism, heterogeneity and intrinsic versatility allow them to model a broad range of processes and phenomena, being also an efficient means to solve and analyze problems in a novel way. Membrane Computing has been used to model biological systems, becoming with time a thorough modeling paradigm comparable, in its modeling and predicting capabilities, to more established models in this area. This book is the result of the need to collect, in an organic way, different facets of this paradigm. The chapters of this book, together with the web pages accompanying th...

  13. In Search of a Better Mousetrap: A Look at Higher Education Ranking Systems

    Science.gov (United States)

    Swail, Watson Scott

    2011-01-01

    College rankings create much talk and discussion in the higher education arena. This love/hate relationship has not necessarily resulted in better rankings, but rather, more rankings. This paper looks at some of the measures and pitfalls of the current rankings systems, and proposes areas for improvement through a better focus on teaching and…

  14. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  15. Cyber Security on Nuclear Power Plant's Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Ick Hyun [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2010-10-15

    Computer systems are used in many different fields of industry. Most of us take great advantage of computer systems. Because of the effectiveness and great performance of computer systems, we have become very dependent on them. But the more dependent we are on a computer system, the greater the risk we face when that system is unavailable, inaccessible or uncontrollable. SCADA (Supervisory Control And Data Acquisition) systems are broadly used for critical infrastructure such as transportation, electricity and water management, and if a SCADA system is vulnerable to cyber attack, the result could be a national disaster. Especially if a nuclear power plant's main control systems are attacked by cyber terrorists, the consequences may be severe: leakage of radioactive material, achieved without the use of physical force, could be the terrorists' main objective. In this paper, different types of cyber attacks are described, and a possible structure of an NPP's computer network system is presented. The paper also discusses possible ways of destroying the NPP's computer system, along with some suggestions for protection against cyber attacks.

  16. Better Place

    DEFF Research Database (Denmark)

    Rask, Morten; Bakke, Nikolas; Lindhøj, Jan

    Better Place is trying to reshape the automotive industry by shifting transportation from a dependency on oil to a reliance on environmentally friendly renewable energy. Better Place is developing an extensive infrastructure system that will utilise overcapacity in the production of wind power...... among others and that will drive the global transportation industry to becoming driven by electric vehicles (EVs). Better Place does this by selling its customers 'mileage' and a car without a battery. The case highlights the internationalisation process of Better Place from an international business...... perspective in order to encourage a discussion and debate about how Better Place can make their grand vision a reality in the future by overcoming the obstacles that historically have been challenging the rise of the EV industry. The case includes a historical background of the EV industry by using Denmark...

  17. Time-Domain Terahertz Computed Axial Tomography NDE System

    Science.gov (United States)

    Zimdars, David

    2012-01-01

    NASA has identified the need for advanced non-destructive evaluation (NDE) methods to characterize aging and durability in aircraft materials to improve the safety of the nation's airline fleet. 3D THz tomography can play a major role in detection and characterization of flaws and degradation in aircraft materials, including Kevlar-based composites and Kevlar and Zylon fabric covers for soft-shell fan containment where aging and durability issues are critical. A prototype computed tomography (CT) time-domain (TD) THz imaging system has been used to generate 3D images of several test objects including a TUFI tile (a thermal protection system tile used on the Space Shuttle and possibly the Orion or similar capsules). This TUFI tile had simulated impact damage that was located and the depth of damage determined. The CT motion control gantry was designed and constructed, and then integrated with a T-Ray 4000 control unit and motion controller to create a complete CT TD-THz imaging system prototype. A data collection software script was developed that takes multiple z-axis slices in sequence and saves the data for batch processing. The data collection software was integrated with the ability to batch process the slice data with the CT TD-THz image reconstruction software. The time required to take a single CT slice was decreased from six minutes to approximately one minute by replacing the 320 ps, 100-Hz waveform acquisition system with an 80 ps, 1,000-Hz waveform acquisition system. The TD-THz computed tomography system was built from pre-existing commercial off-the-shelf subsystems. A CT motion control gantry was constructed from COTS components that can handle larger samples. The motion control gantry allows inspection of sample sizes of up to approximately one cubic foot (~0.03 cubic meters). The system reduced to practice a CT TD-THz system incorporating a COTS 80-ps/1-kHz waveform scanner. The incorporation of this scanner in the system allows acquisition of 3D

  18. Analyzing the security of an existing computer system

    Science.gov (United States)

    Bishop, M.

    1986-01-01

    Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.

  19. [A computer-aided image diagnosis and study system].

    Science.gov (United States)

    Li, Zhangyong; Xie, Zhengxiang

    2004-08-01

    The revolution in information processing, particularly the digitization of medicine, has changed medical study, work and management. This paper reports a method to design a system for computer-aided image diagnosis and study. Combining ideas from graphics-text systems and the picture archiving and communication system (PACS), the system was realized and used for "prescription through computer", "managing images" and "reading images under computer and helping the diagnosis". Typical examples were also constructed in a database and used to teach beginners. The system was developed with visual development tools based on object-oriented programming (OOP) and was put into operation on the Windows 9X platform. The system possesses a friendly man-machine interface.

  20. Computer Based Expert Systems.

    Science.gov (United States)

    Parry, James D.; Ferrara, Joseph M.

    1985-01-01

    Claims knowledge-based expert computer systems can meet needs of rural schools for affordable expert advice and support and will play an important role in the future of rural education. Describes potential applications in prediction, interpretation, diagnosis, remediation, planning, monitoring, and instruction. (NEC)

  1. Do flow principles of operations management apply to computing centres?

    CERN Document Server

    Abaunza, Felipe; Hameri, Ari-Pekka; Niemi, Tapio

    2014-01-01

    By analysing large data-sets on jobs processed in major computing centres, we study how operations management principles apply to these modern-day processing plants. We show that Little’s Law on long-term performance averages holds for computing centres, i.e. work-in-progress equals throughput rate multiplied by process lead time. Contrary to traditional manufacturing principles, the law of variation does not hold for computing centres, as the more variation in job lead times the better the throughput and utilisation of the system. We also show that as the utilisation of the system increases, lead times and work-in-progress increase, which complies with traditional manufacturing. In comparison with current computing centre operations these results imply that better allocation of jobs could increase throughput and utilisation, while fewer computing resources are needed, thus increasing the overall efficiency of the centre. From a theoretical point of view, in a system with close to zero set-up times, as in the c...
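
    A minimal sketch of the Little's Law check described above, applied to a hypothetical job log (the submit and finish timestamps are invented); average work-in-progress is estimated as throughput multiplied by average lead time.

        def littles_law(jobs):
            # jobs: (submit_time, finish_time) pairs in seconds over a long window.
            start = min(s for s, _ in jobs)
            end = max(f for _, f in jobs)
            throughput = len(jobs) / (end - start)               # jobs per second
            avg_lead = sum(f - s for s, f in jobs) / len(jobs)   # seconds per job
            wip = throughput * avg_lead                          # average jobs in the system
            return throughput, avg_lead, wip

        jobs = [(0, 300), (60, 500), (120, 400), (300, 900), (600, 1100)]
        print(littles_law(jobs))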

  2. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying

    2016-01-01

    This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially for immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, and Support Vector Machines (SVM), Danger theory, ...

  3. Mobile cloud computing for computation offloading: Issues and challenges

    Directory of Open Access Journals (Sweden)

    Khadija Akherfi

    2018-01-01

    Full Text Available Despite the evolution and enhancements that mobile devices have experienced, they are still considered limited computing devices. Today, users are becoming more demanding and expect to execute computationally intensive applications on their smartphone devices. Therefore, Mobile Cloud Computing (MCC) integrates mobile computing and Cloud Computing (CC) in order to extend the capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs), such as limited battery lifetime, limited processing capabilities, and limited storage capacity, by offloading the execution and workload to other rich systems with better performance and resources. This paper presents current offloading frameworks and computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores different important parameters based on which the frameworks are implemented, such as offloading method and level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that require further research.
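
    As a rough illustration of the offloading decision such frameworks make, the sketch below compares local execution time against network transfer plus remote execution time; the speeds, bandwidth and round-trip values are assumed for illustration, not taken from the paper.

        def should_offload(cycles, input_bytes,
                           local_speed=1e9,      # device CPU, cycles/s (assumed)
                           remote_speed=1e10,    # cloud CPU, cycles/s (assumed)
                           bandwidth=1e6,        # uplink, bytes/s (assumed)
                           rtt=0.05):            # round-trip time, s (assumed)
            # Offload only if shipping the input and computing remotely beats
            # computing locally on the device.
            local_time = cycles / local_speed
            remote_time = rtt + input_bytes / bandwidth + cycles / remote_speed
            return remote_time < local_time

        print(should_offload(cycles=5e9, input_bytes=50_000))     # heavy compute, small input -> True
        print(should_offload(cycles=1e8, input_bytes=5_000_000))  # light compute, large input -> False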

  4. Applying improved instrumentation and computer control systems

    International Nuclear Information System (INIS)

    Bevilacqua, F.; Myers, J.E.

    1977-01-01

    In-core and out-of-core instrumentation systems for the Cherokee-I reactor are described. The reactor has 61 in-core instrument assemblies. Continuous computer monitoring and processing of data from over 300 fixed detectors will be used to improve the manoeuvring of core power. The plant protection system is a standard package for the Combustion Engineering System 80, consisting of two independent systems, the reactor protection system and the engineering safety features activation system, both of which are designed to meet NRC, ANS and IEEE design criteria or standards. The plant protection system has its own computer which provides plant monitoring, alarming, logging and performance calculations. (U.K.)

  5. Support system for ATLAS distributed computing operations

    CERN Document Server

    Kishimoto, Tomoe; The ATLAS collaboration

    2018-01-01

    The ATLAS distributed computing system has allowed the experiment to successfully meet the challenges of LHC Run 2. In order for distributed computing to operate smoothly and efficiently, several support teams are organized in the ATLAS experiment. The ADCoS (ATLAS Distributed Computing Operation Shifts) is a dedicated group of shifters who follow and report failing jobs, failing data transfers between sites, degradation of ATLAS central computing services, and more. The DAST (Distributed Analysis Support Team) provides user support to resolve issues related to running distributed analysis on the grid. The CRC (Computing Run Coordinator) maintains a global view of the day-to-day operations. In this presentation, the status and operational experience of the support system for ATLAS distributed computing in LHC Run 2 will be reported. This report also includes operations experience from the grid site point of view, and an analysis of the errors that create the biggest waste of wallclock time. The report of oper...

  6. Role of computers in CANDU safety systems

    International Nuclear Information System (INIS)

    Hepburn, G.A.; Gilbert, R.S.; Ichiyen, N.M.

    1985-01-01

    Small digital computers are playing an expanding role in the safety systems of CANDU nuclear generating stations, both as active components in the trip logic, and as monitoring and testing systems. The paper describes three recent applications: (i) A programmable controller was retro-fitted to Bruce 'A' Nuclear Generating Station to handle trip setpoint modification as a function of booster rod insertion. (ii) A centralized monitoring computer to monitor both shutdown systems and the Emergency Coolant Injection system is currently being retro-fitted to Bruce 'A'. (iii) The implementation of process trips on the CANDU 600 design using microcomputers. While not truly a retrofit, this feature was added very late in the design cycle to increase the margin against spurious trips, and has now seen about 4 unit-years of service at three separate sites. Committed future applications of computers in special safety systems are also described. (author)

  7. Category-theoretic models of algebraic computer systems

    Science.gov (United States)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  8. Computer System Analysis for Decommissioning Management of Nuclear Reactor

    International Nuclear Information System (INIS)

    Nurokhim; Sumarbagiono

    2008-01-01

    Nuclear reactor decommissioning is a complex activity that should be planned and implemented carefully. A computer-based system needs to be developed to support nuclear reactor decommissioning. Some computer systems have been studied for the management of nuclear power reactors. The software systems COSMARD and DEXUS, developed in Japan, and IDMT, developed in Italy, were used as models for analysis and discussion. It can be concluded that a computer system for nuclear reactor decommissioning management is quite complex, involving computer codes for radioactive inventory database calculation, calculation modules for the stages of the decommissioning phases, and spatial data system development for virtual reality. (author)

  9. Computational system for geostatistical analysis

    Directory of Open Access Journals (Sweden)

    Vendrusculo Laurimar Gonçalves

    2004-01-01

    Full Text Available Geostatistics identifies the spatial structure of variables representing several phenomena and its use is becoming more intense in agricultural activities. This paper describes a computer program, based on Windows interfaces (Borland Delphi), which performs spatial analyses of datasets through geostatistical tools: classical statistical calculations, average, cross- and directional semivariograms, simple kriging estimates and jackknifing calculations. A published dataset of soil Carbon and Nitrogen was used to validate the system. The system was useful for the geostatistical analysis process, avoiding the manipulation of computational routines in an MS-DOS environment. The Windows development approach allowed the user to model the semivariogram graphically with a high degree of interaction, functionality rarely available in similar programs. Given its characteristics of rapid prototyping and simplicity when incorporating correlated routines, the Delphi environment presents the main advantage of permitting the evolution of this system.
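
    A minimal sketch of the classical (Matheron) empirical semivariogram that such a system computes, written here in Python rather than Delphi; the sample coordinates and values are invented.

        import numpy as np

        def empirical_semivariogram(coords, values, lags, tol):
            # Classical estimator: gamma(h) = mean of squared differences / 2
            # over all point pairs whose separation is within tol of the lag h.
            coords = np.asarray(coords, float)
            values = np.asarray(values, float)
            dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            iu = np.triu_indices(len(values), k=1)            # each pair once
            d = dist[iu]
            sq = (values[:, None] - values[None, :])[iu] ** 2
            gam = []
            for h in lags:
                mask = np.abs(d - h) <= tol
                gam.append(sq[mask].mean() / 2 if mask.any() else float("nan"))
            return gam

        # Invented soil-carbon samples on a small grid (coordinates in metres).
        xy = [(0, 0), (0, 10), (10, 0), (10, 10), (20, 0)]
        z = [1.2, 1.4, 1.1, 1.5, 0.9]
        print(empirical_semivariogram(xy, z, lags=[10.0, 14.1, 20.0], tol=1.0))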

  10. Preventive maintenance for computer systems - concepts & issues ...

    African Journals Online (AJOL)

    Performing preventive maintenance activities for the computer is not optional. The computer is a sensitive and delicate device that needs adequate time and attention to make it work properly. In this paper, the concept and issues on how to prolong the life span of the system, that is, the way to make the system last long and ...

  11. FFTF fission gas monitor computer system

    International Nuclear Information System (INIS)

    Hubbard, J.A.

    1987-01-01

    The Fast Flux Test Facility (FFTF) is a liquid-metal-cooled test reactor located on the Hanford site. A dual computer system has been developed to monitor the reactor cover gas to detect and characterize any fuel or test pin fission gas releases. The system acquires gamma spectra data, identifies isotopes, calculates specific isotope and overall cover gas activity, presents control room alarms and displays, and records and prints data and analysis reports. The fission gas monitor system makes extensive use of commercially available hardware and software, providing a reliable and easily maintained system. The design provides extensive automation of previous manual operations, reducing the need for operator training and minimizing the potential for operator error. The dual nature of the system allows one monitor to be taken out of service for periodic tests or maintenance without interrupting the overall system functions. A built-in calibrated gamma source can be controlled by the computer, allowing the system to provide rapid system self tests and operational performance reports

  12. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  13. Fuzzy systems and soft computing in nuclear engineering

    International Nuclear Information System (INIS)

    Ruan, D.

    2000-01-01

    This book is an organized edited collection of twenty-one contributed chapters covering nuclear engineering applications of fuzzy systems, neural networks, genetic algorithms and other soft computing techniques. All chapters are either updated reviews or original contributions by leading researchers, written exclusively for this volume. The volume highlights the advantages of applying fuzzy systems and soft computing in nuclear engineering, which can be viewed as complementary to traditional methods. As a result, fuzzy sets and soft computing provide a powerful tool for solving intricate problems arising in nuclear engineering. Each chapter of the book is self-contained and also indicates future research directions on this topic of applications of fuzzy systems and soft computing in nuclear engineering. (orig.)

  14. Towards a new PDG computing system

    International Nuclear Information System (INIS)

    Beringer, J; Dahl, O; Zyla, P; Jackson, K; McParland, C; Poon, S; Robertson, D

    2011-01-01

    The computing system that supports the worldwide Particle Data Group (PDG) of over 170 authors in the production of the Review of Particle Physics was designed more than 20 years ago. It has reached its scalability and usability limits and can no longer satisfy the requirements and wishes of PDG collaborators and users alike. We discuss the ongoing effort to modernize the PDG computing system, including requirements, architecture and status of implementation. The new system will provide improved user features and will fully support the PDG collaboration from distributed web-based data entry, work flow management, authoring and refereeing to data verification and production of the web edition and manuscript for the publisher. Cross-linking with other HEP information systems will be greatly improved.

  15. Computer-aided system for cryogenic research facilities

    International Nuclear Information System (INIS)

    Gerasimov, V.P.; Zhelamsky, M.V.; Mozin, I.V.; Repin, S.S.

    1994-01-01

    A computer-aided system is developed for the more effective choice and optimization of the design and manufacturing technologies of the superconductor for the magnet system of the International Thermonuclear Experimental Reactor (ITER), with the aim of ensuring superconductor certification. The computer-aided system provides acquisition, processing, storage and display of data describing the ongoing tests, as well as the detection of any parameter deviations and their analysis. In addition, it generates commands to switch off the equipment in emergency situations. ((orig.))

  16. Improvement of portable computed tomography system for on-field applications

    Science.gov (United States)

    Sukrod, K.; Khoonkamjorn, P.; Tippayakul, C.

    2015-05-01

    In 2010, Thailand Institute of Nuclear Technology (TINT) received a portable Computed Tomography (CT) system from the IAEA as part of the Regional Cooperative Agreement (RCA) program. This portable CT system has been used as the prototype for development of portable CT system intended for industrial applications since then. This paper discusses the improvements in the attempt to utilize the CT system for on-field applications. The system is foreseen to visualize the amount of agarwood in the live tree trunk. The experiments adopting Am-241 as the radiation source were conducted. The Am-241 source was selected since it emits low energy gamma which should better distinguish small density differences of wood types. Test specimens made of timbers with different densities were prepared and used in the experiments. The cross sectional views of the test specimens were obtained from the CT system using different scanning parameters. It is found from the experiments that the results are promising as the picture can clearly differentiate wood types according to their densities. Also, the optimum scanning parameters were determined from the experiments. The results from this work encourage the research team to advance into the next phase which is to experiment with the real tree on the field.

  17. Improvement of Portable Computed Tomography System for On-field Applications

    International Nuclear Information System (INIS)

    Sukrod, K.; Khoonkamjorn, P.; Tippayakul, C.

    2014-01-01

    In 2010, Thailand Institute of Nuclear Technology (TINT) received a portable Computed Tomography (CT) system from IAEA as part of the Regional Cooperative Agreement (RCA) program. This portable CT system has been used as the prototype for development of portable CT system intended for industrial applications since then. This paper discusses the improvements in the attempt to utilize the CT system for on-field applications. The system is foreseen to visualize the amount of agarwood in the live tree trunk. The experiments adopting Am-241 as the radiation source were conducted. The Am-241 source was selected since it emits low energy gamma which should better distinguish small density differences of wood types. Test specimens made of timbers with different densities were prepared and used in the experiments. The cross sectional views of the test specimens were obtained from the CT system using different scanning parameters. It is found from the experiments that the results are promising as the picture can clearly differentiate wood types according to their densities. Also, the optimum scanning parameters were determined from the experiments. The results from this work encourage the research team to advance into the next phase which is to experiment with the real tree on the field.

  18. Improvement of portable computed tomography system for on-field applications

    International Nuclear Information System (INIS)

    Sukrod, K; Khoonkamjorn, P; Tippayakul, C

    2015-01-01

    In 2010, Thailand Institute of Nuclear Technology (TINT) received a portable Computed Tomography (CT) system from the IAEA as part of the Regional Cooperative Agreement (RCA) program. This portable CT system has been used as the prototype for development of portable CT system intended for industrial applications since then. This paper discusses the improvements in the attempt to utilize the CT system for on-field applications. The system is foreseen to visualize the amount of agarwood in the live tree trunk. The experiments adopting Am-241 as the radiation source were conducted. The Am-241 source was selected since it emits low energy gamma which should better distinguish small density differences of wood types. Test specimens made of timbers with different densities were prepared and used in the experiments. The cross sectional views of the test specimens were obtained from the CT system using different scanning parameters. It is found from the experiments that the results are promising as the picture can clearly differentiate wood types according to their densities. Also, the optimum scanning parameters were determined from the experiments. The results from this work encourage the research team to advance into the next phase which is to experiment with the real tree on the field. (paper)

  19. A comparative study of three-dimensional reconstructive images of temporomandibular joint using computed tomogram

    International Nuclear Information System (INIS)

    Lim, Suk Young; Koh, Kwang Joon

    1993-01-01

    The purpose of this study was to clarify the spatial relationships of the temporomandibular joint (TMJ) and to aid in the diagnosis of temporomandibular disorders. For this study, three-dimensional images of normal temporomandibular joints were reconstructed by a computer image analysis system and by a three-dimensional reconstruction program integrated into the computed tomography unit. The results were as follows: 1. Two-dimensional computed tomograms had better resolution than three-dimensional computed tomograms in the evaluation of the bone structure and the disk of the TMJ. 2. Direct sagittal computed tomograms and coronal computed tomograms had better resolution in the evaluation of the TMJ disk. 3. The positional relationship of the disk could be visualized, but the configuration of the disk could not be clearly visualized, on three-dimensional reconstructive CT images. 4. Three-dimensional reconstructive CT images had smoother margins than the three-dimensional images reconstructed by the computer image analysis system, but the images of the latter had better perspective. 5. Three-dimensional reconstructive images showed the spatial relationships of the TMJ articulation better, and the joint spaces were more clearly visualized on dissection images.

  20. Mining Department computer systems

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    Describes the main computer systems currently available, or being developed by the Mining Department of the UK National Coal Board. They are primarily for the use of mining and specialist engineers, but some of them have wider applications, particularly in the research and development and management statistics fields.

  1. Operating System Concepts for Reconfigurable Computing: Review and Survey

    Directory of Open Access Journals (Sweden)

    Marcel Eckert

    2016-01-01

    Full Text Available One of the key future challenges for reconfigurable computing is to enable higher design productivity and an easier way for users who are unfamiliar with the underlying concepts to use reconfigurable computing systems. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system. This article gives a historical review and a summary of ideas and key concepts for including reconfigurable computing aspects in operating systems. The article also presents an overview of published and available operating systems targeting the area of reconfigurable computing. The purpose of this article is to identify and summarize common patterns among those systems that can be seen as a de facto standard. Furthermore, open problems, not covered by these already available systems, are identified.

  2. Computational Intelligence in Information Systems Conference

    CERN Document Server

    Au, Thien-Wan; Omar, Saiful

    2017-01-01

    This book constitutes the Proceedings of the Computational Intelligence in Information Systems conference (CIIS 2016), held in Brunei, November 18–20, 2016. The CIIS conference provides a platform for researchers to exchange the latest ideas and to present new research advances in general areas related to computational intelligence and its applications. The 26 revised full papers presented in this book have been carefully selected from 62 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.

  3. High Voltage Distribution System (HVDS) as a better system compared to Low Voltage Distribution System (LVDS) applied at Medan city power network

    Science.gov (United States)

    Dinzi, R.; Hamonangan, TS; Fahmi, F.

    2018-02-01

    In the current distribution system, a large-capacity distribution transformer supplies loads at remote locations. The use of a 220/380 V network is nowadays less common than a 20 kV network. This results in losses due to the non-optimal placement of the distribution transformer, which neglects load locations, leading to a poor consumer voltage profile and large power losses along the line. This paper discusses how a high voltage distribution system (HVDS) can be a better choice for distribution networks than the currently used low voltage distribution system (LVDS). The proposed change to the new configuration is made by replacing the large-capacity distribution transformer with several smaller-capacity distribution transformers installed in positions closest to the loads. The use of a high voltage distribution system will result in better voltage profiles and lower power losses. From the non-technical side, the annual savings and payback periods of a high voltage distribution system are further advantages.
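
    A back-of-the-envelope sketch of why distributing at 20 kV instead of 380 V cuts line losses for the same delivered power (losses scale roughly with the inverse square of the voltage); the load, power factor, conductor resistance and feeder length below are assumed values, not data from the paper.

        def line_loss(power_w, voltage_v, ohm_per_km, length_km, pf=0.9):
            # Three-phase feeder: I = P / (sqrt(3) * V * pf), loss = 3 * I^2 * R.
            current = power_w / (3 ** 0.5 * voltage_v * pf)
            return 3 * current ** 2 * ohm_per_km * length_km

        # Same 200 kW load over a 2 km feeder, 0.5 ohm/km per conductor (assumed).
        hv = line_loss(200e3, 20_000, 0.5, 2.0)   # roughly 0.1 kW lost at 20 kV
        lv = line_loss(200e3, 380, 0.5, 2.0)      # huge loss at 380 V, which is
                                                  # why LV feeders must stay short
        print(hv, lv, lv / hv)                    # ratio ~ (20000/380)^2 ~ 2770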

  4. Spiral computed tomography assessment of the efficacy of different rotary versus hand retreatment system.

    Science.gov (United States)

    Mittal, Neelam; Jain, Jyoti

    2014-01-01

    The purpose of this study was to evaluate the efficacy of nickel-titanium rotary retreatment systems versus a stainless steel hand retreatment system, with or without solvent, for gutta-percha removal during retreatment. Sixty extracted human mandibular molar teeth with a single canal in the distal root were prepared with ProTaper rotary nickel-titanium files and obturated with gutta-percha and sealer. The teeth were randomly divided into six groups of 10 specimens each. The volumes of filling material before and after retreatment were evaluated in cm³ using the computed tomography (CT) scanner's proprietary software. The maximum amount of filling material was removed during retreatment with the ProTaper retreatment system with solvent, and the minimum with the hand retreatment system with solvent. None of the techniques was 100% effective in removing the filling materials, but the ProTaper retreatment system with solvent was better.

  5. A computer-aided system for automatic extraction of femur neck trabecular bone architecture using isotropic volume construction from clinical hip computed tomography images.

    Science.gov (United States)

    Vivekanandhan, Sapthagirivasan; Subramaniam, Janarthanam; Mariamichael, Anburajan

    2016-10-01

    Hip fractures due to osteoporosis are increasing progressively across the globe. It is also difficult for fractured patients to undergo dual-energy X-ray absorptiometry scans due to the complicated protocol and associated cost. The utilisation of computed tomography for fracture treatment has become common in clinical practice. It would be helpful for orthopaedic clinicians if they could get some additional information related to bone strength for better treatment planning. The aim of our study was to develop an automated system to segment the femoral neck region, extract the cortical and trabecular bone parameters, and assess the bone strength using an isotropic volume construction from clinical computed tomography images. The right hip computed tomography and right femur dual-energy X-ray absorptiometry measurements were taken from 50 south-Indian females aged 30-80 years. Each computed tomography image volume was re-constructed to form isotropic volumes. An automated system incorporating active contour models was used to segment the neck region. A minimum distance boundary method was applied to isolate the cortical and trabecular bone components. The trabecular bone was enhanced and segmented using a trabecular enrichment approach. The cortical and trabecular bone features were extracted and statistically compared with the dual-energy X-ray absorptiometry measured femur neck bone mineral density. The extracted bone measures demonstrated a significant correlation with neck bone mineral density (r > 0.7, p < 0.05). Computed tomography images scanned with a low dose could eventually be helpful in osteoporosis diagnosis and its treatment planning. © IMechE 2016.
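
    A toy sketch of the minimum-distance-to-boundary idea used to separate cortical from trabecular bone, applied to a synthetic cross-section; the intensity threshold, cortical thickness and pixel size are assumed values, not the authors' parameters or pipeline.

        import numpy as np
        from scipy import ndimage

        def split_cortical_trabecular(slice_hu, bone_hu=200, cortical_mm=3.0, pixel_mm=0.5):
            # Threshold the bone, measure each bone pixel's distance to the outer
            # boundary, and call the outer shell cortical and the interior trabecular.
            bone = slice_hu > bone_hu
            dist_mm = ndimage.distance_transform_edt(bone) * pixel_mm
            cortical = bone & (dist_mm <= cortical_mm)
            trabecular = bone & (dist_mm > cortical_mm)
            return cortical, trabecular

        # Synthetic circular cross-section standing in for a femoral neck slice.
        yy, xx = np.mgrid[:100, :100]
        phantom = np.where((xx - 50) ** 2 + (yy - 50) ** 2 < 40 ** 2, 400.0, 0.0)
        cort, trab = split_cortical_trabecular(phantom)
        print(cort.sum(), trab.sum())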

  6. Fail-safe computer-based plant protection systems

    International Nuclear Information System (INIS)

    Keats, A.B.

    1983-01-01

    A fail-safe mode of operation for computers used in nuclear reactor protection systems was first evolved in the UK for application to a sodium-cooled fast reactor. The fail-safe properties of both the hardware and the software were achieved by permanently connecting test signals to some of the multiplexed inputs. This results in an unambiguous data pattern each time the inputs are sequentially scanned by the multiplexer. The 'test inputs' simulate transient excursions beyond defined safe limits. The alternating response of the trip algorithms to the 'out-of-limits' test signals and the normal plant measurements is recognised by hardwired pattern recognition logic external to the computer system. For more general application to plant protection systems, a 'Test Signal Generator' (TSG) is used to compute and generate test signals derived from prevailing operational conditions. The TSG, from its knowledge of the sensitivity of the trip algorithm to each of the input variables, generates a 'test disturbance' which is superimposed upon each variable in turn, to simulate a transient excursion beyond the safe limits. The 'tripped' status yielded by the trip algorithm when using data from a 'disturbed' input forms part of a pattern determined by the order in which the disturbances are applied to the multiplexer inputs. The data pattern formed by the interleaved test disturbances is again recognised by logic external to the protection system's computers. This fail-safe mode of operation of computer-based protection systems provides a powerful defence against common-mode failure. It also reduces the importance of software verification in the licensing procedure. (author)
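
    A minimal sketch (with assumed limits and channel layout, not the UK design itself) of the interleaving idea: each scan cycle superimposes an out-of-limits test value on one input in turn, so a healthy trip algorithm must produce a predictable alternating pattern that external logic can verify.

        SAFE_LIMIT = 100.0     # assumed trip limit
        TEST_VALUE = 150.0     # simulated excursion beyond the safe limit

        def trip_algorithm(value):
            return value > SAFE_LIMIT            # True means 'tripped'

        def scan_cycle(measurements, test_channel):
            # One multiplexer scan with the test disturbance on one channel.
            return [trip_algorithm(TEST_VALUE if ch == test_channel else v)
                    for ch, v in enumerate(measurements)]

        measurements = [42.0, 57.0, 63.0, 48.0]  # normal plant readings (assumed)
        for cycle in range(len(measurements)):
            pattern = scan_cycle(measurements, test_channel=cycle)
            expected = [ch == cycle for ch in range(len(measurements))]
            # External pattern-recognition logic: any deviation means a stuck or
            # failed trip channel and the system fails safe.
            assert pattern == expected, "unexpected trip pattern"
        print("trip algorithm answered every test disturbance correctly")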

  7. Automatic design of optical systems by digital computer

    Science.gov (United States)

    Casad, T. A.; Schmidt, L. F.

    1967-01-01

    Computer program uses geometrical optical techniques and a least squares optimization method employing computing equipment for the automatic design of optical systems. It evaluates changes in various optical parameters, provides comprehensive ray-tracing, and generally determines the acceptability of the optical system characteristics.

  8. Second Asia-Pacific Conference on the Computer Aided System Engineering

    CERN Document Server

    Chaczko, Zenon; Jacak, Witold; Łuba, Tadeusz; Computational Intelligence and Efficiency in Engineering Systems

    2015-01-01

    This carefully edited and reviewed volume addresses the increasingly popular demand for seeking more clarity in the data that we are immersed in. It offers excellent examples of the intelligent ubiquitous computation, as well as recent advances in systems engineering and informatics. The content represents state-of-the-art foundations for researchers in the domain of modern computation, computer science, system engineering and networking, with many examples that are set in industrial application context. The book includes the carefully selected best contributions to APCASE 2014, the 2nd Asia-Pacific Conference on  Computer Aided System Engineering, held February 10-12, 2014 in South Kuta, Bali, Indonesia. The book consists of four main parts that cover data-oriented engineering science research in a wide range of applications: computational models and knowledge discovery; communications networks and cloud computing; computer-based systems; and data-oriented and software-intensive systems.

  9. The ALICE Magnetic System Computation.

    CERN Document Server

    Klempt, W; CERN. Geneva; Swoboda, Detlef

    1995-01-01

    In this note we present the first results from the ALICE magnetic system computation performed in three dimensions with the Vector Fields TOSCA code (version 6.5) [1]. To make the calculations we have used the IBM RISC System 6000-370 and 6000-550 machines combined in the CERN PaRC UNIX cluster.

  10. The Intelligent Safety System: could it introduce complex computing into CANDU shutdown systems

    International Nuclear Information System (INIS)

    Hall, J.A.; Hinds, H.W.; Pensom, C.F.; Barker, C.J.; Jobse, A.H.

    1984-07-01

    The Intelligent Safety System is a computerized shutdown system being developed at the Chalk River Nuclear Laboratories (CRNL) for future CANDU nuclear reactors. It differs from current CANDU shutdown systems in both the algorithm used and the size and complexity of computers required to implement the concept. This paper provides an overview of the project, with emphasis on the computing aspects. Early in the project several needs leading to an introduction of computing complexity were identified, and a computing system that met these needs was conceived. The current work at CRNL centers on building a laboratory demonstration of the Intelligent Safety System, and evaluating the reliability and testability of the concept. Some fundamental problems must still be addressed for the Intelligent Safety System to be acceptable to a CANDU owner and to the regulatory authorities. These are also discussed along with a description of how the Intelligent Safety System might solve these problems

  11. Computer access security code system

    Science.gov (United States)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
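
    A small sketch of the challenge-response idea described in the abstract above, simplified to a single 4 x 4 matrix and ignoring the bookkeeping of previously used subsets; the cell contents are generated at random and the layout is illustrative only.

        import random
        import string

        random.seed(7)
        cells = [''.join(random.sample(string.ascii_uppercase + string.digits, 2))
                 for _ in range(16)]
        matrix = [cells[i * 4:(i + 1) * 4] for i in range(4)]    # 4 x 4 code matrix

        def challenge():
            # Pick two cells sharing neither row nor column (opposite corners of a
            # rectangle); the valid reply is the pair of cells at the other corners.
            coords = [(r, c) for r in range(4) for c in range(4)]
            (r1, c1), (r2, c2) = random.sample(coords, 2)
            while r1 == r2 or c1 == c2:
                (r1, c1), (r2, c2) = random.sample(coords, 2)
            sent = (matrix[r1][c1], matrix[r2][c2])
            expected = {matrix[r1][c2], matrix[r2][c1]}
            return sent, expected

        sent, expected = challenge()
        print("challenge:", sent, "accepted response:", expected)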

  12. Present SLAC accelerator computer control system features

    International Nuclear Information System (INIS)

    Davidson, V.; Johnson, R.

    1981-02-01

    The current functional organization and state of software development of the computer control system of the Stanford Linear Accelerator is described. Included is a discussion of the distribution of functions throughout the system, the local controller features, and currently implemented features of the touch panel portion of the system. The functional use of our triplex of PDP11-34 computers sharing common memory is described. Also included is a description of the use of pseudopanel tables as data tables for closed loop control functions

  13. The Australian Computational Earth Systems Simulator

    Science.gov (United States)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions have funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator or computational virtual earth will provide the research infrastructure to the Australian earth systems science community required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic

  14. Reduction of treatment delivery variances with a computer-controlled treatment delivery system

    International Nuclear Information System (INIS)

    Fraass, B.A.; Lash, K.L.; Matrone, G.M.; Lichter, A.S.

    1997-01-01

    does not depend on fixed therapist staff on particular machines. Results: The overall reported variance rate (all treatments, machines) was < 0.1 % per port or 0.33 % per treatment session. The rate (per machine) depended on automation and plan complexity (see table). Machine M4 (most complex plans and most automation) had the lowest variance rate. The variance rate decreased with increasing automation in spite of increasing plan complexity, while for the manual machines the variance rate increased with complexity. Note that the real variance rates on the two manual machines must be higher than shown here, while (particularly on M4) virtually all random treatment delivery errors were noted by the CCRS system and its QA checks. Treatment delivery times averaged from 14 to 23 minutes per plan, and depended on ports/plan, although this analysis is complicated by other factors. Conclusion: Use of a sophisticated computer-controlled delivery system for routine patient treatments with complex 3-D conformal plans has led to a significant decrease in treatment delivery variances, while at the same time allowing delivery of increasingly complex and sophisticated conformal plans without a significant increase in treatment time. With renewed vigilance for the possibility of systematic problems, it is clear that use of complete and integrated computer-controlled delivery systems can provide significant improvements in treatment delivery, since better plans can be delivered with significantly fewer errors, and without significantly increasing treatment time

  15. TMX-U computer system in evolution

    International Nuclear Information System (INIS)

    Casper, T.A.; Bell, H.; Brown, M.; Gorvad, M.; Jenkins, S.; Meyer, W.; Moller, J.; Perkins, D.

    1986-01-01

    Over the past three years, the total TMX-U diagnostic database has grown to exceed 10 megabytes from over 1300 channels, roughly triple the originally designed size. This acquisition and processing load has resulted in an experiment repetition rate exceeding 10 minutes per shot using the five original Hewlett-Packard HP-1000 computers with their shared disks. Our new diagnostics tend to be multichannel instruments, which, in our environment, can be more easily managed using local computers. For this purpose, we are using HP series 9000 computers for instrument control, data acquisition, and analysis. Fourteen such systems are operational with processed format output exchanged via a shared resource manager. We are presently implementing the necessary hardware and software changes to create a local area network allowing us to combine the data from these systems with our main data archive. The expansion of our diagnostic system using the parallel acquisition and processing concept allows us to increase our database with a minimum of impact on the experimental repetition rate.

  16. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  17. Design of a modular digital computer system, DRL 4. [for meeting future requirements of spaceborne computers

    Science.gov (United States)

    1972-01-01

    The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.

  18. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites, thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such a diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  19. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on microcomputers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  20. Computer-Aided College Algebra: Learning Components that Students Find Beneficial

    Science.gov (United States)

    Aichele, Douglas B.; Francisco, Cynthia; Utley, Juliana; Wescoatt, Benjamin

    2011-01-01

    A mixed-method study was conducted during the Fall 2008 semester to better understand the experiences of students participating in computer-aided instruction of College Algebra using the software MyMathLab. The learning environment included a computer learning system for the majority of the instruction, a support system via focus groups (weekly…

  1. Migration of the Almaraz NPP integrated operation management system to a new computer platform

    International Nuclear Information System (INIS)

    Gonzalez Crego, E.; Martin Lopez-Suevos, C.

    1996-01-01

    In all power plants, it becomes necessary, with the passage of time, to migrate the initial operation management systems to adapt them to current technologies. That is also a good time to improve the inclusion of data in the corporate database and to standardize the system interfaces and operation, whilst maintaining data system operability. This article describes Almaraz's experience in migrating its Integrated Operation Management System to an advanced computer platform based on open systems (UNIX), a communications network (ETHERNET) and a database (ORACLE). To this effect, clear objectives and strict standards were established to facilitate the work. The most noteworthy results obtained are: better quality of information and structure in the corporate database; a standardised user interface in all applications; joint migration of the applications for Maintenance, Components and Spare Parts, Warehouses and Purchases; integration of new applications into the system; and introduction of the navigator, which allows movement around the database using all available applications. (Author)

  2. DDP-516 Computer Graphics System Capabilities

    Science.gov (United States)

    1972-06-01

    This report describes the capabilities of the DDP-516 Computer Graphics System. One objective of this report is to acquaint DOT management and project planners with the system's current capabilities, applications hardware and software. The Appendix i...

  3. Computer-integrated electric-arc melting process control system

    OpenAIRE

    Дёмин, Дмитрий Александрович

    2014-01-01

    Developing common principles for equipping melting-process automation systems with hardware and, on that basis, creating rational variants of computer-integrated electric-arc melting control systems is a relevant task, since it allows a comprehensive approach to modernizing the melting sections of workshops. This approach makes it possible to form the computer-integrated electric-arc furnace control system as part of a queuing system “electric-arc furnace - foundry conveyor” and to consider, when taking ...

  4. 10 CFR 35.457 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Therapy-related computer systems. 35.457 Section 35.457 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL Manual Brachytherapy § 35.457 Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning...

  5. Seismic proving test of process computer systems with a seismic floor isolation system

    International Nuclear Information System (INIS)

    Fujimoto, S.; Niwa, H.; Kondo, H.

    1995-01-01

    The authors have carried out seismic proving tests for process computer systems as a Nuclear Power Engineering Corporation (NUPEC) project sponsored by the Ministry of International Trade and Industry (MITI). This paper presents the seismic test results for evaluating functional capabilities of process computer systems with a seismic floor isolation system. The seismic floor isolation system to isolate the horizontal motion was composed of a floor frame (13 m x 13 m), ball bearing units, and spring-damper units. A series of seismic excitation tests was carried out using a large-scale shaking table of NUPEC. From the test results, the functional capabilities during large earthquakes of computer systems with a seismic floor isolation system were verified

  6. Computer control system of TARN-2

    International Nuclear Information System (INIS)

    Watanabe, S.

    1989-01-01

    The CAMAC interface system is employed in order to regulate the power supply, beam diagnostics and so on. Five CAMAC stations are located in the TARN-2 area and are linked by a serial highway system. The CAMAC serial highway is driven by a serial highway driver, a Kinetic 3992, which is housed in the CAMAC powered crate and is controlled in two ways: one through the standard branch-highway crate controller (Type-A2) attached to the minicomputer, the other through the auxiliary crate controller attached to the microcomputer. The CAMAC serial highway comprises two-way optical cables with a total length of 300 m. Each CAMAC station has both serial and auxiliary crate controllers so as to allow alternative control from the local computer system. The interpreter INSBASIC is used in the main control computer and provides many kinds of 'device control function'. Because a 'device control function' encapsulates the physical operating procedure of a device, only knowledge of the logical operating procedure is required. A touch panel system is employed to handle the complicated control flow without any knowledge of how the device is operated. A rotary encoder system, which is analogous to potentiometer operation, is also available for smooth adjustment of setting parameters. (author)

  7. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption. Others have a low level, and most have no EMR at all. Cloud computing is a new, emerging technology that has been used in other industries with great success. Despite its great features, cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed cloud system applies cloud computing technology to the EHR system to present a comprehensive EHR integrated environment.

  8. Reactor protection system design using micro-computers

    International Nuclear Information System (INIS)

    Fairbrother, D.B.

    1977-01-01

    Reactor Protection Systems for Nuclear Power Plants have traditionally been built using analog hardware. This hardware works quite well for single parameter trip functions; however, optimum protection against DNBR and KW/ft limits requires more complex trip functions than can easily be handled with analog hardware. For this reason, Babcock and Wilcox has introduced a Reactor Protection System, called the RPS-II, that utilizes a micro-computer to handle the more complex trip functions. This paper describes the design of the RPS-II and the operation of the micro-computer within the Reactor Protection System

  9. Reactor protection system design using micro-computers

    International Nuclear Information System (INIS)

    Fairbrother, D.B.

    1976-01-01

    Reactor protection systems for nuclear power plants have traditionally been built using analog hardware. This hardware works quite well for single parameter trip functions; however, optimum protection against DNBR and KW/ft limits requires more complex trip functions than can easily be handled with analog hardware. For this reason, Babcock and Wilcox has introduced a Reactor Protection System, called the RPS-II, that utilizes a micro-computer to handle the more complex trip functions. The paper describes the design of the RPS-II and the operation of the micro-computer within the Reactor Protection System

  10. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    classes of nodes that users access. Login Nodes: Peregrine has four login nodes, each of which has Intel E5 ...; the /mss file system, as well as the /scratch file systems, is mounted on all login nodes. Compute Nodes: Peregrine has 2592

  11. Infrastructure Support for Collaborative Pervasive Computing Systems

    DEFF Research Database (Denmark)

    Vestergaard Mogensen, Martin

    Collaborative Pervasive Computing Systems (CPCS) are currently being deployed to support areas such as clinical work, emergency situations, education, ad-hoc meetings, and other areas involving information sharing and collaboration. These systems allow the users to work together synchronously…, but from different places, by sharing information and coordinating activities. Several researchers have shown the value of such distributed collaborative systems. However, building these systems is by no means a trivial task and introduces a lot of yet unanswered questions. The aforementioned areas… are all characterized by unstable, volatile environments, either due to the underlying components changing or the nomadic work habits of users. A major challenge, for the creators of collaborative pervasive computing systems, is the construction of infrastructures supporting the system. The complexity

  12. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface
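    As a rough sketch of the idea only (not the paper's actual schema or scripts), Ganglia-style XML can be parsed and written to a relational table; SQLite is used here purely as a runnable stand-in for the MySQL database described above, and the tiny XML snippet imitates gmond output.

```python
# Sketch: parse a Ganglia-style XML dump and store samples in a relational table.
# SQLite stands in for MySQL; element and attribute names are assumed.
import sqlite3
import xml.etree.ElementTree as ET

xml_dump = """<GANGLIA_XML>
  <CLUSTER NAME="slac-unix">
    <HOST NAME="node01"><METRIC NAME="load_one" VAL="0.42"/></HOST>
  </CLUSTER>
</GANGLIA_XML>"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metrics (host TEXT, name TEXT, value REAL)")

# walk the XML tree and insert one row per (host, metric) sample
for host in ET.fromstring(xml_dump).iter("HOST"):
    for metric in host.iter("METRIC"):
        db.execute("INSERT INTO metrics VALUES (?, ?, ?)",
                   (host.get("NAME"), metric.get("NAME"), float(metric.get("VAL"))))

print(db.execute("SELECT * FROM metrics").fetchall())
```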

  13. Computer aided analysis of prostate histopathology images to support a refined Gleason grading system

    Science.gov (United States)

    Ren, Jian; Sadimin, Evita; Foran, David J.; Qi, Xin

    2017-02-01

    The Gleason grading system used to render prostate cancer diagnosis has recently been updated to allow more accurate grade stratification and higher prognostic discrimination when compared to the traditional grading system. In spite of progress made in trying to standardize the grading process, there still remains approximately a 30% grading discrepancy between the score rendered by general pathologists and those provided by experts while reviewing needle biopsies for Gleason pattern 3 and 4, which accounts for more than 70% of daily prostate tissue slides at most institutions. We propose a new computational imaging method for Gleason pattern 3 and 4 classification, which better matches the newly established prostate cancer grading system. The computer-aided analysis method includes two phases. First, the boundary of each glandular region is automatically segmented using a deep convolutional neural network. Second, color, shape and texture features are extracted from superpixels corresponding to the outer and inner glandular regions and are subsequently forwarded to a random forest classifier to give a gradient score between 3 and 4 for each delineated glandular region. The F1 score for glandular segmentation is 0.8460 and the classification accuracy is 0.83 +/- 0.03.
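    As a rough, hypothetical illustration of the second phase only (not the authors' code or data), a random forest can be trained on per-gland feature vectors; the feature matrix below is random placeholder data standing in for the color/shape/texture features described above.

```python
# Hypothetical sketch of feature-based Gleason pattern 3 vs 4 classification.
# The CNN segmentation and feature extraction steps are assumed to have run already.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))            # placeholder: 200 glands x 32 features
y = rng.integers(3, 5, size=200)          # placeholder labels: pattern 3 or 4

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # rough cross-validated accuracy
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```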

  14. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  15. The computer aided education and training system for accident management

    International Nuclear Information System (INIS)

    Yoneyama, Mitsuru; Kubota, Ryuji; Fujiwara, Tadashi; Sakuma, Hitoshi

    1999-01-01

    The education and training system for Accident Management was developed by the Japanese BWR group and Hitachi Ltd. The education and training system is composed of two systems: a computer-aided instruction (CAI) education system and an education and training system with computer simulations. Both systems are designed to be executed on personal computers. The outlines of the CAI education system and the education and training system with the simulator are reported below. These systems provide plant operators and technical support center staff with effective education and training for accident management. (author)

  16. Computer automation of a dilution cryogenic system

    International Nuclear Information System (INIS)

    Nogues, C.

    1992-09-01

    This study was carried out in the framework of work on developing new techniques for low temperature detectors for neutrinos and dark matter. The principles of low temperature physics and of helium-4 and dilution cryostats are first reviewed. The cryogenic system used and the techniques for low temperature thermometry and regulation are then described. The computer automation of the dilution cryogenic system involves: numerical measurement of the parameter set (pressure, temperature, flow rate); computer-assisted operation of the cryostat and the pump bench; numerical regulation of pressure and temperature; and full automation of operation sequences, allowing the system to evolve from one state to another (a temperature descent, for example)
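    The abstract does not state how the numerical temperature regulation is implemented; a common choice is a PI/PID loop. The sketch below assumes that, with a mock cryostat object standing in for the real instrument interfaces (the dynamics are crude placeholders, for illustration only).

```python
# Minimal sketch of numerical temperature regulation, assuming a simple PI loop.
class MockCryostat:
    """Stand-in for the real dilution-refrigerator interface (assumed API)."""
    def __init__(self, temperature_k=0.30):
        self.temperature_k = temperature_k
        self.heater_power_w = 0.0

    def read_temperature(self):
        # crude placeholder thermal model: heater warms, cooling pulls toward 0.05 K
        self.temperature_k += 0.01 * self.heater_power_w - 0.05 * (self.temperature_k - 0.05)
        return self.temperature_k

    def set_heater_power(self, power_w):
        self.heater_power_w = max(0.0, power_w)

def regulate(cryo, setpoint_k, steps=500, kp=2.0, ki=0.2, dt=1.0):
    """Simple PI regulation loop (assumed algorithm; not taken from the paper)."""
    integral, reading = 0.0, cryo.read_temperature()
    for _ in range(steps):
        err = setpoint_k - reading
        integral += err * dt
        cryo.set_heater_power(kp * err + ki * integral)
        reading = cryo.read_temperature()
    return reading

print(regulate(MockCryostat(), setpoint_k=0.10))
```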

  17. The Evaluation of Computer Systems

    Directory of Open Access Journals (Sweden)

    Cezar Octavian Mihalcescu

    2007-01-01

    Full Text Available Generally, the evaluation of computer systems is especially interesting at present from several points of view: computer-related, managerial, sociological etc. The reasons for this extended interest are represented by the fact that IT becomes increasingly important for reaching the goals of an organization in general, and the strategic ones in particular. Evaluation means the estimation or determination of value, and is synonymous with measuring the value. Evaluating the economic value of computer systems should be studied at three levels: individually, at a group level and at an organization level.

  18. Language evolution and human-computer interaction

    Science.gov (United States)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  19. Grid Computing BOINC Redesign Mindmap with incentive system (gamification)

    OpenAIRE

    Kitchen, Kris

    2016-01-01

    Grid Computing BOINC Redesign Mindmap with incentive system (gamification). This is a PDF-viewable version of https://figshare.com/articles/Grid_Computing_BOINC_Redesign_Mindmap_with_incentive_system_gamification_/1265350

  20. Compact, open-architecture computed radiography system

    International Nuclear Information System (INIS)

    Huang, H.K.; Lim, A.; Kangarloo, H.; Eldredge, S.; Loloyan, M.; Chuang, K.S.

    1990-01-01

    Computed radiography (CR) was introduced in 1982, and its basic system design has not changed. Current CR systems have certain limitations: spatial resolution and signal-to-noise ratios are lower than those of screen-film systems, they are complicated and expensive to build, and they have a closed architecture. The authors of this paper designed and implemented a simpler, lower-cost, compact, open-architecture CR system to overcome some of these limitations. The open-architecture system is a manual-load, single-plate reader that can fit on a desk top. Phosphor images are stored on a local disk and can be sent to any other computer through standard interfaces. Any manufacturer's plate can be read, with a scanning time of 90 seconds for a 35 x 43-cm plate. The standard pixel size is 174 μm and can be adjusted for higher spatial resolution. The data resolution is 12 bits/pixel over an x-ray exposure range of 0.01-100 mR
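    The abstract quotes a 12-bit range over 0.01-100 mR but not the transfer curve; assuming the logarithmic plate response commonly used in CR readers (an assumption, not stated in the paper), a mapping from exposure to digital code might look like this illustrative sketch.

```python
# Illustrative only: map exposures in the quoted 0.01-100 mR range onto 12-bit
# codes, assuming a logarithmic response so each decade spans an equal digital range.
import math

E_MIN, E_MAX, BITS = 0.01, 100.0, 12

def exposure_to_code(exposure_mr):
    exposure_mr = min(max(exposure_mr, E_MIN), E_MAX)          # clip to range
    frac = math.log10(exposure_mr / E_MIN) / math.log10(E_MAX / E_MIN)
    return round(frac * (2 ** BITS - 1))

print([exposure_to_code(e) for e in (0.01, 1.0, 100.0)])   # -> [0, ~2048, 4095]
```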

  1. DIII-D tokamak control and neutral beam computer system upgrades

    International Nuclear Information System (INIS)

    Penaflor, B.G.; McHarg, B.B.; Piglowski, D.A.; Pham, D.; Phillips, J.C.

    2004-01-01

    This paper covers recent computer system upgrades made to the DIII-D tokamak control and neutral beam computer systems. The systems responsible for monitoring and controlling the DIII-D tokamak and injecting neutral beam power have recently come online with new computing hardware and software. The new hardware and software have provided a number of significant improvements over the previous Modcomp AEG VME and accessware based systems. These improvements include the incorporation of faster, less expensive, and more readily available computing hardware, which has provided performance increases of up to a factor of 20 over the prior systems. A more modern graphical user interface with advanced plotting capabilities has improved feedback to users on the operating status of the tokamak and neutral beam systems. The elimination of aging and non-supportable hardware and software has increased overall maintainability. The distinguishing characteristics of the new system include: (1) a PC based computer platform running the Redhat version of the Linux operating system; (2) a custom PCI CAMAC software driver developed by General Atomics for the Kinetic Systems 2115 serial highway card; and (3) a custom developed supervisory control and data acquisition (SCADA) software package based on Kylix, an inexpensive interactive development environment (IDE) tool from Borland Corporation. This paper provides specific details of the upgraded computer systems

  2. An integrated compact airborne multispectral imaging system using embedded computer

    Science.gov (United States)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expandability, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the operation of the filter wheel and stabilized platform, and the acquisition of image and POS data, and it stores the images and data. Peripheral devices can be connected through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expandability. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
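    As a schematic illustration only, one acquisition cycle of such a system (rotate the filter wheel, grab a frame, tag it with POS data) could be organised as below; the driver classes are hypothetical stand-ins, not the paper's software.

```python
# Schematic acquisition cycle; FilterWheel, Camera and Pos are hypothetical drivers.
class FilterWheel:
    def move_to(self, position): ...            # placeholder: rotate wheel

class Camera:
    def grab_frame(self): return b""            # placeholder image bytes

class Pos:
    def read(self): return {"lat": 0.0, "lon": 0.0, "alt": 0.0}

def acquire_cycle(wheel, camera, pos, n_filters=8):
    """Capture one frame per filter and tag it with position/orientation data."""
    records = []
    for position in range(n_filters):
        wheel.move_to(position)
        records.append({"filter": position,
                        "image": camera.grab_frame(),
                        "pos": pos.read()})
    return records

frames = acquire_cycle(FilterWheel(), Camera(), Pos())
print(len(frames), "frames acquired")
```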

  3. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    International Nuclear Information System (INIS)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development and test requirements for the Computer System, WBS 1.5.1 which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in ICCS (WBS 1.5) which is the document directly above

  4. An assessment of future computer system needs for large-scale computation

    Science.gov (United States)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  5. Operating System Concepts for Reconfigurable Computing: Review and Survey

    OpenAIRE

    Marcel Eckert; Dominik Meyer; Jan Haase; Bernd Klauer

    2016-01-01

    One of the key future challenges for reconfigurable computing is to enable higher design productivity and an easier way to use reconfigurable computing systems for users who are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system. This article gives a historical review and a summary of ideas and key concepts for including reconfigurable computing aspects in operating systems. The arti...

  6. Architecture, systems research and computational sciences

    CERN Document Server

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics”, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  7. ATLAS Distributed Computing in LHC Run2

    International Nuclear Information System (INIS)

    Campana, Simone

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. A flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with the remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defined to better manage the lifecycle of the data. In this note, an overview of an operational experience of the new system and its evolution is presented. (paper)

  8. Improving human reliability through better nuclear power plant system design. Progress report

    International Nuclear Information System (INIS)

    Golay, M.W.

    1995-01-01

    The project on "Development of a Theory of the Dependence of Human Reliability upon System Designs as a Means of Improving Nuclear Power Plant Performance" has been undertaken in order to address the important problem of human error in advanced nuclear power plant designs. Most of the creativity in formulating such concepts has focused upon improving the mechanical reliability of safety related plant systems. However, the lack of a mature theory has retarded similar progress in reducing the likely frequencies of human errors. The main design mechanism used to address this class of concerns has been to reduce or eliminate the human role in plant operations and accident response. The plan of work being pursued in this project is to perform a set of experiments involving human subjects who are required to operate, diagnose and respond to changes in computer-simulated systems, relevant to those encountered in nuclear power plants. In the tests the systems are made to differ in complexity in a systematic manner. The computer program used to present the problems to be solved also records the response of the operator as it unfolds. Ultimately this computer is also to be used in compiling the results of the project. The work of this project is focused upon nuclear power plant applications. However, the pervasiveness of human errors in using all sorts of electromechanical machines gives it a much greater potential importance. Because of this we are attempting to pursue our work in a fashion permitting broad generalizations

  9. Towards a Global Monitoring System for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. A.T. [Fermilab; Sciaba, Andrea [CERN

    2012-01-01

    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalise and streamline existing components and drive future software development. This contribution gives a complete overview of the CMS monitoring system and a description of all the recent activities that have been started with the goal of providing a more integrated, modern and functional global monitoring system for computing operations.

  10. Computer-generated holographic near-eye display system based on LCoS phase only modulator

    Science.gov (United States)

    Sun, Peng; Chang, Shengqian; Zhang, Siman; Xie, Ting; Li, Huaye; Liu, Siqi; Wang, Chang; Tao, Xiao; Zheng, Zhenrong

    2017-09-01

    Augmented reality (AR) technology has been applied in various areas, such as large-scale manufacturing, national defense, healthcare, movies and mass media, and so on. An important way to realize AR display is to use a computer-generated hologram (CGH), which suffers from the drawbacks of low image quality and heavy computation. Meanwhile, the diffraction of Liquid Crystal on Silicon (LCoS) has a negative effect on image quality. In this paper, a modified algorithm based on the traditional Gerchberg-Saxton (GS) algorithm is proposed to improve the image quality, and a new method of building the experimental system is used to broaden the field of view (FOV). In the experiment, undesired zero-order diffracted light was eliminated and a high-definition 2D image was acquired with the FOV broadened to 36.1 degrees. We have also done some pilot research on 3D reconstruction with a tomography algorithm based on Fresnel diffraction. With the same experimental system, the experimental results demonstrate the feasibility of 3D reconstruction. These modifications are effective and efficient, and may provide a better solution for AR realization.
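    The abstract names the Gerchberg-Saxton (GS) algorithm as the starting point but does not give the modification; the sketch below is only the classic GS phase-retrieval loop, with a single FFT assumed as the propagation model, for orientation on the baseline the paper improves.

```python
# Classic Gerchberg-Saxton loop for a phase-only hologram (baseline, not the
# paper's modified algorithm; single-FFT propagation is an assumption here).
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Return a phase pattern whose far field approximates the target amplitude."""
    phase = np.random.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        field_slm = np.exp(1j * phase)                      # phase-only constraint
        field_img = np.fft.fft2(field_slm)                  # propagate to image plane
        field_img = target_amplitude * np.exp(1j * np.angle(field_img))  # impose target
        phase = np.angle(np.fft.ifft2(field_img))           # back-propagate, keep phase
    return phase

target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0            # simple square test target
hologram_phase = gerchberg_saxton(target)
```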

  11. Information Technology Service Management with Cloud Computing Approach to Improve Administration System and Online Learning Performance

    Directory of Open Access Journals (Sweden)

    Wilianto Wilianto

    2015-10-01

    Full Text Available This work discusses the development of information technology service management using a cloud computing approach to improve the performance of the administration system and online learning at STMIK IBBI Medan, Indonesia. The network topology is modeled and simulated for system administration and online learning. The same network topology is developed in cloud computing using the Amazon AWS architecture. The model is designed and modeled using Riverbed Academic Edition Modeler to obtain values of the parameters: delay, load, CPU utilization, and throughput. The simulation results are as follows. For network topology 1, without cloud computing, the average delay is 54 ms, load 110 000 bits/s, CPU utilization 1.1%, and throughput 440 bits/s. With cloud computing, the average delay is 45 ms, load 2 800 bits/s, CPU utilization 0.03%, and throughput 540 bits/s. For network topology 2, without cloud computing, the average delay is 39 ms, load 3 500 bits/s, CPU utilization 0.02%, and database server throughput 1 400 bits/s. With cloud computing, the average delay is 26 ms, load 5 400 bits/s, CPU utilization 0.0001% (email server), 0.001% (FTP server) and 0.0002% (HTTP server), and throughput 85 bits/s (email server), 100 bits/s (FTP server) and 95 bits/s (HTTP server). Thus the delay, the load, and the CPU utilization decrease, while the throughput increases. Information technology service management with a cloud computing approach therefore gives better performance.

  12. Operator support system using computational intelligence techniques

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez

    2015-01-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model, which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system will show the information obtained by different CI techniques in order to help operators to take decision in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)

  13. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model, which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system will show the information obtained by different CI techniques in order to help operators to take decision in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)

  14. [Three dimensional CT reconstruction system on a personal computer].

    Science.gov (United States)

    Watanabe, E; Ide, T; Teramoto, A; Mayanagi, Y

    1991-03-01

    A new computer system to produce three-dimensional surface images from CT scans has been developed. Although many similar systems have already been developed and reported, they are too expensive to set up in routine clinical services because most of them are based on high-power minicomputer systems. Based on the view that a practical 3D-CT system should be usable in daily clinical activities with only a personal computer, we have ported the 3D program to a personal computer running MS-DOS (16-bit, 12 MHz). We added to the program a routine that simulates surgical dissection on the surface image. The time required to produce the surface image ranges from 40 to 90 seconds. To facilitate the simulation, we connected the 3D system with the neuronavigator. The navigator gives the position of the surgical simulation when the surgeon places the navigator tip on the patient's head, thus simulating the surgical excision before the real dissection.

  15. A computer control system for a research reactor

    International Nuclear Information System (INIS)

    Crawford, K.C.; Sandquist, G.M.

    1987-01-01

    Most reactor applications until now have not required computer control of core output. Commercial reactors are generally operated at a constant power output to provide baseline power. However, if commercial reactor cores are to become load-following over a wide range, then centralized digital computer control is required to make the entire facility respond as a single unit to continual changes in power demand. Navy and research reactors are much smaller and simpler and are operated at constant power levels as required, without concern for the number of operators required to operate the facility. For navy reactors, centralized digital computer control may provide space savings and reduced personnel requirements. Computer control offers research reactors the versatility to change a system efficiently in order to develop new ideas. The operation of any reactor facility would be enhanced by a controller that does not panic and continually monitors all facility parameters. Eventually, very sophisticated computer control systems may be developed which will sense operational problems, diagnose the problem, and, depending on its severity, immediately activate safety systems or consult with operators before taking action

  16. Embedded computer systems for control applications in EBR-II

    International Nuclear Information System (INIS)

    Carlson, R.B.; Start, S.E.

    1993-01-01

    The purpose of this paper is to describe the embedded computer systems approach taken at Experimental Breeder Reactor II (EBR-II) for non-safety related systems. The hardware and software structures for typical embedded systems are presented. The embedded systems development process is described. Three examples are given which illustrate typical embedded computer applications in EBR-II

  17. Functional safeguards for computers for protection systems for Savannah River reactors

    International Nuclear Information System (INIS)

    Kritz, W.R.

    1977-06-01

    Reactors at the Savannah River Plant have recently been equipped with a "safety computer" system. This system utilizes dual digital computers in a primary protection system that monitors individual fuel assembly coolant flow and temperature. The design basis for the SRP safety computer systems allowed for eventual failure of any input sensor or any computer component. These systems are routinely used by reactor operators with a minimum of training in computer technology. The hardware configuration and software design therefore contain safeguards so that both hardware and human failures do not cause significant loss of reactor protection. The performance of the system to date is described

  18. CAESY - COMPUTER AIDED ENGINEERING SYSTEM

    Science.gov (United States)

    Wette, M. R.

    1994-01-01

    Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools including the desire to make complex system modeling design and analysis easier and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, adopt new methodologies in control, and integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user defined ones. Support for matrix calculations is provided in the same manner as MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in C-language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp from research.att.com. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  19. An E-learning System based on Affective Computing

    Science.gov (United States)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning as a learning system has become very popular. But current e-learning systems cannot instruct students effectively, since they do not consider the emotional state in the context of instruction. The emergence of the theory of "affective computing" can address this problem; it allows the computer's intelligence to be more than purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on affective computing. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with the teaching style chosen according to the student's personality traits. A "man-to-man" learning environment is built to simulate the traditional classroom's pedagogy in the system.
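    The abstract does not specify which dimensions the model uses; assuming a common valence-arousal formulation (an assumption, not the paper's model), a toy mapping from the two dimensions to coarse emotion labels could look like this.

```python
# Toy illustration of a dimensional emotion model (valence/arousal quadrants);
# the labels and thresholds are invented for illustration only.
def classify_emotion(valence, arousal):
    """valence and arousal in [-1, 1]; returns a coarse quadrant label."""
    if valence >= 0 and arousal >= 0:
        return "engaged/excited"
    if valence >= 0 and arousal < 0:
        return "calm/satisfied"
    if valence < 0 and arousal >= 0:
        return "frustrated/anxious"
    return "bored/fatigued"

print(classify_emotion(-0.4, 0.7))   # e.g. a struggling, agitated learner
```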

  20. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  1. Computer network for electric power control systems. Chubu denryoku (kabu) denryoku keito seigyoyo computer network

    Energy Technology Data Exchange (ETDEWEB)

    Tsuneizumi, T. (Chubu Electric Power Co. Inc., Nagoya (Japan)); Shimomura, S.; Miyamura, N. (Fuji Electric Co. Ltd., Tokyo (Japan))

    1992-06-03

    A computer network for electric power control systems was developed that applies the Open Systems Interconnection (OSI), an international standard for communication protocols. In structuring the OSI network, a direct session layer is accessed from the operation functions when high-speed, small-capacity information is transmitted. File transfer, access and control, which has a function for collectively transferring large-capacity data, is applied when low-speed, large-capacity information is transmitted. A verification test of the real-time computer network (RCN) implementation rules was conducted according to a verification model using a minicomputer, and results satisfying practical performance requirements were obtained. For the application interface, kernel, health-check and two-route transmission functions were provided as connection control functions, as were a transmission verification function and a late-arrival discarding function. For the system implementation, a dualized communication server (CS) structure was adopted. The hardware structure may either contain the CS function within a host computer or install it on a separate system. 5 figs., 6 tabs.

  2. Computer-controlled radiation monitoring system

    International Nuclear Information System (INIS)

    Homann, S.G.

    1994-01-01

    A computer-controlled radiation monitoring system was designed and installed at the Lawrence Livermore National Laboratory's Multiuser Tandem Laboratory (10 MV tandem accelerator from High Voltage Engineering Corporation). The system continuously monitors the photon and neutron radiation environment associated with the facility and automatically suspends accelerator operation if preset radiation levels are exceeded. The system has provided reliable real-time radiation monitoring over the past five years, and has been a valuable tool for maintaining personnel exposure as low as reasonably achievable
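    As a toy illustration of the interlock logic described (compare each reading against a preset limit and suspend operation when exceeded); the limits and sample readings below are invented, not the facility's values.

```python
# Toy sketch of the radiation interlock check; limits and readings are invented.
PHOTON_LIMIT_MR_PER_H = 2.0     # assumed preset limit, for illustration
NEUTRON_LIMIT_MR_PER_H = 1.0

def check_radiation(photon_reading, neutron_reading):
    """Return True if operation may continue, False if it must be suspended."""
    return (photon_reading <= PHOTON_LIMIT_MR_PER_H
            and neutron_reading <= NEUTRON_LIMIT_MR_PER_H)

# usage: poll detectors and suspend on the first out-of-limit sample
for photon, neutron in [(0.3, 0.1), (0.4, 0.2), (2.5, 0.2)]:
    if not check_radiation(photon, neutron):
        print("limit exceeded - suspending accelerator operation")
        break
```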

  3. Computational intelligence for decision support in cyber-physical systems

    CERN Document Server

    Ali, A; Riaz, Zahid

    2014-01-01

    This book is dedicated to applied computational intelligence and soft computing techniques with special reference to decision support in Cyber Physical Systems (CPS), where the physical as well as the communication segment of the networked entities interact with each other. The joint dynamics of such systems result in a complex combination of computers, software, networks and physical processes all combined to establish a process flow at system level. This volume provides the audience with an in-depth vision about how to ensure dependability, safety, security and efficiency in real time by making use of computational intelligence in various CPS applications ranging from the nano-world to large scale wide area systems of systems. Key application areas include healthcare, transportation, energy, process control and robotics where intelligent decision support has key significance in establishing dynamic, ever-changing and high confidence future technologies. A recommended text for graduate students and researche...

  4. Handbook for the Computer Security Certification of Trusted Systems

    National Research Council Canada - National Science Library

    Weissman, Clark

    1995-01-01

    Penetration testing is required for National Computer Security Center (NCSC) security evaluations of systems and products for the B2, B3, and A1 class ratings of the Trusted Computer System Evaluation Criteria (TCSEC...

  5. Workstation computer systems for in-core fuel management

    International Nuclear Information System (INIS)

    Ciccone, L.; Casadei, A.L.

    1992-01-01

    The advancement of powerful engineering workstations has made it possible to have thermal-hydraulics and accident analysis computer programs operating efficiently with a significant performance/cost ratio compared to large mainframe computers. Today, nuclear utilities are acquiring independent engineering analysis capability for fuel management and safety analyses. Computer systems currently available to utility organizations vary widely, thus requiring that this software be operational on a number of computer platforms. Recognizing these trends, Westinghouse adopted a software development life cycle process for the software development activities which strictly controls the development, testing and qualification of design computer codes. In addition, software standards to ensure maximum portability were developed and implemented, including adherence to FORTRAN 77, and use of uniform system interface and auxiliary routines. A comprehensive test matrix was developed for each computer program to ensure that evolution of code versions preserves the licensing basis. In addition, the results of such test matrices establish the Quality Assurance basis and consistency for the same software operating on different computer platforms. (author). 4 figs

  6. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John

    2010-01-01

    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  7. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  8. Computer-aided dispatching system design specification

    International Nuclear Information System (INIS)

    Briggs, M.G.

    1997-01-01

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP)

  9. Ubiquitous computing to support co-located clinical teams: using the semiotics of physical objects in system design.

    Science.gov (United States)

    Bang, Magnus; Timpka, Toomas

    2007-06-01

    Co-located teams often use material objects to communicate messages in collaboration. Modern desktop computing systems with abstract graphical user interface (GUIs) fail to support this material dimension of inter-personal communication. The aim of this study is to investigate how tangible user interfaces can be used in computer systems to better support collaborative routines among co-located clinical teams. The semiotics of physical objects used in team collaboration was analyzed from data collected during 1 month of observations at an emergency room. The resulting set of communication patterns was used as a framework when designing an experimental system. Following the principles of augmented reality, physical objects were mapped into a physical user interface with the goal of maintaining the symbolic value of those objects. NOSTOS is an experimental ubiquitous computing environment that takes advantage of interaction devices integrated into the traditional clinical environment, including digital pens, walk-up displays, and a digital desk. The design uses familiar workplace tools to function as user interfaces to the computer in order to exploit established cognitive and collaborative routines. Paper-based tangible user interfaces and digital desks are promising technologies for co-located clinical teams. A key issue that needs to be solved before employing such solutions in practice is associated with limited feedback from the passive paper interfaces.

  10. Computer-Aided dispatching system design specification

    International Nuclear Information System (INIS)

    Briggs, M.G.

    1996-01-01

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a Commercial-Off the-Shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting system within the Hanford Facility. This system also provided expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center and provides back-up capabilities for the Plutonium Processing Facility

  11. Modeling ground-based timber harvesting systems using computer simulation

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux

    2001-01-01

    Modeling ground-based timber harvesting systems with an object-oriented methodology was investigated. Object-oriented modeling and design promote a better understanding of requirements, cleaner designs, and better maintainability of the harvesting simulation system. The model developed simulates chainsaw felling, drive-to-tree feller-buncher, swing-to-tree single-grip...
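    As a minimal, hypothetical illustration of the object-oriented style described above (invented class names and cycle times, not the study's model), each felling method can be represented by a class with its own timing behaviour.

```python
# Minimal OO sketch in the spirit of the harvesting simulation; figures are invented.
class FellingMachine:
    """Base class: a felling method with a fixed average cycle time per tree."""
    cycle_time_min = 0.0

    def fell(self, n_trees):
        return n_trees * self.cycle_time_min       # total felling time in minutes

class ChainsawFeller(FellingMachine):
    cycle_time_min = 2.5     # assumed average minutes per tree

class FellerBuncher(FellingMachine):
    cycle_time_min = 0.8     # assumed average minutes per tree

for machine in (ChainsawFeller(), FellerBuncher()):
    print(type(machine).__name__, machine.fell(n_trees=120), "minutes")
```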

  12. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1988-01-01

    This paper gives a collective summary of the studies performed at the JRC, ISPRA, on the use of computer codes for complex systems analysis. The computer codes dealt with are: the CAFTS-SALP software package, FRANTIC, FTAP, the computer code package RALLY, and the BOUNDS codes. Two reference study cases were executed with each code. The results obtained from the logic/probabilistic analysis, as well as the computation times, are compared

  13. The models of the life cycle of a computer system

    Directory of Open Access Journals (Sweden)

    Sorina-Carmen Luca

    2006-01-01

    Full Text Available The paper presents a comparative study of the patterns of the life cycle of a computer system. The advantages of each pattern are analyzed, and the graphic schemes that point out each stage and step in the evolution of a computer system are presented. Finally, the classifications of the methods of designing computer systems are discussed.

  14. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while also reducing the cost of doing business. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible, reliable and cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier, with no up-front charges but pay-per-use flexible payme...

  15. RXY/DRXY-a postprocessing graphical system for scientific computation

    International Nuclear Information System (INIS)

    Jin Qijie

    1990-01-01

    Scientific computing requires computer graphics functions for visualization. The development objectives and functions of a postprocessing graphical system for scientific computation are described, and its implementation is briefly outlined.

  16. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical interval for computer replacement when the computing demand and the cost and performance of computers are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented by only a central processing unit (CPU), and all the computing demand must be processed on the present computer until the next replacement. In model 2, by contrast, excess demand is admitted and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all the demand. Model 4 is the same as model 3, except that excess demand may be processed at another center. (1) The computing demand at JAERI, (2) the conformity of Grosch's law for recent computers, and (3) the replacement cost of computer systems are also described. (author)
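
    A rough illustrative sketch of the kind of calculation implied by model 1 is given below: it picks the replacement interval that minimizes average annual cost over a planning horizon, with monotonically growing demand and a falling price per unit of computing capacity. All function names, growth rates and cost coefficients are hypothetical assumptions for this example only, not the paper's actual formulation.

      # Hypothetical illustration: find the replacement interval (in years) that
      # minimizes average annual cost for a computer sized to a growing demand.

      def annual_demand(year, base=1.0, growth=0.15):
          """Computing demand, assumed to grow monotonically every year."""
          return base * (1.0 + growth) ** year

      def machine_price(year, base_unit_price=100.0, tech_decline=0.10):
          """Price of a machine sized for the demand of its purchase year;
          the price per unit of capacity is assumed to fall each year."""
          unit_price = base_unit_price * (1.0 - tech_decline) ** year
          return annual_demand(year) * unit_price

      def average_annual_cost(interval, horizon=20, op_cost_rate=0.2):
          """Purchase plus operating cost per year when the computer is
          replaced every 'interval' years over a fixed planning horizon."""
          total = 0.0
          for buy_year in range(0, horizon, interval):
              price = machine_price(buy_year)
              years_used = min(interval, horizon - buy_year)
              total += price + op_cost_rate * price * years_used
          return total / horizon

      costs = {t: average_annual_cost(t) for t in range(1, 11)}
      print("optimal replacement interval (years):", min(costs, key=costs.get))

    With these illustrative figures, short intervals buy cheaper capacity more often while long intervals amortize each purchase over more years; the optimum balances the two effects.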

  17. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  18. Computer modeling of properties of complex molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Kulkova, E.Yu. [Moscow State University of Technology “STANKIN”, Vadkovsky per., 1, Moscow 101472 (Russian Federation); Khrenova, M.G.; Polyakov, I.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); Nemukhin, A.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); N.M. Emanuel Institute of Biochemical Physics, Russian Academy of Sciences, Kosygina 4, Moscow 119334 (Russian Federation)

    2015-03-10

    Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that assume treatment of a part of the system by quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting the structures and spectra of such complex molecular aggregates.

  19. Computational Approach to Profit Optimization of a Loss-Queueing System

    Directory of Open Access Journals (Sweden)

    Dinesh Kumar Yadav

    2010-01-01

    Full Text Available The objective of the paper is the profit optimization of a loss queueing system with finite capacity. Here, we define and compute the total expected cost (TEC), the total expected revenue (TER) and, consequently, the total optimal profit (TOP) of the system. In order to compute the total optimal profit of the system, a computing algorithm has been developed and a fast-converging Newton-Raphson (N-R) method has been employed, which requires less computing time and memory space than other methods. Sensitivity analysis and graphical observations add significant value to this model.
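
    A minimal sketch of this type of computation is given below, assuming an M/M/1/N loss queue and purely illustrative revenue and cost coefficients; the quantities TEC, TER and TOP follow the abstract, but the formulas and parameter values are assumptions rather than the paper's exact model, and a direct search stands in for the Newton-Raphson iteration.

      # Illustrative profit computation for an M/M/1/N loss queueing system.

      def state_probs(lam, mu, N):
          """Stationary probabilities p_0..p_N for an M/M/1 queue of capacity N."""
          rho = lam / mu
          if abs(rho - 1.0) < 1e-12:
              return [1.0 / (N + 1)] * (N + 1)
          p0 = (1 - rho) / (1 - rho ** (N + 1))
          return [p0 * rho ** n for n in range(N + 1)]

      def total_profit(lam, mu, N, revenue=10.0, hold_cost=1.0, service_cost=2.0):
          """TOP = TER - TEC for the given arrival rate, service rate and capacity."""
          probs = state_probs(lam, mu, N)
          throughput = lam * (1 - probs[N])          # arrivals that are not lost
          mean_in_system = sum(n * p for n, p in enumerate(probs))
          ter = revenue * throughput                                # total expected revenue
          tec = hold_cost * mean_in_system + service_cost * mu      # total expected cost
          return ter - tec

      lam, N = 5.0, 8
      candidates = [mu / 10.0 for mu in range(10, 201)]             # crude search over mu
      best_mu = max(candidates, key=lambda mu: total_profit(lam, mu, N))
      print("best service rate:", best_mu, "TOP:", round(total_profit(lam, best_mu, N), 3))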

  20. Specialized computer system to diagnose critical lined equipment

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Morozova, O. A.; Nedelkin, A. A.

    2018-05-01

    The paper presents data on the problem of diagnosing the lining condition at iron and steel works. The authors propose and describe the structure of a specialized computer system to diagnose critical lined equipment. Comparative results of diagnosing the lining condition with the basic system and with the proposed specialized computer system are presented. To automate the evaluation of the lining condition and to support decision making regarding the operation mode of the lined equipment, specialized software has been developed.

  1. Expansion of the TFTR neutral beam computer system

    International Nuclear Information System (INIS)

    McEnerney, J.; Chu, J.; Davis, S.; Fitzwater, J.; Fleming, G.; Funk, P.; Hirsch, J.; Lagin, L.; Locasak, V.; Randerson, L.; Schechtman, N.; Silber, K.; Skelly, G.; Stark, W.

    1992-01-01

    Previous TFTR Neutral Beam computing support was based primarily on an Encore Concept 32/8750 computer within the TFTR Central Instrumentation, Control and Data Acquisition System (CICADA). The resources of this machine were 90% utilized during a 2.5 minute duty cycle. Both interactive and automatic processes were supported, with interactive response suffering at lower priority. Further, there were additional computing requirements and no cost-effective path for expansion within the Encore framework. Two elements provided a solution to these problems: improved price/performance for computing and a high-speed bus link to the SELBUS. The purchase of a Sun SPARCstation and a VME/SELBUS bus link allowed offloading the automatic processing to the workstation. This paper describes the details of the system, including the performance of the bus link and Sun SPARCstation, raw data acquisition and data server functions, application software conversion issues, and experiences with the UNIX operating system in the mixed-platform environment.

  2. Computer system organization the B5700/B6700 series

    CERN Document Server

    Organick, Elliott I

    1973-01-01

    Computer System Organization: The B5700/B6700 Series focuses on the organization of the B5700/B6700 Series developed by Burroughs Corp. More specifically, it examines how computer systems can (or should) be organized to support, and hence make more efficient, the running of computer programs that evolve with characteristically similar information structures.Comprised of nine chapters, this book begins with a background on the development of the B5700/B6700 operating systems, paying particular attention to their hardware/software architecture. The discussion then turns to the block-structured p

  3. Computer-aided control system design

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.

    1986-01-01

    Control systems are typically implemented using conventional PID controllers, which are then tuned manually during plant commissioning to compensate for interactions between feedback loops. As plants increase in size and complexity, such controllers can fail to provide adequate process regulation. Multivariable methods can be utilized to overcome these limitations. At the Chalk River Nuclear Laboratories, modern control systems are designed and analyzed with the aid of MVPACK, a system of computer programs that appears to the user like a high-level calculator. The software package solves complicated control problems and provides useful insight into the dynamic response and stability of multivariable systems.

  4. Human-computer systems interaction backgrounds and applications 3

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa; Wtorek, Jerzy

    2014-01-01

    This book contains an interesting and state-of-the-art collection of papers on the recent progress in Human-Computer System Interaction (H-CSI). It contributes a profound description of the current status of the H-CSI field and also provides a solid base for further development and research in the discussed area. The contents of the book are divided into the following parts: I. General human-system interaction problems; II. Health monitoring and disabled people helping systems; and III. Various information processing systems. This book is intended for a wide audience of readers who are not necessarily experts in computer science, machine learning or knowledge engineering, but are interested in Human-Computer Systems Interaction. The level of the individual papers and their organization into the particular parts make this volume fascinating reading. It gives the reader a much deeper insight than he/she might glean from research papers or talks at conferences. It touches on all deep issues that ...

  5. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large-scale scientific computing have always been ahead of what the state-of-the-art hardware of the day, in the form of supercomputers, could supply. For any single-processor system, the limit to the increase in computing power was recognized some years ago. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large-scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The paper concludes with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  6. Introducing remarks upon the analysis of computer systems performance

    International Nuclear Information System (INIS)

    Baum, D.

    1980-05-01

    Some of the basic ideas of analytical techniques to study the behaviour of computer systems are presented. Single systems as well as networks of computers are viewed as stochastic dynamical systems which may be modelled by queueing networks. Therefore this report primarily serves as an introduction to probabilistic methods for qualitative analysis of systems. It is supplemented by an application example of Chandy's collapsing method. (orig.) [de
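
    As a concrete, much-simplified illustration of the queueing view of a computer system, the sketch below evaluates a single M/M/1 service centre (one processor serving a Poisson stream of jobs); the rates are hypothetical, and real analyses of the kind discussed in the report deal with networks of such centres.

      # Basic M/M/1 performance measures for a single service centre.

      def mm1_metrics(arrival_rate, service_rate):
          """Return utilisation, mean number of jobs in system, mean response time."""
          if arrival_rate >= service_rate:
              raise ValueError("unstable system: arrival rate must be below service rate")
          rho = arrival_rate / service_rate                    # utilisation
          mean_jobs = rho / (1.0 - rho)                        # L = rho / (1 - rho)
          mean_response = 1.0 / (service_rate - arrival_rate)  # W = 1 / (mu - lambda)
          return rho, mean_jobs, mean_response

      rho, L, W = mm1_metrics(arrival_rate=8.0, service_rate=10.0)   # jobs per second
      print(f"utilisation={rho:.2f}, jobs in system={L:.2f}, response time={W:.3f}s")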

  7. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
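
    The batch-processing idea can be sketched with Python's standard library alone: independent realizations are farmed out to a pool of worker processes and their outputs aggregated. The stand-in model function and the numbers below are illustrative only; the study itself used the Java Parallel Processing Framework around MODFLOW-based models.

      # Toy parallel Monte Carlo: run many stochastic realizations in a process pool.
      import random
      from concurrent.futures import ProcessPoolExecutor

      def run_realization(seed):
          """Stand-in for one stochastic groundwater model run (e.g. a MODFLOW call)."""
          rng = random.Random(seed)
          # pretend the model output is whether a location falls in the capture zone
          return rng.random() < 0.3

      if __name__ == "__main__":
          n_realizations = 500
          with ProcessPoolExecutor(max_workers=50) as pool:
              results = list(pool.map(run_realization, range(n_realizations)))
          print("estimated capture probability:", sum(results) / n_realizations)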

  8. Life system modeling and intelligent computing. Pt. II. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang; Irwin, George W. (eds.) [Belfast Queen' s Univ. (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Fei, Minrui; Jia, Li [Shanghai Univ. (China). School of Mechatronical Engineering and Automation

    2010-07-01

    This book is part II of a two-volume work that contains the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2010 and the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2010, held in Wuxi, China, in September 2010. The 194 revised full papers presented were carefully reviewed and selected from over 880 submissions and recommended for publication by Springer in two volumes of Lecture Notes in Computer Science (LNCS) and one volume of Lecture Notes in Bioinformatics (LNBI). This particular volume of Lecture Notes in Computer Science (LNCS) covers 7 relevant topics. The papers in this volume are organized in topical sections on advanced evolutionary computing theory and algorithms; advanced neural network and fuzzy system theory and algorithms; modeling and simulation of societies and collective behavior; biomedical signal processing, imaging, and visualization; intelligent computing and control in distributed power generation systems; intelligent methods in power and energy infrastructure development; intelligent modeling, monitoring, and control of complex nonlinear systems. (orig.)

  9. Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Friday, Adrian

    2009-01-01

    ... While such growth is positive, the newest generation of ubicomp practitioners and researchers, isolated to specific tasks, are in danger of losing their sense of history and the broader perspective that has been so essential to the field’s creativity and brilliance. Under the guidance of John Krumm ... applications; privacy protection in systems that connect personal devices and personal information; moving from the graphical to the ubiquitous computing user interface; techniques that are revolutionizing the way we determine a person’s location and understand other sensor measurements ... While we needn’t become...

  10. 9th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzyński, Marek; Woźniak, Michał; Żołnierek, Andrzej

    2016-01-01

    The computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 79 carefully selected articles contributed by experts in pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Features, learning, and classifiers; Biometrics; Data Stream Classification and Big Data Analytics; Image processing and computer vision; Medical applications; Applications; RGB-D perception: recent developments and applications. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers include researchers as well as students of computer science, artificial intelligence or robotics.

  11. Software For Computer-Aided Design Of Control Systems

    Science.gov (United States)

    Wette, Matthew

    1994-01-01

    Computer Aided Engineering System (CAESY) software developed to provide means to evaluate methods for dealing with users' needs in computer-aided design of control systems. Interpreter program for performing engineering calculations. Incorporates features of both Ada and MATLAB. Designed to be flexible and powerful. Includes internally defined functions, procedures and provides for definition of functions and procedures by user. Written in C language.

  12. The use of digital computers in CANDU shutdown systems

    International Nuclear Information System (INIS)

    Gilbert, R.S.; Komorowski, C.W.

    1986-01-01

    This paper summarizes the application of computers in CANDU shutdown systems. A general description of systems that are already in service is presented, along with a description of a fully computerized shutdown system which is scheduled to enter service in 1987. In reviewing the use of computers in the shutdown systems, there are three functional areas where computers have been or are being applied. These are (i) shutdown system monitoring, (ii) parameter display and testing, and (iii) shutdown initiation. In recent years various factors (References 1 and 2) have influenced the development and deployment of systems which have addressed two of these functions. At the present time a system is also being designed which addresses all of these areas in a comprehensive manner. This fully computerized shutdown system reflects the previous design and licensing experience which was gained in earlier applications. Prior to describing the specific systems which have been designed, a short summary of CANDU shutdown system characteristics is presented.

  13. Life system modeling and intelligent computing. Pt. I. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang; Irwin, George W. (eds.) [Belfast Queen' s Univ. (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Fei, Minrui; Jia, Li [Shanghai Univ. (China). School of Mechatronical Engineering and Automation

    2010-07-01

    This book is part I of a two-volume work that contains the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2010 and the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2010, held in Wuxi, China, in September 2010. The 194 revised full papers presented were carefully reviewed and selected from over 880 submissions and recommended for publication by Springer in two volumes of Lecture Notes in Computer Science (LNCS) and one volume of Lecture Notes in Bioinformatics (LNBI). This particular volume of Lecture Notes in Computer Science (LNCS) includes 55 papers covering 7 relevant topics. The 55 papers in this volume are organized in topical sections on intelligent modeling, monitoring, and control of complex nonlinear systems; autonomy-oriented computing and intelligent agents; advanced theory and methodology in fuzzy systems and soft computing; computational intelligence in utilization of clean and renewable energy resources; intelligent modeling, control and supervision for energy saving and pollution reduction; intelligent methods in developing vehicles, engines and equipments; computational methods and intelligence in modeling genetic and biochemical networks and regulation. (orig.)

  14. Selection and implementation of a laboratory computer system.

    Science.gov (United States)

    Moritz, V A; McMaster, R; Dillon, T; Mayall, B

    1995-07-01

    The process of selection of a pathology computer system has become increasingly complex as there are an increasing number of facilities that must be provided and stringent performance requirements under heavy computing loads from both human users and machine inputs. Furthermore, the continuing advances in software and hardware technology provide more options and innovative new ways of tackling problems. These factors taken together pose a difficult and complex set of decisions and choices for the system analyst and designer. The selection process followed by the Microbiology Department at Heidelberg Repatriation Hospital included examination of existing systems, development of a functional specification followed by a formal tender process. The successful tenderer was then selected using predefined evaluation criteria. The successful tenderer was a software development company that developed and supplied a system based on a distributed network using a SUN computer as the main processor. The software was written using Informix running on the UNIX operating system. This represents one of the first microbiology systems developed using a commercial relational database and fourth generation language. The advantages of this approach are discussed.

  15. A Nuclear Safety System based on Industrial Computer

    International Nuclear Information System (INIS)

    Kim, Ji Hyeon; Oh, Do Young; Lee, Nam Hoon; Kim, Chang Ho; Kim, Jae Hack

    2011-01-01

    The Plant Protection System (PPS), a nuclear safety Instrumentation and Control (I and C) system for Nuclear Power Plants (NPPs), generates a reactor trip on abnormal reactor conditions. The Core Protection Calculator System (CPCS) is a safety system that generates and transmits the channel trip signal to the PPS on an abnormal condition. Currently, these systems are designed on a Programmable Logic Controller (PLC) based platform, and it is necessary to consider a new system platform to adopt a simpler system configuration and an improved software development process. The CPCS was the first implementation using a microcomputer in a nuclear power plant safety protection system, in 1980, and it has since been deployed in Ulchin units 3, 4, 5, 6 and Younggwang units 3, 4, 5, 6. The CPCS software was developed on the Concurrent Micro5 minicomputer using assembly language and embedded into the Concurrent 3205 computer. Following the microcomputer-based CPCS, the PLC-based Common-Q platform has been used for the ShinKori/ShinWolsong units 1, 2 PPS and CPCS, and the POSAFE-Q PLC platform is used for the ShinUlchin units 1, 2 PPS and CPCS. In developing the next generation safety system platform, several factors (e.g., hardware/software reliability, flexibility, licensability and industrial support) can be considered. This paper suggests an Industrial Computer (IC) based protection system that can be developed with improved flexibility without losing system reliability. The IC-based system has the advantage of a simple system configuration with optimized processor boards because of improved processor performance and unlimited interoperability between the target system and the development system that use commercial CASE tools. This paper presents the background to selecting the IC-based system with a case study design of the CPCS. Eventually, this kind of platform can be used for nuclear power plant safety systems like the PPS, CPCS, Qualified Indication and Alarm-Pami (QIAS-P), and Engineering Safety

  16. A Nuclear Safety System based on Industrial Computer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Hyeon; Oh, Do Young; Lee, Nam Hoon; Kim, Chang Ho; Kim, Jae Hack [Korea Electric Power Corporation Engineering and Construction, Daejeon (Korea, Republic of)

    2011-05-15

    The Plant Protection System (PPS), a nuclear safety Instrumentation and Control (I and C) system for Nuclear Power Plants (NPPs), generates a reactor trip on abnormal reactor conditions. The Core Protection Calculator System (CPCS) is a safety system that generates and transmits the channel trip signal to the PPS on an abnormal condition. Currently, these systems are designed on a Programmable Logic Controller (PLC) based platform, and it is necessary to consider a new system platform to adopt a simpler system configuration and an improved software development process. The CPCS was the first implementation using a microcomputer in a nuclear power plant safety protection system, in 1980, and it has since been deployed in Ulchin units 3, 4, 5, 6 and Younggwang units 3, 4, 5, 6. The CPCS software was developed on the Concurrent Micro5 minicomputer using assembly language and embedded into the Concurrent 3205 computer. Following the microcomputer-based CPCS, the PLC-based Common-Q platform has been used for the ShinKori/ShinWolsong units 1, 2 PPS and CPCS, and the POSAFE-Q PLC platform is used for the ShinUlchin units 1, 2 PPS and CPCS. In developing the next generation safety system platform, several factors (e.g., hardware/software reliability, flexibility, licensability and industrial support) can be considered. This paper suggests an Industrial Computer (IC) based protection system that can be developed with improved flexibility without losing system reliability. The IC-based system has the advantage of a simple system configuration with optimized processor boards because of improved processor performance and unlimited interoperability between the target system and the development system that use commercial CASE tools. This paper presents the background to selecting the IC-based system with a case study design of the CPCS. Eventually, this kind of platform can be used for nuclear power plant safety systems like the PPS, CPCS, Qualified Indication and Alarm-Pami (QIAS-P), and Engineering Safety

  17. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,024 Intel Itanium processors) system. The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses of any modern computer room.

  18. Experimental comparison of two quantum computing architectures.

    Science.gov (United States)

    Linke, Norbert M; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A; Wright, Kenneth; Monroe, Christopher

    2017-03-28

    We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programmed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future.
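
    The connectivity effect can be illustrated with a small compilation experiment: the same circuit is transpiled onto a sparsely connected and a fully connected 5-qubit coupling map, and the resulting gate counts are compared. The sketch below uses Qiskit purely for illustration; the coupling maps and the circuit are assumptions and do not reproduce the devices or algorithms of the paper.

      # Compare how one circuit maps onto a limited vs. fully connected device;
      # extra SWAP insertions on the sparse map show up as additional gates.
      from qiskit import QuantumCircuit, transpile
      from qiskit.transpiler import CouplingMap

      qc = QuantumCircuit(5)
      qc.h(0)
      for target in range(1, 5):      # entangle qubit 0 with every other qubit
          qc.cx(0, target)
      qc.cx(1, 3)                     # interactions between "outer" qubits
      qc.cx(2, 4)

      star = CouplingMap([[0, 1], [1, 0], [0, 2], [2, 0],
                          [0, 3], [3, 0], [0, 4], [4, 0]])   # hub-and-spoke device
      full = CouplingMap.from_full(5)                         # all-to-all device

      for name, cmap in [("star", star), ("full", full)]:
          compiled = transpile(qc, coupling_map=cmap, optimization_level=1)
          print(name, dict(compiled.count_ops()))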

  19. Computer Application Systems at the University.

    Science.gov (United States)

    Bazewicz, Mieczyslaw

    1979-01-01

    The results of the WASC Project at the Technical University of Wroclaw have confirmed the possibility of constructing informatic systems based on the recognized size and specifics of user's needs (needs of the university) and provided some solutions to the problem of collaboration of computer systems at remote universities. (Author/CMV)

  20. Case Studies in Library Computer Systems.

    Science.gov (United States)

    Palmer, Richard Phillips

    Twenty descriptive case studies of computer applications in a variety of libraries are presented in this book. Computerized circulation, serial and acquisition systems in public, high school, college, university and business libraries are included. Each of the studies discusses: 1) the environment in which the system operates, 2) the objectives of…

  1. National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J. LLNL

    1998-01-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role it plays, CORBA was tested to ensure adequate performance.

  2. ON-BOARD COMPUTER SYSTEM FOR KITSAT-1 AND 2

    Directory of Open Access Journals (Sweden)

    H. S. Kim

    1996-06-01

    Full Text Available KITSAT-1 and 2 are microsatellites weighing 50 kg, and all the on-board data are processed by the on-board computer system. Hence, these on-board computers are required to be highly reliable and must be designed under tight power consumption, mass and size constraints. The on-board computer (OBC) systems for KITSAT-1 and 2 are therefore designed with simple, flexible hardware for reliability, and software takes more responsibility than hardware. The KITSAT-1 and 2 on-board computer system consists of the OBC186 as the primary OBC and the OBC80 as its backup. The OBC186 runs the spacecraft operating system (SCOS), which has real-time multi-tasking capability. Since their launch, the OBC186 and OBC80 have been operating successfully to this day. In this paper, we describe the development of the OBC186 hardware and software and analyze its in-orbit operation performance.

  3. Towards a global monitoring system for CMS computing operations

    CERN Multimedia

    CERN. Geneva; Bauerdick, Lothar A.T.

    2012-01-01

    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring has allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, and by rationalising and streamlining existing components and ...

  4. Experience with a distributed computing system for magnetic field analysis

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-08-01

    The development of a general purpose computer system, THESEUS, is described; its initial use has been for magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, and others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and the problems experienced are highlighted, together with a mention of possible future developments. (U.K.)

  5. Evolution of the Darlington NGS fuel handling computer systems

    International Nuclear Information System (INIS)

    Leung, V.; Crouse, B.

    1996-01-01

    The ability to improve the capabilities and reliability of digital control systems in nuclear power stations to meet changing plant and personnel requirements is a formidable challenge. Many of these systems have high quality assurance standards that must be met to ensure adequate nuclear safety. Also many of these systems contain obsolete hardware along with software that is not easily transported to newer technology computer equipment. Combining modern technology upgrades into a system of obsolete hardware components is not an easy task. Lastly, as users become more accustomed to using modern technology computer systems in other areas of the station (e.g. information systems), their expectations of the capabilities of the plant systems increase. This paper will present three areas of the Darlington NGS fuel handling computer system that have been or are in the process of being upgraded to current technology components within the framework of an existing fuel handling control system. (author). 3 figs

  6. Evolution of the Darlington NGS fuel handling computer systems

    Energy Technology Data Exchange (ETDEWEB)

    Leung, V; Crouse, B [Ontario Hydro, Bowmanville (Canada). Darlington Nuclear Generating Station

    1997-12-31

    The ability to improve the capabilities and reliability of digital control systems in nuclear power stations to meet changing plant and personnel requirements is a formidable challenge. Many of these systems have high quality assurance standards that must be met to ensure adequate nuclear safety. Also many of these systems contain obsolete hardware along with software that is not easily transported to newer technology computer equipment. Combining modern technology upgrades into a system of obsolete hardware components is not an easy task. Lastly, as users become more accustomed to using modern technology computer systems in other areas of the station (e.g. information systems), their expectations of the capabilities of the plant systems increase. This paper will present three areas of the Darlington NGS fuel handling computer system that have been or are in the process of being upgraded to current technology components within the framework of an existing fuel handling control system. (author). 3 figs.

  7. Risk assessment of computer-controlled safety systems for fusion reactors

    International Nuclear Information System (INIS)

    Fryer, M.O.; Bruske, S.Z.

    1983-01-01

    The complexity of fusion reactor systems and the need to display, analyze, and react promptly to large amounts of information during reactor operation will require a number of safety systems in the fusion facilities to be computer controlled. Computer software, therefore, must be included in the reactor safety analyses. Unfortunately, the science of integrating computer software into safety analyses is in its infancy. Combined plant hardware and computer software systems are often treated by making simple assumptions about software performance. This method is not acceptable for assessing risks in complex fusion systems, and a new technique for risk assessment of combined plant hardware and computer software systems has been developed. This technique is an extension of traditional fault tree analysis and uses structured flow charts of the software in a manner analogous to wiring or piping diagrams of hardware. The software logic determines the form of much of the fault trees.
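
    For illustration only, the sketch below evaluates the top-event probability of a tiny fault tree in which a software branch (as it might be read off a flow chart) appears alongside hardware and human basic events; the events are assumed independent and the probabilities are made up, so this is far simpler than the technique the paper describes.

      # Toy fault tree: top event = sensor fails OR (software check fails AND
      # operator misses the alarm).  Basic events are assumed independent.

      def or_gate(*probs):
          """P(at least one input event occurs) for independent events."""
          p_none = 1.0
          for p in probs:
              p_none *= (1.0 - p)
          return 1.0 - p_none

      def and_gate(*probs):
          """P(all input events occur) for independent events."""
          p_all = 1.0
          for p in probs:
              p_all *= p
          return p_all

      p_sensor_fail = 1e-3      # hardware basic event
      p_sw_check_fail = 5e-4    # software path fails to trip (flow-chart branch)
      p_operator_miss = 1e-1    # human backup fails

      p_top = or_gate(p_sensor_fail, and_gate(p_sw_check_fail, p_operator_miss))
      print(f"top event probability: {p_top:.2e}")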

  8. The ACP [Advanced Computer Program] multiprocessor system at Fermilab

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single-board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100. Both include the corresponding floating point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing ''nodes'' sit are connected via a high speed ''Branch Bus'' to one or more MicroVAX computers which act as hosts handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  9. Functional requirements for gas characterization system computer software

    International Nuclear Information System (INIS)

    Tate, D.D.

    1996-01-01

    This document provides the Functional Requirements for the Computer Software operating the Gas Characterization System (GCS), which monitors the combustible gases in the vapor space of selected tanks. Necessary computer functions are defined to support design, testing, operation, and change control. The GCS requires several individual computers to address the control and data acquisition functions of instruments and sensors. These computers are networked for communication and must multi-task to accommodate operation in parallel.

  10. Computer systems in the operation, maintenance and technical support of Loviisa NPS

    International Nuclear Information System (INIS)

    Tiitinen, M.

    1993-01-01

    A description is given of how the Loviisa nuclear power plant has utilized computers in many ways in the operation, maintenance, technical support and in other functions at the plant. The evolution of the computer system can be divided into the following phases: plant commissioning (1975-80), maintenance systems development (1981-84), second generation systems take-over (1985-90) and workstation client/server systems and PC's proliferation (1991->). A short description is given of the main systems at the Loviisa plant using computers, i.e. process computer systems, the plant information system, training simulator, vibration monitoring, laboratory computer systems, PC and workstation applications. (Z.S.) 4 refs

  11. Computer-Based Wireless Advertising Communication System

    Directory of Open Access Journals (Sweden)

    Anwar Al-Mofleh

    2009-10-01

    Full Text Available In this paper we developed a computer-based wireless advertising communication system (CBWACS) that enables the user to advertise whatever he wants from his own office to the screen in front of the customer via a wireless communication system. This system consists of two PIC microcontrollers, a transmitter, a receiver, an LCD, a serial cable and an antenna. The main advantages of the system are the wireless structure and that the system is less susceptible to noise and other interference because it uses digital communication techniques.

  12. Relationship between quality of care and choice of clinical computing system: retrospective analysis of family practice performance under the UK's quality and outcomes framework.

    Science.gov (United States)

    Kontopantelis, Evangelos; Buchan, Iain; Reeves, David; Checkland, Kath; Doran, Tim

    2013-08-02

    Objective: To investigate the relationship between performance on the UK Quality and Outcomes Framework pay-for-performance scheme and choice of clinical computer system. Design: Retrospective longitudinal study. Data: Data for 2007-2008 to 2010-2011, extracted from the clinical computer systems of general practices in England. Participants: All English practices participating in the pay-for-performance scheme: average 8257 each year, covering over 99% of the English population registered with a general practice. Main outcome measures: Levels of achievement on 62 quality-of-care indicators, measured as: reported achievement (levels of care after excluding inappropriate patients); population achievement (levels of care for all patients with the relevant condition); and percentage of available quality points attained. Multilevel mixed effects multiple linear regression models were used to identify population, practice and clinical computing system predictors of achievement. Results: Seven clinical computer systems were consistently active in the study period, collectively holding approximately 99% of the market share. Of all population and practice characteristics assessed, choice of clinical computing system was the strongest predictor of performance across all three outcome measures. Differences between systems were greatest for intermediate outcomes indicators (eg, control of cholesterol levels). Conclusions: Under the UK's pay-for-performance scheme, differences in practice performance were associated with the choice of clinical computing system. This raises the question of whether particular system characteristics facilitate higher quality of care, better data recording or both. Inconsistencies across systems need to be understood and addressed, and researchers need to be cautious when generalising findings from samples of providers using a single computing system.
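
    A minimal sketch of the kind of multilevel model described is given below, fitted to synthetic data with statsmodels; the variable names, effect sizes and the single practice-level random intercept are assumptions for illustration and do not reproduce the paper's model specification.

      # Hypothetical mixed-effects regression: indicator achievement as a function
      # of clinical computing system, with a random intercept for each practice.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_practices, n_years = 300, 4
      systems = rng.choice(["SystemA", "SystemB", "SystemC"], size=n_practices)

      rows = []
      for p in range(n_practices):
          practice_effect = rng.normal(0, 2)          # random intercept per practice
          system_effect = {"SystemA": 0.0, "SystemB": 1.5, "SystemC": -1.0}[systems[p]]
          for year in range(n_years):
              rows.append({"practice": p, "system": systems[p], "year": year,
                           "achievement": 85 + system_effect + practice_effect
                                          + rng.normal(0, 3)})
      df = pd.DataFrame(rows)

      model = smf.mixedlm("achievement ~ C(system) + year", data=df,
                          groups=df["practice"])
      print(model.fit().summary())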

  13. Innovation of the computer system for the WWER-440 simulator

    International Nuclear Information System (INIS)

    Schrumpf, L.

    1988-01-01

    The configuration of the WWER-440 simulator computer system consists of four SMEP computers. The basic data processing unit consists of two interlinked SM 52/11.M1 computers with 1 MB of main memory. This part of the simulator computer system controls the operation of the entire simulator, processes the programs for technology behavior simulation, the unit information system and other special systems, and guarantees program support and the operation of the instructor's console. An SM 52/11 computer with 256 kB of main memory is connected to each unit. It is used as a communication unit for data transmission using the DASIO 600 interface. Semigraphic color displays are based on the microprocessor modules of the SM 50/40 and SM 53/10 kit, supplemented with a modified TESLA COLOR 110 ST TV receiver. (J.B.). 1 fig.

  14. Overview of the DIII-D program computer systems

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1997-11-01

    Computer systems pervade every aspect of the DIII-D National Fusion Research program. This includes real-time systems acquiring experimental data from data acquisition hardware; CPU server systems performing short-term and long-term data analysis; desktop activities such as word processing, spreadsheets, and scientific paper publication; and systems providing mechanisms for remote collaboration. The DIII-D network ties all of these systems together and connects to the ESNET wide area network. This paper will give an overview of these systems, including their purposes and functionality and how they connect to other systems. Computer systems include seven different types of UNIX systems (HP-UX, REALIX, SunOS, Solaris, Digital UNIX, Ultrix, and IRIX), OpenVMS systems (both VAX and Alpha), Macintosh, Windows 95, and more recently Windows NT systems. Most of the network internally is Ethernet with some use of FDDI. A T3 link connects to ESNET and thus to the Internet. Recent upgrades to the network have notably improved its efficiency, but the demand for bandwidth is ever increasing. By means of software and mechanisms still in development, computer systems at remote sites are playing an increasing role both in accessing and analyzing data and even in participating in certain controlling aspects of the experiment. The advent of audio/video over the Internet is now presenting a new means for remote sites to participate in the DIII-D program.

  15. Engaging Parents through Better Communication Systems

    Science.gov (United States)

    Kraft, Matthew A.

    2017-01-01

    Matthew A. Kraft, an assistant professor of education and economics at Brown University, highlights new research showing that frequent, personalized outreach to parents can boost parent engagement and student achievement. He offers tips on how schools can create infrastructures, including digital technology tools, to better support such…

  16. Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering

    CERN Document Server

    Elleithy, Khaled

    2013-01-01

    Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning. This book includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2010). The proceedings are a set of rigorously reviewed world-class manuscripts presenting the state of international practice in Innovative Algorithms and Techniques in Automation, Industrial Electronics and Telecommunications.

  17. Computational simulation of concurrent engineering for aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  18. Computational simulation for concurrent engineering of aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  19. System and method for controlling power consumption in a computer system based on user satisfaction

    Science.gov (United States)

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
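
    A minimal sketch of the selection step is given below, assuming a stored table of (user, application, frequency) -> satisfaction scores learned from earlier feedback; all names, frequencies and scores are hypothetical and stand in for the relationship information the patent describes.

      # Pick the lowest discrete CPU frequency whose predicted user satisfaction
      # stays above a threshold for a given (user, application) pair.

      AVAILABLE_FREQS_MHZ = [800, 1200, 1600, 2000, 2400]

      # Stored relationship information: (user, app, frequency) -> satisfaction in [0, 1].
      SATISFACTION = {
          ("alice", "browser", 800): 0.55, ("alice", "browser", 1200): 0.80,
          ("alice", "browser", 1600): 0.92, ("alice", "browser", 2000): 0.95,
          ("alice", "browser", 2400): 0.96,
      }

      def select_frequency(user, app, threshold=0.90):
          """Lowest frequency meeting the satisfaction threshold; if none
          qualifies, fall back to the highest available frequency."""
          for freq in AVAILABLE_FREQS_MHZ:                    # ascending order
              if SATISFACTION.get((user, app, freq), 0.0) >= threshold:
                  return freq
          return AVAILABLE_FREQS_MHZ[-1]

      print(select_frequency("alice", "browser"))             # 1600 with this table

    Running at the lowest frequency that still satisfies the user is what saves power relative to always running at the maximum frequency.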

  20. Application node system image manager subsystem within a distributed function laboratory computer system

    International Nuclear Information System (INIS)

    Stubblefield, F.W.; Beck, R.D.

    1978-10-01

    A computer system to control and acquire data from one x-ray diffraction, five neutron scattering, and four neutron diffraction experiments located at the Brookhaven National Laboratory High Flux Beam Reactor has operated in a routine manner for over three years. The computer system is configured as a network of computer processors with the processor interconnections assuming a star-like structure. At the points of the star are the ten experiment control-data acquisition computers, referred to as application nodes. At the center of the star is a shared service node which supplies a set of shared services utilized by all of the application nodes. A program development node occupies one additional point of the star. The design and implementation of a network subsystem to support development and execution of operating systems for the application nodes is described. 6 figures, 1 table

  1. Military clouds: utilization of cloud computing systems at the battlefield

    Science.gov (United States)

    Sarıkürk, Süleyman; Karaca, Volkan; Kocaman, İbrahim; Şirzai, Ahmet

    2012-05-01

    Cloud computing is known as a novel information technology (IT) concept, which involves facilitated and rapid access to networks, servers, data storage media, applications and services via the Internet with minimum hardware requirements. Use of information systems and technologies at the battlefield is not new. Information superiority is a force multiplier and is crucial to mission success. Recent advances in information systems and technologies provide new means to decision makers and users in order to gain information superiority. These developments in information technologies lead to a new term, which is known as network centric capability. Similar to network centric capable systems, cloud computing systems are operational today. In the near future, extensive use of military clouds at the battlefield is predicted. Integrating cloud computing logic into network centric applications will increase the flexibility, cost-effectiveness, efficiency and accessibility of network-centric capabilities. In this paper, cloud computing and network centric capability concepts are defined. Some commercial cloud computing products and applications are mentioned. Network centric capable applications are covered. Cloud computing supported battlefield applications are analyzed. The effects of cloud computing systems on network centric capability and on the information domain in future warfare are discussed. Battlefield opportunities and novelties which might be introduced to network centric capability by cloud computing systems are researched. The role of military clouds in future warfare is proposed in this paper. It was concluded that military clouds will be indispensable components of the future battlefield. Military clouds have the potential of improving network centric capabilities, increasing situational awareness at the battlefield and facilitating the settlement of information superiority.

  2. A computer-controlled conformal radiotherapy system. IV: Electronic chart

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; McShan, Daniel L.; Matrone, Gwynne M.; Weaver, Tamar A.; Lewis, James D.; Kessler, Marc L.

    1995-01-01

    Purpose: The design and implementation of a system for electronically tracking relevant plan, prescription, and treatment data for computer-controlled conformal radiation therapy is described. Methods and Materials: The electronic charting system is implemented on a computer cluster coupled by high-speed networks to computer-controlled therapy machines. A methodical approach to the specification and design of an integrated solution has been used in developing the system. The electronic chart system is designed to allow identification and access of patient-specific data including treatment-planning data, treatment prescription information, and charting of doses. An in-house developed database system is used to provide an integrated approach to the database requirements of the design. A hierarchy of databases is used for both centralization and distribution of the treatment data for specific treatment machines. Results: The basic electronic database system has been implemented and has been in use since July 1993. The system has been used to download and manage treatment data on all patients treated on our first fully computer-controlled treatment machine. To date, electronic dose charting functions have not been fully implemented clinically, requiring the continued use of paper charting for dose tracking. Conclusions: The routine clinical application of complex computer-controlled conformal treatment procedures requires the management of large quantities of information for describing and tracking treatments. An integrated and comprehensive approach to this problem has led to a full electronic chart for conformal radiation therapy treatments

  3. Reactor safety: the Nova computer system

    International Nuclear Information System (INIS)

    Eisgruber, H.; Stadelmann, W.

    1991-01-01

    After instances of maloperation, the causes of the defects, the effectiveness of the measures taken to control the situation, and possibilities for avoiding future recurrences need to be investigated, above all before the plant is restarted. The most important aspect of all these efforts is to check the sequence in time, and the completeness, of the control measures initiated automatically. For this verification, a computer system that produces the necessary information almost in real time is used instead of time-consuming manual analytical techniques; the results are available within minutes after completion of the automatically initiated measures. As all short-term safety functions are initiated by automatic systems, their consistent and comprehensive verification results in a clearly higher level of safety. The report covers the development of the computer system and its implementation in the Gundremmingen nuclear power station. Similar plans are being pursued in Biblis and Muelheim-Kaerlich. (orig.) [de

  4. 8th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzynski, Marek; Wozniak, Michał; Zolnierek, Andrzej

    2013-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 86 carefully selected articles contributed by experts in pattern recognition and reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Biometrics; Data Stream Classification and Big Data Analytics; Features, Learning, and Classifiers; Image Processing and Computer Vision; Medical Applications; Miscellaneous Applications; Pattern Recognition and Image Processing in Robotics; Speech and Word Recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers include researchers as well as students of computer science, artificial intelligence and robotics.

  5. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  6. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. · Enables readers to address a variety of security threats to embedded hardware and software; · Describes design of secure wireless sensor networks, to address secure authen...

  7. Cardiology office computer use: primer, pointers, pitfalls.

    Science.gov (United States)

    Shepard, R B; Blum, R I

    1986-10-01

    An office computer is a utility, like an automobile, with benefits and costs that are both direct and hidden and potential for disaster. For the cardiologist or cardiovascular surgeon, the increasing power and decreasing costs of computer hardware and the availability of software make use of an office computer system an increasingly attractive possibility. Management of office business functions is common; handling and scientific analysis of practice medical information are less common. The cardiologist can also access national medical information systems for literature searches and for interactive further education. Selection and testing of programs and the entire computer system before purchase of computer hardware will reduce the chances of disappointment or serious problems. Personnel pretraining and planning for office information flow and medical information security are necessary. Some cardiologists design their own office systems, buy hardware and software as needed, write programs for themselves and carry out the implementation themselves. For most cardiologists, the better course will be to take advantage of the professional experience of expert advisors. This article provides a starting point from which the practicing cardiologist can approach considering, specifying or implementing an office computer system for business functions and for scientific analysis of practice results.

  8. modeling workflow management in a distributed computing system

    African Journals Online (AJOL)

    Dr Obe

    communication system, which allows for computerized support. ... Keywords: Distributed computing system; Petri nets; Workflow management. ... A distributed operating system usually ... the questionnaire is returned with invalid data ...

  9. Computer-Aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1996-09-27

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. It outlines the negotiated requirements as agreed to by GTE Northwest during technical contract discussions. The system is a commercial off-the-shelf computer dispatching system providing both text and graphic display information while interfacing with the diverse alarm reporting systems within the Hanford Site. The system provides expansion capability to integrate Hanford Fire and the Occurrence Notification Center, and also provides back-up capability for the Plutonium Finishing Plant (PFP).

  10. Computer-aided instruction system; Systeme d'enseignement programme par ordinateur

    Energy Technology Data Exchange (ETDEWEB)

    Teneze, Jean Claude

    1968-12-18

    This research thesis addresses the use of teleprocessing and time sharing by the RAX IBM system and the possibility of introducing a dialogue with the machine in order to develop an application in which the computer plays the role of a teacher for different pupils at the same time. Two operating modes are thus exploited: a teacher mode and a pupil mode. The developed CAI (computer-aided instruction) system comprises a checker that checks the course syntax in teacher mode, a translator that transcodes the course written in teacher mode into a form which can be processed by the execution programme, and the execution programme itself, which presents the course in pupil mode.

  11. Primary Health Care Software-A Computer Based Data Management System

    Directory of Open Access Journals (Sweden)

    Tuli K

    1990-01-01

    Full Text Available Realising the duplication and time consumption in the usual manual system of data collection necessitated experimentation with a computer based management system for primary health care in the primary health centers. The details of the population available in the existing manual system were used for computerizing the data. Software was designed for data entry and analysis and was written in the dBase III Plus language. It was so designed that a person with no knowledge of computers could use it. A cost analysis was done, and the computer system was found to be more cost effective than the usual manual system.

  12. Introduction to the special section: Designing a better user experience for self-service systems

    NARCIS (Netherlands)

    van der Geest, Thea; Ramey, J.; Rosenbaum, S.; van Velsen, Lex Stefan

    2013-01-01

    The June 2013 issue of IEEE Transactions on Professional Communication features a special section on 'Designing a Better User Experience for Self-Service Systems'. Self-service systems offer users the benefit of 24/7 access to an ever-growing range of services and perhaps also a strong sense of

  13. Quality study of portal images acquired by computed radiography and screen-film system under megavoltage ray

    International Nuclear Information System (INIS)

    Cao Guoquan; Jin Xiance; Wu Shixiu; Xie Congying; Zhang Li; Yu Jianyi; Li Yueqing

    2007-01-01

    Objective: To evaluate the quality of portal images acquired with a computed radiography (CR) system and with a conventional screen-film system. Methods: Imaging plates (IP) and X-ray films of a home-devised lead phantom with a leakage of 6.45% were acquired, and modulation transfer function (MTF) curves of both image types were measured using the edge method. Portal images of 40 nasopharyngeal cancer patients were acquired with the IP and screen-film systems, respectively. Two doctors with similar experience evaluated the degree of damage to the petrosal bone, and receiver operating characteristic (ROC) curves for the CR images and the conventional images were drawn from the two doctors' evaluations. Results: The identification frequencies of the CR system and the screen-film system were 1.159 and 0.806 lp/mm, respectively. For doctor one, the areas under the ROC curve for the CR images and the conventional images were 0.802 and 0.742, respectively; for doctor two, they were 0.751 and 0.600. Both the MTF curve and the ROC curve of CR are better than those of the screen-film system. Conclusion: The image quality of CR portal imaging is much better than that of the screen-film system, and the use of CR on a linear accelerator for portal imaging is clinically promising. (authors)
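
    For readers unfamiliar with the edge method cited in the Methods, the sketch below shows the usual processing chain: differentiate the edge spread function into a line spread function and take its normalised Fourier magnitude as the MTF in lp/mm. It is illustrative only; the synthetic sigmoid edge and the 0.1 mm pixel pitch are assumptions, not the study's data.

```python
# Illustrative sketch of the edge method (not the authors' code): differentiate
# the edge spread function (ESF) into a line spread function (LSF), then take
# the normalised FFT magnitude as the MTF.
import numpy as np

def mtf_from_edge(esf, pixel_pitch_mm):
    lsf = np.gradient(esf)                                # ESF -> LSF
    lsf = lsf / lsf.sum()                                 # unit-area LSF
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                                    # MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)   # cycles/mm (lp/mm)
    return freqs, mtf

if __name__ == "__main__":
    x = np.linspace(-5.0, 5.0, 256)
    esf = 1.0 / (1.0 + np.exp(-x / 0.4))                  # synthetic blurred edge
    freqs, mtf = mtf_from_edge(esf, pixel_pitch_mm=0.1)
    print(freqs[:4], mtf[:4])
```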

  14. Interactive computer-enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)

    1995-10-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  15. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    Science.gov (United States)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of non-parallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
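
    A minimal sketch of the task-ratio idea follows. The queueing-style utilisation estimate and all numeric values are hypothetical and are not the authors' analytical model.

```python
# Illustrative sketch (not the paper's analytical model): the task-ratio metric
# and a rough estimate of the CPU fraction left over for parallel work once
# owner processes, which have preemptive priority, are served. All numbers are
# hypothetical.

def task_ratio(parallel_task_demand, mean_owner_demand):
    """Ratio of the per-task parallel demand to the mean service demand of
    non-parallel (owner) workstation processes."""
    return parallel_task_demand / mean_owner_demand

def fraction_free_for_parallel(owner_arrival_rate, mean_owner_demand):
    """1 - rho, with rho = lambda * E[S] the utilisation due to owner processes."""
    rho = owner_arrival_rate * mean_owner_demand
    return max(0.0, 1.0 - rho)

if __name__ == "__main__":
    r = task_ratio(parallel_task_demand=50.0, mean_owner_demand=0.5)
    free = fraction_free_for_parallel(owner_arrival_rate=0.4, mean_owner_demand=0.5)
    print(f"task ratio = {r:.0f}, CPU fraction free for parallel tasks = {free:.2f}")
```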

  16. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    Science.gov (United States)

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  17. Design and applications of Computed Industrial Tomographic Imaging System (CITIS)

    International Nuclear Information System (INIS)

    Ramakrishna, G.S.; Umesh Kumar; Datta, S.S.; Rao, S.M.

    1996-01-01

    Computed tomographic imaging is an advanced technique for nondestructive testing (NDT) and examination. For the first time in India, a computer-aided tomography system for testing industrial components has been indigenously developed at BARC and successfully demonstrated. In addition to computed tomography (CT), the system can also perform digital radiography (DR), making it a powerful tool for NDT applications in the nuclear, space and allied fields. The authors have developed a computed industrial tomographic imaging system with a cesium-137 gamma radiation source for nondestructive examination of engineering and industrial specimens. This presentation highlights the design and development of the prototype system and its software for image reconstruction, simulation and display. The paper also describes results obtained with several test specimens, current development, and the possibility of using neutrons as well as high energy X-rays in computed tomography. (author)

  18. Statistical properties of dynamical systems – Simulation and abstract computation

    International Nuclear Information System (INIS)

    Galatolo, Stefano; Hoyrup, Mathieu; Rojas, Cristóbal

    2012-01-01

    Highlights: ► A survey on results about computation and computability on the statistical properties of dynamical systems. ► Computability and non-computability results for invariant measures. ► A short proof for the computability of the convergence speed of ergodic averages. ► A kind of “constructive” version of the pointwise ergodic theorem. - Abstract: We survey an area of recent development, relating dynamics to theoretical computer science. We discuss some aspects of the theoretical simulation and computation of the long term behavior of dynamical systems. We will focus on the statistical limiting behavior and invariant measures. We present a general method allowing the algorithmic approximation at any given accuracy of invariant measures. The method can be applied in many interesting cases, as we shall explain. On the other hand, we exhibit some examples where the algorithmic approximation of invariant measures is not possible. We also explain how it is possible to compute the speed of convergence of ergodic averages (when the system is known exactly) and how this entails the computation of arbitrarily good approximations of points of the space having typical statistical behaviour (a sort of constructive version of the pointwise ergodic theorem).
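
    The sketch below illustrates the basic computation behind such results: approximating a space average by a Birkhoff time average. The logistic map, the observable and the orbit length are assumptions chosen so that the exact answer is known.

```python
# Minimal sketch of approximating an invariant-measure (space) average by a
# Birkhoff time average. The logistic map x -> 4x(1-x) is an assumption chosen
# because its invariant density 1/(pi*sqrt(x(1-x))) is known, so the space
# average of the observable f(x) = x is exactly 1/2.

def birkhoff_average(step, observable, x0, n):
    """Time average (1/n) * sum_{k<n} observable(x_k) along the orbit of x0."""
    x, total = x0, 0.0
    for _ in range(n):
        total += observable(x)
        x = step(x)
    return total / n

if __name__ == "__main__":
    logistic = lambda x: 4.0 * x * (1.0 - x)
    time_avg = birkhoff_average(logistic, lambda x: x, x0=0.2137, n=100_000)
    print(f"time average = {time_avg:.4f} (space average = 0.5)")
```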

  19. Industrial Personal Computer based Display for Nuclear Safety System

    International Nuclear Information System (INIS)

    Kim, Ji Hyeon; Kim, Aram; Jo, Jung Hee; Kim, Ki Beom; Cheon, Sung Hyun; Cho, Joo Hyun; Sohn, Se Do; Baek, Seung Min

    2014-01-01

    The safety display of a nuclear system has been classified as important to safety (SIL: Safety Integrity Level 3). These days, regulatory agencies are imposing stricter safety requirements on digital safety display systems. To satisfy these requirements, it is necessary to develop a safety-critical (SIL 4) grade safety display system. This paper proposes an industrial personal computer based safety display system with a safety grade operating system and safety grade display methods. The description consists of three parts: the background, the safety requirements, and the proposed safety display system design. The hardware platform is designed using a commercially available off-the-shelf processor board with a backplane bus. The operating system is customized for the nuclear safety display application. The display unit design adopts two improvement features: one is to provide two separate processors for the main computer and the display device, linked by serial communication, and the other is to use a Digital Visual Interface between the main computer and the display device. In this case the main computer uses minimized graphic functions for the safety display. The display design is at the conceptual phase, and several open areas remain to be made concrete for a solid system. The main purpose of this paper is to describe and suggest a methodology for developing a safety-critical display system, and the descriptions are focused on the safety requirement point of view.

  20. Industrial Personal Computer based Display for Nuclear Safety System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Hyeon; Kim, Aram; Jo, Jung Hee; Kim, Ki Beom; Cheon, Sung Hyun; Cho, Joo Hyun; Sohn, Se Do; Baek, Seung Min [KEPCO, Youngin (Korea, Republic of)

    2014-08-15

    The safety display of a nuclear system has been classified as important to safety (SIL: Safety Integrity Level 3). These days, regulatory agencies are imposing stricter safety requirements on digital safety display systems. To satisfy these requirements, it is necessary to develop a safety-critical (SIL 4) grade safety display system. This paper proposes an industrial personal computer based safety display system with a safety grade operating system and safety grade display methods. The description consists of three parts: the background, the safety requirements, and the proposed safety display system design. The hardware platform is designed using a commercially available off-the-shelf processor board with a backplane bus. The operating system is customized for the nuclear safety display application. The display unit design adopts two improvement features: one is to provide two separate processors for the main computer and the display device, linked by serial communication, and the other is to use a Digital Visual Interface between the main computer and the display device. In this case the main computer uses minimized graphic functions for the safety display. The display design is at the conceptual phase, and several open areas remain to be made concrete for a solid system. The main purpose of this paper is to describe and suggest a methodology for developing a safety-critical display system, and the descriptions are focused on the safety requirement point of view.

  1. Cloud Computing Techniques for Space Mission Design

    Science.gov (United States)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems producing better results, and faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  2. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  3. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  4. Computer control system for ⁶⁰Co industrial DR nondestructive testing system

    CERN Document Server

    Chen Hai Jun

    2002-01-01

    The author presents the application of a ⁶⁰Co industrial DR nondestructive testing system, including the control of the stepper motor, electrical protection, and the computer monitoring program. The computer control system has good performance, high reliability and low cost.

  5. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other AT&T's 32100; both include the corresponding floating point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high speed "Branch Bus" to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use and has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  6. Computer Security: Competing Concepts

    OpenAIRE

    Nissenbaum, Helen; Friedman, Batya; Felten, Edward

    2001-01-01

    This paper focuses on a tension we discovered in the philosophical part of our multidisciplinary project on values in web-browser security. Our project draws on the methods and perspectives of empirical social science, computer science, and philosophy to identify values embodied in existing web-browser security and also to prescribe changes to existing systems (in particular, Mozilla) so that values relevant to web-browser systems are better served than presently they are. The tension, which ...

  7. Innovations and Advances in Computer, Information, Systems Sciences, and Engineering

    CERN Document Server

    Sobh, Tarek

    2013-01-01

    Innovations and Advances in Computer, Information, Systems Sciences, and Engineering includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2011). The contents of this book are a set of rigorously reviewed, world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology and Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.

  8. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...

  9. Fail-safe design criteria for computer-based reactor protection systems

    International Nuclear Information System (INIS)

    Keats, A.B.

    1980-01-01

    The increasing quantity and complexity of the instrumentation required in nuclear power plants provides a strong incentive for using on-line computers as the basis of the control and protection systems. On-line computers using multiplexed sampled data are already well established, but their application to nuclear reactor protection systems requires special measures to satisfy the very high reliability which is demanded in the interests of safety and availability. Some existing codes of practice relating to segregation of replicated subsystems continue to be applicable and lead to division of the computer functions into two distinct parts. The first computer, referred to as the Trip Algorithm Computer, may also control the multiplexer. Voting on each group of status inputs yielded by the trip algorithm computers is performed by the Vote Algorithm Computer. The conceptual disparities between hardwired reactor-protection systems and those employing computers also give rise to a need for some new criteria. An important objective of these criteria is to minimise the need for a failure-mode-and-effect analysis of the computer software; this is achieved almost entirely by 'hardware' properties of the system: the systematic use of hardwired test inputs which cause excursions of the trip algorithms into the tripped state in a uniquely ordered but easily recognisable sequence, and the use of hardwired 'pattern recognition logic' which generates a dynamic 'healthy' stimulus for the shutdown actuators only in response to the unique sequence generated by the hardwired input signal pattern. The adoption of the proposed design criteria ensures not only failure-to-safety in the hardware but also the elimination, or at least minimisation, of dependence on the correct functioning of the computer software for the safety system. (auth)
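
    As an illustration of the voting stage described above, the sketch below implements a plain 2-out-of-3 majority vote in which an unreadable channel defaults to the tripped (safe) state; the channel count and the fail-safe default are assumptions for illustration, not the paper's specification.

```python
# Illustrative 2-out-of-3 majority vote for the voting stage described above;
# an unreadable channel (None) counts as tripped, reflecting a fail-safe bias.

TRIP, HEALTHY = True, False

def vote_2oo3(channel_states):
    """Return TRIP when at least two of three redundant trip-algorithm channels
    demand a trip; a missing/invalid reading (None) is treated as TRIP."""
    normalised = [TRIP if state is None else state for state in channel_states]
    return sum(normalised) >= 2

if __name__ == "__main__":
    print(vote_2oo3([HEALTHY, TRIP, TRIP]))      # True: trip demanded
    print(vote_2oo3([HEALTHY, None, TRIP]))      # True: failed channel read as trip
    print(vote_2oo3([HEALTHY, HEALTHY, TRIP]))   # False: no trip
```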

  10. Transmission computed tomography data acquisition with a SPECT system

    International Nuclear Information System (INIS)

    Greer, K.L.; Harris, C.C.; Jaszczak, R.J.; Coleman, R.E.; Hedlund, L.W.; Floyd, C.E.; Manglos, S.H.

    1987-01-01

    Phantom and animal transmission computed tomography (TCT) scans were performed with a camera-based single photon emission computed tomography (SPECT) system to determine system linearity as a function of object density, which is important in the accurate determination of attenuation coefficients for SPECT attenuation compensation. Results from phantoms showed promise in providing a linear relationship in measuring density while maintaining good image resolution. Animal images were essentially free of artifacts. Transmission computed tomography scans derived from a SPECT system appear to have the potential to provide data suitable for incorporation in an attenuation compensation algorithm at relatively low (calculated) radiation doses to the subjects
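
    The quantity whose linearity with density is being checked is the linear attenuation coefficient, which follows from a transmission measurement through the Beer-Lambert relation sketched below; the counts and phantom thickness are hypothetical.

```python
# The Beer-Lambert relation behind the linearity check: a transmission
# measurement I/I0 through a slab of thickness t gives the linear attenuation
# coefficient mu = -ln(I/I0)/t. The counts and thickness below are hypothetical.
import math

def linear_attenuation_coefficient(i_transmitted, i_incident, thickness_cm):
    """mu in 1/cm from a narrow-beam transmission measurement."""
    return -math.log(i_transmitted / i_incident) / thickness_cm

if __name__ == "__main__":
    mu = linear_attenuation_coefficient(i_transmitted=2.2e5,
                                        i_incident=1.0e6,
                                        thickness_cm=10.0)
    print(f"mu = {mu:.3f} cm^-1")   # about 0.15 cm^-1, water-like at ~140 keV
```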

  11. Intelligent systems and soft computing for nuclear science and industry

    International Nuclear Information System (INIS)

    Ruan, D.; D'hondt, P.; Govaerts, P.; Kerre, E.E.

    1996-01-01

    The second international workshop on Fuzzy Logic and Intelligent Technologies in Nuclear Science (FLINS) addresses topics related to intelligent systems and soft computing for nuclear science and industry. The proceedings contain 52 papers in different fields such as radiation protection, nuclear safety (human factors and reliability), safeguards, nuclear reactor control, production processes in the fuel cycle, dismantling, waste and disposal, and decision making. A clear link is made between theory and applications of fuzzy logic such as neural networks, expert systems, robotics, man-machine interfaces, and decision-support techniques by using modern and advanced technologies and tools. The papers are grouped in three sections. The first section (Soft computing techniques) deals with basic tools to treat fuzzy logic, neural networks, genetic algorithms, decision-making, and software used for general soft-computing aspects. The second section (Intelligent engineering systems) includes contributions on engineering problems such as knowledge-based engineering, expert systems, process control integration, diagnosis, measurements, and interpretation by soft computing. The third section (Nuclear applications) focuses on the application of soft computing and intelligent systems in nuclear science and industry.

  12. Honeywell modular automation system computer software documentation

    International Nuclear Information System (INIS)

    Cunningham, L.T.

    1997-01-01

    This document provides a Computer Software Documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control new thermal stabilization furnaces in HA-21I

  13. The Cc1 Project – System For Private Cloud Computing

    Directory of Open Access Journals (Sweden)

    J Chwastowski

    2012-01-01

    Full Text Available The main features of the Cloud Computing system developed at IFJ PAN are described. The project is financed from the structural resources provided by the European Commission and the Polish Ministry of Science and Higher Education (Innovative Economy, National Cohesion Strategy). The system delivers a solution for carrying out computer calculations on a Private Cloud computing infrastructure. It consists of an intuitive Web based user interface, a module for the users and resources administration and the standard EC2 interface implementation. Thanks to the distributed character of the system it allows for the integration of a geographically distant federation of computer clusters within a uniform user environment.

  14. Design of Computer Fault Diagnosis and Troubleshooting System ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-12-01

    Dec 1, 2013 ... Department of Computer Science, Cross River University ... owners in dealing with their computer problems especially when the time is limited and human expert is not ... questions with the system responding to each of the ...

  15. Computer systems and networks status and perspectives

    CERN Document Server

    Zacharov, V

    1981-01-01

    The properties of computers are discussed, both as separate units and in inter-coupled systems. The main elements of modern processor technology are reviewed and the associated peripheral components are discussed in the light of the prevailing rapid pace of developments. Particular emphasis is given to the impact of very large scale integrated circuitry in these developments. Computer networks are considered in some detail, including common-carrier and local-area networks, and the problem of inter-working is included in the discussion. Components of network systems and the associated technology are also among the topics treated.

  16. Computer systems and networks: Status and perspectives

    International Nuclear Information System (INIS)

    Zacharov, Z.

    1981-01-01

    The properties of computers are discussed, both as separate units and in inter-coupled systems. The main elements of modern processor technology are reviewed and the associated peripheral components are discussed in the light of the prevailing rapid pace of developments. Particular emphasis is given to the impact of very large scale integrated circuitry in these developments. Computer networks are considered in some detail, including common-carrier and local-area networks, and the problem of inter-working is included in the discussion. Components of network systems and the associated technology are also among the topics treated. (orig.)

  17. An operating system for future aerospace vehicle computer systems

    Science.gov (United States)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects, in order to implement both the autonomy of and the cooperation between nodes, are developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.

  18. Evaluation of computer-aided detection and diagnosis systems.

    Science.gov (United States)

    Petrick, Nicholas; Sahiner, Berkman; Armato, Samuel G; Bert, Alberto; Correale, Loredana; Delsanto, Silvia; Freedman, Matthew T; Fryd, David; Gur, David; Hadjiiski, Lubomir; Huo, Zhimin; Jiang, Yulei; Morra, Lia; Paquerault, Sophie; Raykar, Vikas; Samuelson, Frank; Summers, Ronald M; Tourassi, Georgia; Yoshida, Hiroyuki; Zheng, Bin; Zhou, Chuan; Chan, Heang-Ping

    2013-08-01

    Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for detection and interpretation of diseases. Computer-aided detection systems mark regions of an image that may reveal specific abnormalities and are used to alert clinicians to these regions during image interpretation. Computer-aided diagnosis systems provide an assessment of a disease using image-based information alone or in combination with other relevant diagnostic data and are used by clinicians as a decision support in developing their diagnoses. While CAD systems are commercially available, standardized approaches for evaluating and reporting their performance have not yet been fully formalized in the literature or in a standardization effort. This deficiency has led to difficulty in the comparison of CAD devices and in understanding how the reported performance might translate into clinical practice. To address these important issues, the American Association of Physicists in Medicine (AAPM) formed the Computer Aided Detection in Diagnostic Imaging Subcommittee (CADSC), in part, to develop recommendations on approaches for assessing CAD system performance. The purpose of this paper is to convey the opinions of the AAPM CADSC members and to stimulate the development of consensus approaches and "best practices" for evaluating CAD systems. Both the assessment of a standalone CAD system and the evaluation of the impact of CAD on end-users are discussed. It is hoped that awareness of these important evaluation elements and the CADSC recommendations will lead to further development of structured guidelines for CAD performance assessment. Proper assessment of CAD system performance is expected to increase the understanding of a CAD system's effectiveness and limitations, which is expected to stimulate further research and development efforts on CAD technologies, reduce problems due to improper use, and eventually improve the utility and efficacy of CAD in

  19. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    Science.gov (United States)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the varieties of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  20. Reliable computer systems design and evaluatuion

    CERN Document Server

    Siewiorek, Daniel

    2014-01-01

    Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  1. Simulation study on the operating characteristics of the heat pipe for combined evaporative cooling of computer room air-conditioning system

    International Nuclear Information System (INIS)

    Han, Zongwei; Zhang, Yanqing; Meng, Xin; Liu, Qiankun; Li, Weiliang; Han, Yu; Zhang, Yanhong

    2016-01-01

    In order to improve the energy efficiency of air conditioning systems in computer rooms, this paper proposes a new concept of integrating an evaporative cooling air-conditioning system with heat pipes. Based on a computer room in Shenyang, China, a mathematical model was built to perform transient simulations of the new system. The annual dynamic performance of the new system was then compared with that of a typical conventional computer room air-conditioning system. The results showed that the new integrated air-conditioning system had better energy efficiency, i.e. a 31.31% reduction in energy consumption and a 29.49% increase in COP (coefficient of performance), due to the adoption of the evaporative condenser and the separate-type heat pipe technology. Further study also revealed that the incorporated heat pipes enabled a 36.88% decrease in the operating duration of the vapor compressor and a 53.86% reduction in the number of compressor activations, which could lead to a longer compressor lifespan. The new integrated evaporative cooling air-conditioning system was also tested in different climate regions. The energy saving of the new system was greatly affected by climate: the best effect was obtained in cold and dry regions like Shenyang, with up to 31.31% energy saving, while in warm and humid regions like Guangzhou energy savings of up to 13.66% could still be achieved. - Highlights: • A novel combined air-conditioning system of computer room is constructed. • The performance of the system and conventional system is simulated and compared. • The applicability of the system in different climate regions is investigated.
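
    The reported percentages translate into annual figures as in the short calculation below; only the 31.31% and 29.49% improvements come from the abstract, while the baseline consumption and baseline COP are hypothetical placeholders.

```python
# Worked arithmetic relating the reported improvements to annual figures. Only
# the 31.31% energy reduction and the 29.49% COP increase come from the
# abstract; the baseline consumption and baseline COP are hypothetical.

baseline_kwh = 100_000.0                    # hypothetical annual consumption, conventional system
new_kwh = baseline_kwh * (1.0 - 0.3131)     # 31.31 % reduction

baseline_cop = 3.0                          # hypothetical COP of the conventional system
new_cop = baseline_cop * (1.0 + 0.2949)     # 29.49 % increase

print(f"new system: {new_kwh:.0f} kWh/year, COP = {new_cop:.2f}")
```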

  2. Computation of water hammer protection of modernized pumping station

    Directory of Open Access Journals (Sweden)

    Himr Daniel

    2014-03-01

    Finally, a pump trip test was performed to verify that the system worked correctly. The test showed that the pressure pulsations are lower (i.e. better) than the computation predicted. This discrepancy was further analysed.
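
    For orientation, the classical Joukowsky relation sketched below gives the order of magnitude of the surge pressure that such a water-hammer computation guards against; the wave speed and velocity change are assumed values, and the paper's transient model is considerably more detailed.

```python
# Order-of-magnitude sketch using the classical Joukowsky relation
# dp = rho * a * dv; the wave speed and velocity change are assumed values.

def joukowsky_surge(density_kg_m3, wave_speed_m_s, velocity_change_m_s):
    """Pressure rise (Pa) for an instantaneous velocity change."""
    return density_kg_m3 * wave_speed_m_s * velocity_change_m_s

if __name__ == "__main__":
    dp = joukowsky_surge(density_kg_m3=1000.0,
                         wave_speed_m_s=1100.0,
                         velocity_change_m_s=1.5)
    print(f"surge = {dp / 1e5:.1f} bar")    # about 16.5 bar
```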

  3. Interactive computer-enhanced remote viewing system

    International Nuclear Information System (INIS)

    Tourtellott, J.A.; Wagner, J.F.

    1995-01-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  4. Investigation of an inventory calculation model for a solvent extraction system and the development of its computer programme - SEPHIS-J

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji; Ido, Masaru.

    1986-11-01

    In order to improve the applicability of near-real-time materials accountancy (NRTMA) to a reprocessing plant, it is necessary to develop a method for estimating the nuclear material inventory in a solvent extraction system under operation. For designing solvent extraction systems, computer codes such as SEPHIS, SOLVEX and TRANSIENTS had been used, and the accuracy of these codes in tracing operations and predicting inventories in the extraction system had been discussed. Later, improved codes, e.g. SEPHIS Mod4 and PUBG, were developed. Unfortunately, SEPHIS Mod4 was not available in countries other than the USA, and PUBG, because of the large computing time it needed, was not suitable for use on a minicomputer, which would be practical as a field computer. The authors investigated an inventory estimation model functionally compatible with PUBG and developed the corresponding computer programme, SEPHIS-J, based on the SEPHIS Mod3 code, which requires a third of the computing time of PUBG. They also validated the programme by calculating both static and dynamic states of the solvent extraction process and by comparing the results among SEPHIS-J, SEPHIS Mod3 and PUBG. Using the programme, it was shown that the inventory changes caused by changes in feed flow and concentration were not small enough to be neglected, even though those changes were within measurement error. (author)

  5. Data entry system for INIS input using a personal computer

    International Nuclear Information System (INIS)

    Ishikawa, Masashi

    1990-01-01

    Input preparation for INIS (International Nuclear Information System) has been performed by the Japan Atomic Energy Research Institute since 1970. Instead of preparing input data on worksheets filled in on typewriters, a new method is introduced with which data can be entered directly onto a diskette using personal computers. Given the popularization of personal computers and word processors, this system is easily applied to other systems, so its outline and future development are described. A shortcoming of this system is that spell-checking and data entry using authority files can hardly be performed because of limited hardware resources, and that character code conversion is needed because the code systems used by personal computers and the mainframe computer are quite different. On the other hand, improved timeliness of data entry is expected without duplication of keying. (author)
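
    The code-conversion step mentioned above can be pictured as in the sketch below; the EBCDIC code page (cp500) and the sample record are assumptions, since the abstract does not state which character codes the 1990 system actually used.

```python
# Illustrative sketch of the code-conversion step: the EBCDIC code page cp500
# and the sample record are assumptions, not details from the abstract.

record = "INIS INPUT RECORD 001"            # prepared on the PC (ASCII)
ebcdic_bytes = record.encode("cp500")       # converted for the mainframe
round_trip = ebcdic_bytes.decode("cp500")   # converted back on the PC

print(ebcdic_bytes.hex())
print(round_trip == record)                 # True
```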

  6. New trends in networking, computing, e-learning, systems sciences, and engineering

    CERN Document Server

    Sobh, Tarek

    2015-01-01

    This book includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, and Systems Sciences, and Engineering. It includes selected papers form the conference proceedings of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.  • Provides the latest in a series of books growing out of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering; • Includes chapters in the most advanced areas of Computing, Informatics, Systems Sciences, and Engineering; • Accessible to a wide range of readership, including professors, researchers, practitioners and...

  7. Study of nuclear computer code maintenance and management system

    International Nuclear Information System (INIS)

    Ryu, Chang Mo; Kim, Yeon Seung; Eom, Heung Seop; Lee, Jong Bok; Kim, Ho Joon; Choi, Young Gil; Kim, Ko Ryeo

    1989-01-01

    Software maintenance has been one of the most important problems since the late 1970s. We wish to develop a nuclear computer code system to maintain and manage KAERI's nuclear software. As part of this system, we have developed three code management programs for use on CYBER and PC systems; they are used for the systematic management of computer codes at KAERI. The first program runs on the CYBER system to rapidly provide information on nuclear codes to users. The second and third programs were implemented on the PC system for the code manager and for the management of data in the Korean language, respectively. In the requirements analysis, we defined the code, magnetic tape, manual and abstract information data. In the conceptual design, we designed the retrieval, update, and output functions. In the implementation design, we described the technical considerations for the database programs and utilities, and gave directions for the use of the databases. As a result of this research, we compiled the status of the nuclear computer codes belonging to KAERI up to September 1988. Thus, by using these three database programs, we could provide nuclear computer code information to users more rapidly. (Author)

  8. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  9. The hack attack - Increasing computer system awareness of vulnerability threats

    Science.gov (United States)

    Quann, John; Belford, Peter

    1987-01-01

    The paper discusses the electronic vulnerability of computer based systems supporting NASA Goddard Space Flight Center (GSFC) to unauthorized users. To test the security of the systems and increase security awareness, NYMA, Inc. employed computer 'hackers' to attempt to infiltrate the system(s) under controlled conditions. Penetration procedures, methods, and descriptions are detailed in the paper. The exercise increased the security consciousness of GSFC management regarding the electronic vulnerability of the system(s).

  10. Development of a Computer Writing System Based on EOG.

    Science.gov (United States)

    López, Alberto; Ferrero, Francisco; Yangüela, David; Álvarez, Constantina; Postolache, Octavian

    2017-06-26

    The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders.
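
    A minimal sketch of the denoising and classification step described above follows; the moving-average window, the amplitude thresholds and the synthetic sample values are assumptions and do not reproduce the published system's actual algorithm.

```python
# Illustrative sketch of the denoise-and-classify step: a boxcar (moving
# average) filter followed by fixed amplitude thresholds. All parameters and
# sample values are assumptions for illustration.

def moving_average(samples, window=3):
    """Simple boxcar smoothing of a 1-D signal (shorter window at the start)."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def classify(amplitude_uv, threshold_uv=100.0):
    """Map a smoothed EOG amplitude to a gaze command."""
    if amplitude_uv > threshold_uv:
        return "RIGHT"
    if amplitude_uv < -threshold_uv:
        return "LEFT"
    return "CENTRE"

if __name__ == "__main__":
    raw_uv = [5, 8, 150, 160, 155, 10, -140, -150, -145, 0]   # synthetic EOG samples
    print([classify(s) for s in moving_average(raw_uv)])
```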

  11. Development of a Computer Writing System Based on EOG

    Directory of Open Access Journals (Sweden)

    Alberto López

    2017-06-01

    Full Text Available The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders.

  12. ACCESS TO A COMPUTER SYSTEM. BETWEEN LEGAL PROVISIONS AND TECHNICAL REALITY

    Directory of Open Access Journals (Sweden)

    Maxim DOBRINOIU

    2016-05-01

    Full Text Available Nowadays, amid a rise in cybersecurity incidents and a very complex IT&C environment, national legal systems must adapt in order to properly address new and modern forms of criminality in cyberspace. Illegal access to a computer system remains one of the most important cyber-related crimes, both because of its prevalence and because it opens a door to computer data and can serve as a vehicle for other technology-related crimes. At the same time, information society services have slightly changed the IT paradigm and now represent the interface between users and systems. It is true that services rely on computer systems, but accessing services now goes beyond simply accessing computer systems as commonly understood by most legislations. The article intends to explain other aspects of access related to computer systems and services, with the purpose of advancing possible legal solutions to certain case scenarios.

  13. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    Science.gov (United States)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  14. Computer aided instrumented Charpy test applied dynamic fracture toughness evaluation system

    International Nuclear Information System (INIS)

    Kobayashi, Toshiro; Niinomi, Mitsuo

    1986-01-01

    A microcomputer aided data treatment system and a personal computer aided data analysis system were applied to the traditional instrumented Charpy impact test system. The analysis of Charpy absorbed energy (E_i, E_p, E_t) and load (P_y, P_m), and the evaluation of dynamic toughness through the whole fracture process, i.e. J_Id, the J-R curve and T_mat, were examined using the newly developed computer aided instrumented Charpy impact test system. E_i, E_p, E_t, P_y and P_m were effectively analyzed using the moving average method and printed out automatically by the microcomputer aided data treatment system. J_Id, the J-R curve and T_mat could be measured by the stop block test method. Then, J_Id, the J-R curve and T_mat were effectively estimated using the compliance changing rate method and the key curve method on the load versus load-point displacement curve of a single fatigue cracked specimen by the personal computer aided data analysis system. (author)
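
    The energy quantities named above can be pictured with the sketch below, which integrates a load-displacement record by the trapezoidal rule and splits it at maximum load; the synthetic record and the split rule are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch: absorbed-energy components from an instrumented record
# via trapezoidal integration of load versus displacement, split at maximum
# load. The record below is synthetic.

def trapz(y, x):
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0 for i in range(len(y) - 1))

def charpy_energies(load_kN, displacement_mm):
    """Return (E_i, E_p, E_t): initiation, propagation and total energy in joules
    (1 kN*mm = 1 J)."""
    i_max = load_kN.index(max(load_kN))
    e_i = trapz(load_kN[:i_max + 1], displacement_mm[:i_max + 1])
    e_t = trapz(load_kN, displacement_mm)
    return e_i, e_t - e_i, e_t

if __name__ == "__main__":
    disp = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]     # mm, synthetic
    load = [0.0, 6.0, 12.0, 15.0, 11.0, 4.0, 0.0]  # kN, synthetic
    print(charpy_energies(load, disp))
```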

  15. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    [Abstract garbled by OCR in extraction; recoverable details: technical document TD 5932, Naval Underwater Systems Center, Newport, RI, "A Distributed Computing Network for Real-Time Systems", Gordon E. Morson, November 1980.]

  16. A portable grid-enabled computing system for a nuclear material study

    International Nuclear Information System (INIS)

    Tsujita, Yuichi; Arima, Tatsumi; Takekawa, Takayuki; Suzuki, Yoshio

    2010-01-01

    We have built a portable grid-enabled computing system specialized for our molecular dynamics (MD) simulation program to study Pu materials easily. The experimental approach to revealing the properties of Pu materials is often accompanied by difficulties such as the radiotoxicity of actinides. Since a computational approach reveals new aspects to researchers without such radioactive facilities, we address an MD computation. In order to obtain more realistic results for, e.g., the melting point or thermal conductivity, large-scale parallel computations are needed. Most application users who do not have supercomputers at their institutes must use a remote supercomputer. For such users, we have developed a portable and secured grid-enabled computing system to utilize the grid computing infrastructure provided by the Information Technology Based Laboratory (ITBL). This system enables us to access remote supercomputers in the ITBL system seamlessly from a client PC through its graphical user interface (GUI). Typically, it enables seamless file access through the GUI. Furthermore, monitoring of standard output and standard error is available to follow the progress of an executed program. Since the system provides functionality useful for parallel computing on a remote supercomputer, application users can concentrate on their research. (author)

  17. Competitiveness in organizational integrated computer system project management

    Directory of Open Access Journals (Sweden)

    Zenovic GHERASIM

    2010-06-01

    Full Text Available Organizational integrated computer system project management aims at achieving competitiveness through the unitary, connected and personalised treatment of the requirements for this type of project, along with the adequate application of all the basic management, administration and project planning principles, as well as of the basic concepts of organisational information management development. The paper presents some aspects of the competitiveness of organizational computer system project management, with specific reference to the projects of some Romanian companies.

  18. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
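
    For orientation, the sketch below performs the classical counterpart of the task realized in the experiment: solving a 2x2 linear system. The matrix and input vector are arbitrary illustrative values; the quantum algorithm returns the solution encoded (up to normalization) in the amplitudes of a quantum state.

```python
# Classical reference computation for the experiment's task: solve a 2x2
# linear system A x = b. Values of A and b are arbitrary illustrative choices.
import numpy as np

A = np.array([[1.5, 0.5],
              [0.5, 1.5]])            # Hermitian, as the algorithm assumes
b = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)
x_normalized = x / np.linalg.norm(x)  # what the amplitude-encoded output state represents
print("x =", x, "normalized:", x_normalized)
```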

  19. The Rabi Oscillation in Subdynamic System for Quantum Computing

    Directory of Open Access Journals (Sweden)

    Bi Qiao

    2015-01-01

    Full Text Available A quantum computation scheme for the Rabi oscillation based on quantum dots in a subdynamic system is presented. The working states of the original Rabi oscillation are transformed to the eigenvectors of the subdynamic system. The dissipation and decoherence of the system then appear only as changes of the eigenvalues, i.e. as phase errors, since the eigenvectors are fixed. This makes controlling both dissipation and decoherence easier, as only the relevant phase errors need to be corrected. This method can be extended to general quantum computation systems.

  20. Terrace Layout Using a Computer Assisted System

    Science.gov (United States)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  1. Computer networks in future accelerator control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1977-03-01

    Some findings of a study concerning a computer based control and monitoring system for the proposed ISABELLE Intersecting Storage Accelerator are presented. Requirements for development and implementation of such a system are discussed. An architecture is proposed where the system components are partitioned along functional lines. Implementation of some conceptually significant components is reviewed

  2. A computer-aided surface roughness measurement system

    International Nuclear Information System (INIS)

    Hughes, F.J.; Schankula, M.H.

    1983-11-01

    A diamond stylus profilometer with a computer-based data acquisition/analysis system is being used to characterize the surfaces of reactor components and materials, and to examine the effects of surface topography on thermal contact conductance. The current system is described; measurement problems and system development are discussed in general terms, and possible future improvements are outlined.

  3. An intelligent multi-media human-computer dialogue system

    Science.gov (United States)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  4. Computer aided detection system for lung cancer using computer tomography scans

    Science.gov (United States)

    Mahesh, Shanthi; Rakesh, Spoorthi; Patil, Vidya C.

    2018-04-01

    Lung cancer can be defined as uncontrolled cell growth in the tissues of the lung. Detecting lung cancer at an early stage can be the key to its cure. In this work, non-invasive methods for assisting in nodule detection are studied, and a Computer Aided Diagnosis (CAD) system for early detection of lung cancer nodules from Computed Tomography (CT) images is presented. A CAD system helps to improve the diagnostic performance of radiologists in their image interpretation. The main aim of this work is to develop a CAD system for finding lung cancer using lung CT images and classifying each nodule as benign or malignant. For classifying cancer cells, an SVM classifier is used. Image processing techniques are used to de-noise and enhance the images, and segmentation and edge detection are used to extract the area, perimeter and shape of a nodule. The core factors of this research are image quality and accuracy.
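
    A minimal sketch of the classification stage described above, assuming hand-crafted nodule features (area, perimeter, a shape descriptor) and an SVM with an RBF kernel. The feature values and labels are fabricated placeholders to show the workflow, not data from the cited study.

```python
# Sketch of the SVM classification stage on hand-crafted nodule features.
# Feature rows are [area (px^2), perimeter (px), circularity]; labels are
# 0 = benign, 1 = malignant. All values are fabricated for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.array([[120, 40, 0.94], [300, 75, 0.61], [150, 45, 0.90],
              [420, 95, 0.52], [110, 38, 0.96], [380, 90, 0.55]])
y = np.array([0, 1, 0, 1, 0, 1])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[200, 55, 0.80]]))   # classify a new (hypothetical) nodule
```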

  5. Maze learning by a hybrid brain-computer system.

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-13

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components enable a novel hybrid brain-computer system, i.e., ratbots, to exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  7. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  8. The computational future for climate and Earth system models: on the path to petaflop and beyond.

    Science.gov (United States)

    Washington, Warren M; Buja, Lawrence; Craig, Anthony

    2009-03-13

    The development of the climate and Earth system models has had a long history, starting with the building of individual atmospheric, ocean, sea ice, land vegetation, biogeochemical, glacial and ecological model components. The early researchers were much aware of the long-term goal of building the Earth system models that would go beyond what is usually included in the climate models by adding interactive biogeochemical interactions. In the early days, the progress was limited by computer capability, as well as by our knowledge of the physical and chemical processes. Over the last few decades, there has been much improved knowledge, better observations for validation and more powerful supercomputer systems that are increasingly meeting the new challenges of comprehensive models. Some of the climate model history will be presented, along with some of the successes and difficulties encountered with present-day supercomputer systems.

  9. The CESR computer control system

    International Nuclear Information System (INIS)

    Helmke, R.G.; Rice, D.H.; Strohman, C.

    1986-01-01

    The control system for the Cornell Electron Storage Ring (CESR) has functioned satisfactorily since its implementation in 1979. Key characteristics are fast tuning response, almost exclusive use of FORTRAN as a programming language, and efficient coordinated ramping of CESR guide field elements. This original system has not, however, been able to keep pace with the increasing complexity of operation of CESR associated with performance upgrades. Limitations in address space, expandability, access to data system-wide, and program development impediments have prompted the undertaking of a major upgrade. The system under development accommodates up to 8 VAX computers for all applications programs. The database and communications semaphores reside in a shared multi-ported memory, and each hardware interface bus is controlled by a dedicated 32-bit microprocessor in a VME-based system. (orig.)

  10. Overview of the ATLAS distributed computing system

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total, more than 300 PB of data is distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever-growing LHC luminosity in future runs, new developments are underway to use opportunistic resources such as HPCs even more efficiently and to utilize new technologies. This presentation will review and explain the outline and the performance of the ATLAS distributed computing system and give an outlook to new workflow and data management ideas for the beginning of the LHC Run 3.

  11. Innovations and advances in computing, informatics, systems sciences, networking and engineering

    CERN Document Server

    Elleithy, Khaled

    2015-01-01

    Innovations and Advances in Computing, Informatics, Systems Sciences, Networking and Engineering  This book includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, and Systems Sciences, and Engineering. It includes selected papers from the conference proceedings of the Eighth and some selected papers of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2012 & CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.  ·       Provides the latest in a series of books growing out of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering; ·       Includes chapters in the most a...

  12. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions.

  13. Current state and future direction of computer systems at NASA Langley Research Center

    Science.gov (United States)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  14. Computing the optimal path in stochastic dynamical systems

    International Nuclear Information System (INIS)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-01-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  15. EDDYMULT: a computing system for solving eddy current problems in a multi-torus system

    International Nuclear Information System (INIS)

    Nakamura, Yukiharu; Ozeki, Takahisa

    1989-03-01

    A new computing system EDDYMULT based on the finite element circuit method has been developed to solve actual eddy current problems in a multi-torus system, which consists of many torus-conductors and various kinds of axisymmetric poloidal field coils. The EDDYMULT computing system can deal three-dimensionally with the modal decomposition of eddy current in a multi-torus system, the transient phenomena of eddy current distributions and the resultant magnetic field. Therefore, users can apply the computing system to the solution of the eddy current problems in a tokamak fusion device, such as the design of poloidal field coil power supplies, the mechanical stress design of the intensive electromagnetic loading on device components and the control analysis of plasma position. The present report gives a detailed description of the EDDYMULT system as a user's manual: 1) theory, 2) structure of the code system, 3) input description, 4) problem restrictions, 5) description of the subroutines, etc. (author)

  16. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory...... attacker remain somehow undefined and still under extensive investigation. This Thesis explores the nature of the ubiquitous attacker with a focus on how she interacts with the physical world, and it defines a model that captures the abilities of the attacker. Furthermore a quantitative implementation

  17. Adaptive resource allocation scheme using sliding window subchannel gain computation: context of OFDMA wireless mobiles systems

    International Nuclear Information System (INIS)

    Khelifa, F.; Samet, A.; Ben Hassen, W.; Afif, M.

    2011-01-01

    Multiuser diversity combined with Orthogonal Frequency Division Multiple Access (OFDMA) is a promising technique for achieving high downlink capacities in new generations of cellular and wireless network systems. The total capacity of an OFDMA-based system is maximized when each subchannel is assigned to the mobile station with the best channel-to-noise ratio for that subchannel, with power uniformly distributed among all subchannels. A contiguous method for subchannel construction is adopted in the IEEE 802.16m standard in order to reduce OFDMA system complexity. In this context, a new subchannel gain computation method can contribute, jointly with optimal subchannel assignment, to maximizing total system capacity. In this paper, two new methods are proposed in order to achieve a better trade-off between fairness and efficient use of resources. Numerical results show that the proposed algorithms provide low complexity, higher total system capacity and fairness among users compared to other recent methods.
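
    The capacity-maximizing baseline referred to above can be sketched as follows: each subchannel is assigned to the user with the highest channel-to-noise ratio, and transmit power is split uniformly across subchannels. The CNR matrix is random placeholder data; the proposed sliding-window gain computation and the fairness-aware variants are not reproduced here.

```python
# Baseline allocation: each subchannel goes to the user with the best CNR,
# with uniform power. The CNR matrix is random placeholder data.
import numpy as np

n_users, n_subchannels, total_power = 4, 16, 1.0
rng = np.random.default_rng(0)
cnr = rng.exponential(scale=1.0, size=(n_users, n_subchannels))   # channel-to-noise ratios

assignment = cnr.argmax(axis=0)                 # best user for each subchannel
power = total_power / n_subchannels             # uniform power allocation
rates = np.log2(1.0 + power * cnr[assignment, np.arange(n_subchannels)])

print("total capacity (bit/s/Hz):", rates.sum())
print("subchannels per user:", np.bincount(assignment, minlength=n_users))
```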

  18. The use of CAMAC with small computers in the TRIUMF control system

    International Nuclear Information System (INIS)

    Gurd, D.P.; Heywood, D.R.; Johnson, R.R.

    1975-08-01

    The TRIUMF control system uses several small computers. This allows tasks to be partitioned in hardware rather than by a complex operating system. This flexibility was especially convenient during the developmental stages of TRIUMF. The multi-mini approach also improves mean time to repair. All control system computers are to be interfaced simultaneously to a single CAMAC system of 35 crates on seven branches. Other computers, belonging to separate systems, communicate with the control system via parallel CAMAC-to-CAMAC links. Modularity at both the computer and controller levels, combined with CAMAC multisourcing, has allowed the introduction of considerable redundancy, thereby increasing overall system reliability. (author)

  19. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of having a hybrid system called dirac. To evaluate the dirac scheduling for high throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network, and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we compare with a real batch system and obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and describes well the behaviour of a large-scale system. Thus we can study the scheduling of our system called dirac in a high throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralize...
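
    The analytical benchmark mentioned above can be illustrated with the Erlang-C formula for the mean response time of an M/M/c queue (here c = 4). The arrival and service rates below are assumptions for illustration, not the parameters used to validate the simulator.

```python
# Mean response time of an M/M/c queue via the Erlang-C formula, the kind of
# analytical result the simulator was checked against. Rates are assumed.
from math import factorial

def mmc_response_time(lam, mu, c):
    a = lam / mu                        # offered load
    rho = a / c                         # utilization, must be < 1
    p_wait = (a**c / factorial(c)) / (
        (1 - rho) * sum(a**k / factorial(k) for k in range(c)) + a**c / factorial(c)
    )
    w_queue = p_wait / (c * mu - lam)   # mean waiting time in the queue
    return w_queue + 1.0 / mu           # mean response time = waiting + service

print(mmc_response_time(lam=3.0, mu=1.0, c=4))
```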

  20. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    International Nuclear Information System (INIS)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo

    2011-01-01

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of a BCI in decoding user intentions, it is necessary to improve the feature extraction and classification techniques. In this article, the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. An ensemble-based system can theoretically achieve better classification results than its individual counterpart, depending on the individual classifier generation algorithm and the procedure for combining their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the application to BCI, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is better.
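
    A minimal sketch of the kind of configuration evaluated in the article: a bagging ensemble of three LDA classifiers. The EEG feature data are replaced by a synthetic two-class problem; apart from the ensemble size, every parameter is an assumption.

```python
# Bagging ensemble of three LDA classifiers on a synthetic two-class problem,
# standing in for the EEG feature data of the cited study.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# Note: scikit-learn < 1.2 uses the keyword 'base_estimator' instead of 'estimator'.
ensemble = BaggingClassifier(estimator=LinearDiscriminantAnalysis(),
                             n_estimators=3, random_state=0)
print("mean CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```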

  1. An Interactive Computer-Based Circulation System: Design and Development

    Directory of Open Access Journals (Sweden)

    James S. Aagaard

    1972-03-01

    Full Text Available An on-line computer-based circulation control system has been installed at the Northwestern University library. Features of the system include self-service book charge, remote terminal inquiry and update, and automatic production of notices for call-ins and books available. Fine notices are also prepared daily and overdue notices weekly. Important considerations in the design of the system were to minimize costs of operation and to include technical services functions eventually. The system operates on a relatively small computer in a multiprogrammed mode.

  2. Design layout for gas monitoring system II, GMS-2, computer system

    International Nuclear Information System (INIS)

    Vo, V.; Philipp, B.L.; Manke, M.P.

    1995-01-01

    This document provides a general overview of the computer systems software that perform the data acquisition and control for the 241-SY-101 Gas Monitoring System II (GMS-2). It outlines the system layout, and contains descriptions of components and the functions they perform. The GMS-2 system was designed and implemented by Los Alamos National Laboratory and supplied to Westinghouse Hanford Company

  3. Application of ubiquitous computing in personal health monitoring systems.

    Science.gov (United States)

    Kunze, C; Grossmann, U; Stork, W; Müller-Glaser, K D

    2002-01-01

    A possibility to significantly reduce the costs of public health systems is to increasingly use information technology. The Laboratory for Information Processing Technology (ITIV) at the University of Karlsruhe is developing a personal health monitoring system, which should improve health care and at the same time reduce costs by combining micro-technological smart sensors with personalized, mobile computing systems. In this paper we present how ubiquitous computing theory can be applied in the health-care domain.

  4. Computer Aided Design System for Developing Musical Fountain Programs

    Institute of Scientific and Technical Information of China (English)

    刘丹; 张乃尧; 朱汉城

    2003-01-01

    A computer aided design system for developing musical fountain programs was developed with multiple functions such as intelligent design, 3-D animation, manual modification and synchronized motion to make the development process more efficient. The system first analyzed the music form and sentiment using many basic features of the music to select a basic fountain program. Then, this program is simulated with 3-D animation and modified manually to achieve the desired results. Finally, the program is transformed to a computer control program to control the musical fountain in time with the music. A prototype system for the musical fountain was also developed. It was tested with many styles of music and users were quite satisfied with its performance. By integrating various functions, the proposed computer aided design system for developing musical fountain programs greatly simplified the design of the musical fountain programs.

  5. Semantically optiMize the dAta seRvice operaTion (SMART) system for better data discovery and access

    Science.gov (United States)

    Yang, C.; Huang, T.; Armstrong, E. M.; Moroni, D. F.; Liu, K.; Gui, Z.

    2013-12-01

    Abstract: We present a Semantically optiMize the dAta seRvice operaTion (SMART) system for better data discovery and access across the NASA data systems, the Global Earth Observation System of Systems (GEOSS) Clearinghouse and Data.gov, to help scientists select Earth observation data that better fit their needs, in four aspects: 1. Integrating and interfacing the SMART system to include the functionality of a) semantic reasoning based on Jena, an open source semantic reasoning engine, b) semantic similarity calculation, c) recommendation based on spatiotemporal, semantic, and user workflow patterns, and d) ranking results based on similarity between search terms and data ontology. 2. Collaborating with data user communities to a) capture science data ontology and record relevant ontology triple stores, b) analyze and mine user search and download patterns, c) integrate SMART into metadata-centric discovery systems for community-wide usage and feedback, and d) customize the data discovery, search and access user interface to include the ranked results, recommendation components, and semantics-based navigation. 3. Laying the groundwork to interface the SMART system with other data search and discovery systems as an open source data search and discovery solution. The SMART system leverages NASA, GEO and FGDC data discovery, search and access for the Earth science community by enabling scientists to readily discover and access data appropriate to their endeavors, increasing the efficiency of data exploration and decreasing the time that scientists must spend on searching, downloading, and processing the datasets most applicable to their research. By incorporating the SMART system, it is expected that the time devoted to discovering the most applicable dataset will be substantially reduced, thereby reducing the number of user inquiries and likewise reducing the time and resources expended by a data center in addressing user inquiries. Keywords: EarthCube; ECHO
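
    As a stand-in for the similarity-based ranking function listed above, the sketch below ranks hypothetical dataset descriptions against a query with TF-IDF cosine similarity. A real deployment would instead use the ontology triple stores and a reasoner such as Jena; the dataset titles here are invented.

```python
# TF-IDF cosine-similarity ranking of (invented) dataset descriptions against
# a query, standing in for the ontology-based similarity and ranking functions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

datasets = [
    "sea surface temperature daily global grid",
    "ocean surface wind speed from scatterometer",
    "global precipitation monthly climatology",
]
query = ["daily sea surface temperature"]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(datasets)
scores = cosine_similarity(vectorizer.transform(query), doc_matrix).ravel()

for score, title in sorted(zip(scores, datasets), reverse=True):
    print(f"{score:.2f}  {title}")
```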

  6. Requirements for SSC central computing staffing (conceptual)

    International Nuclear Information System (INIS)

    Pfister, J.

    1985-01-01

    Given a computation center with ~10,000 MIPS supporting ~1,000 users, what are the staffing requirements? The attempt in this paper is to list the functions and staff size required in a central computing or centrally supported computing complex. The organization assumes that, although considerable computing power would exist (mostly for online use) in the four interaction regions (IR), there are functions/capabilities better performed outside the IR and, in this model, at a ''central computing facility.'' What follows is one staffing approach, not necessarily optimal, with certain assumptions about the numbers of computer systems, media, networks and system controls; that is, one would get the best technology available. Thus, it is speculation about what the technology may bring and what it takes to operate it. From an end-user support standpoint it is less clear, given the geography of an SSC, where the consulting support should be located and what it should look like

  7. Natural computing for mechanical systems research: A tutorial overview

    Science.gov (United States)

    Worden, Keith; Staszewski, Wieslaw J.; Hensman, James J.

    2011-01-01

    A great many computational algorithms developed over the past half-century have been motivated or suggested by biological systems or processes, the most well-known being the artificial neural networks. These algorithms are commonly grouped together under the terms soft or natural computing. A property shared by most natural computing algorithms is that they allow exploration of, or learning from, data. This property has proved extremely valuable in the solution of many diverse problems in science and engineering. The current paper is intended as a tutorial overview of the basic theory of some of the most common methods of natural computing as they are applied in the context of mechanical systems research. The application of some of the main algorithms is illustrated using case studies. The paper also attempts to give some indication as to which of the algorithms emerging now from the machine learning community are likely to be important for mechanical systems research in the future.

  8. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, H. [Institute for Molecular Science, Okazaki, Aichi (Japan)]

    1982-06-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience.

  9. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    International Nuclear Information System (INIS)

    Kashiwagi, H.

    1982-01-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience. (orig.)

  10. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  11. Computer Algebra Systems in Undergraduate Instruction.

    Science.gov (United States)

    Small, Don; And Others

    1986-01-01

    Computer algebra systems (such as MACSYMA and muMath) can carry out many of the operations of calculus, linear algebra, and differential equations. Their use in sketching graphs of rational functions and in other topics is discussed. (MNS)
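
    The systems named in the abstract (MACSYMA, muMath) are of their era; the same kinds of symbolic operations can be illustrated today with SymPy. This is only an analogy to show what "operations of calculus, linear algebra, and differential equations" means in a computer algebra system.

```python
# Symbolic calculus, linear algebra and a differential equation in SymPy,
# as an analogy to the operations such systems perform.
import sympy as sp

x = sp.symbols("x")

print(sp.simplify((x**2 - 1) / (x - 1)))        # algebraic simplification -> x + 1
print(sp.diff(sp.sin(x) * sp.exp(x), x))        # differentiation
print(sp.integrate(1 / (1 + x**2), x))          # integration -> atan(x)
print(sp.Matrix([[1, 2], [3, 4]]).inv())        # linear algebra: matrix inverse

y = sp.Function("y")
print(sp.dsolve(y(x).diff(x) - y(x), y(x)))     # ODE -> Eq(y(x), C1*exp(x))
```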

  12. The Bucket System – A computer mediated signaling system for group improvisation

    DEFF Research Database (Denmark)

    Dahlstedt, Palle; Nilsson, Per Anders; Robair, Gino

    2015-01-01

    The Bucket System is a new system for computer-mediated ensemble improvisation, designed by improvisers for improvisers. Coming from a tradition of structured free ensemble improvisation practices (comprovisation), influenced by post-WW2 experimental music practices, it is a signaling system...

  13. Honeywell Modular Automation System Computer Software Documentation

    International Nuclear Information System (INIS)

    CUNNINGHAM, L.T.

    1999-01-01

    This document provides computer software documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control the new thermal stabilization furnaces in HA-211 and the vertical denitration calciner in HC-230C-2

  14. MENTAL SHIFT TOWARDS SYSTEMS THINKING SKILLS IN COMPUTER SCIENCE

    Directory of Open Access Journals (Sweden)

    MILDEOVÁ, Stanislava

    2012-03-01

    Full Text Available When seeking solutions to current problems in the field of computer science – and other fields – we encounter situations where traditional approaches no longer bring the desired results. Our cognitive skills also limit the implementation of reliable mental simulation within the basic set of relations. The world around us is becoming more complex and mutually interdependent, and this is reflected in the demands on computer support. Thus, today's education and science in the field of computer science, and all other disciplines and areas of life, need to address the issue of a paradigm shift, which is generally accepted by experts. The goal of the paper is to present systems thinking, which facilitates and extends the understanding of the world through relations and linkages. Moreover, the paper introduces the essence of systems thinking and the possibilities of achieving a mental shift toward systems thinking skills. At the same time, the link between systems thinking and functional literacy is presented. We adopted the "Bathtub Test" from the variety of systems thinking tests that allow people to assess their understanding of basic systemic concepts, in order to assess the level of systems thinking. University students (potential information managers) were the subjects of the examination of systems thinking, which was conducted over a longer time period and whose aim was to determine the status of systems thinking. The paper demonstrates that some pedagogical concepts and activities, in our case the subject of System Dynamics, lead to the appropriate integration of systems thinking in education. There is some evidence that basic knowledge of system dynamics and systems thinking principles will affect students, and that their thinking will contribute to an improved approach to solving problems of computer science both in theory and practice.

  15. Interactive Rhythm Learning System by Combining Tablet Computers and Robots

    Directory of Open Access Journals (Sweden)

    Chien-Hsing Chou

    2017-03-01

    Full Text Available This study proposes a percussion learning device that combines tablet computers and robots. This device comprises two systems: a rhythm teaching system, in which users can compose and practice rhythms by using a tablet computer, and a robot performance system. First, teachers compose the rhythm training contents on the tablet computer. Then, the learners practice these percussion exercises by using the tablet computer and a small drum set. The teaching system provides a new and user-friendly score editing interface for composing a rhythm exercise. It also provides a rhythm rating function to facilitate percussion training for children and improve the stability of rhythmic beating. To encourage children to practice percussion exercises, a robotic performance system is used to interact with the children; this system can perform percussion exercises for students to listen to and then help them practice the exercise. This interaction enhances children’s interest and motivation to learn and practice rhythm exercises. The results of experimental course and field trials reveal that the proposed system not only increases students’ interest and efficiency in learning but also helps them in understanding musical rhythms through interaction and composing simple rhythms.

  16. Status of integration of small computers into NDE systems

    International Nuclear Information System (INIS)

    Dau, G.J.; Behravesh, M.M.

    1988-01-01

    Introduction of computers in nondestructive evaluations (NDE) has enabled data acquisition devices to provide a more thorough and complete coverage in the scanning process, and has aided human inspectors in their data analysis and decision making efforts. The price and size/weight of small computers, coupled with recent increases in processing and storage capacity, have made small personal computers (PC's) the most viable platform for NDE equipment. Several NDE systems using minicomputers and newer PC-based systems, capable of automatic data acquisition, and knowledge-based analysis of the test data, have been field tested in the nuclear power plant environment and are currently available through commercial sources. While computers have been in common use for several NDE methods during the last few years, their greatest impact, however, has been on ultrasonic testing. This paper discusses the evolution of small computers and their integration into the ultrasonic testing process

  17. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Distributed computing is becoming increasingly important in our daily life. This is because it enables the people who use it to share information more rapidly and increases their productivity. A major characteristic feature of distributed computing is the explicit representation of process logic within a communication system, ...

  18. Securing Cloud Computing from Different Attacks Using Intrusion Detection Systems

    Directory of Open Access Journals (Sweden)

    Omar Achbarou

    2017-03-01

    Full Text Available Cloud computing is a new way of integrating a set of old technologies to implement a new paradigm that creates an avenue for users to have access to shared and configurable resources through the internet on demand. This system has many characteristics in common with distributed systems; hence, cloud computing also uses the features of networking. Thus security is the biggest issue for this system, because cloud computing services are based on sharing. A cloud computing environment therefore requires intrusion detection systems (IDSs) for protecting each machine against attacks. The aim of this work is to present a classification of attacks threatening the availability, confidentiality and integrity of cloud resources and services. Furthermore, we provide a literature review of attacks related to the identified categories. Additionally, this paper also introduces related intrusion detection models to identify and prevent these types of attacks.

  19. Computational Fluid and Particle Dynamics in the Human Respiratory System

    CERN Document Server

    Tu, Jiyuan; Ahmadi, Goodarz

    2013-01-01

    Traditional research methodologies in the human respiratory system have always been challenging due to their invasive nature. Recent advances in medical imaging and computational fluid dynamics (CFD) have accelerated this research. This book compiles and details recent advances in the modelling of the respiratory system for researchers, engineers, scientists, and health practitioners. It breaks down the complexities of this field and provides both students and scientists with an introduction and starting point to the physiology of the respiratory system, fluid dynamics and advanced CFD modeling tools. In addition to a brief introduction to the physics of the respiratory system and an overview of computational methods, the book contains best-practice guidelines for establishing high-quality computational models and simulations. Inspiration for new simulations can be gained through innovative case studies as well as hands-on practice using pre-made computational code. Last but not least, students and researcher...

  20. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    Science.gov (United States)

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)

  1. New computer system for the Japan Tier-2 center

    CERN Multimedia

    Hiroyuki Matsunaga

    2007-01-01

    The ICEPP (International Center for Elementary Particle Physics) of the University of Tokyo has been operating an LCG Tier-2 center dedicated to the ATLAS experiment, and is going to switch over to the new production system which has been recently installed. The system will be of great help to the exciting physics analyses for coming years. The new computer system includes brand-new blade servers, RAID disks, a tape library system and Ethernet switches. The blade server is DELL PowerEdge 1955 which contains two Intel dual-core Xeon (WoodCrest) CPUs running at 3GHz, and a total of 650 servers will be used as compute nodes. Each of the RAID disks is configured to be RAID-6 with 16 Serial ATA HDDs. The equipment as well as the cooling system is placed in a new large computer room, and both are hooked up to UPS (uninterruptible power supply) units for stable operation. As a whole, the system has been built with redundant configuration in a cost-effective way. The next major upgrade will take place in thre...

  2. Production Management System for AMS Computing Centres

    Science.gov (United States)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.

    2017-10-01

    The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production), as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquisition, submission, monitoring, transfer, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model, and is implemented in the scripting languages Python and Perl with the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storage, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
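
    A toy deterministic-finite-automaton view of a production job's life cycle, in the spirit of the DFA model the system is said to be based on. The state names and allowed transitions are hypothetical, not the actual states used by the AMS production manager.

```python
# Toy DFA for a production job's life cycle; states and events are hypothetical.
TRANSITIONS = {
    ("acquired",  "submit"):   "submitted",
    ("submitted", "start"):    "running",
    ("running",   "finish"):   "validated",
    ("running",   "fail"):     "failed",
    ("failed",    "resubmit"): "submitted",
    ("validated", "transfer"): "archived",
}

def advance(state, event):
    """Deterministic transition; unknown (state, event) pairs are rejected."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

state = "acquired"
for event in ["submit", "start", "finish", "transfer"]:
    state = advance(state, event)
print(state)   # -> archived
```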

  3. 14 CFR 417.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    14 CFR 417.123, Computing systems and software. (a) A launch operator must document a system safety process that identifies the... (b) A launch operator must identify all safety-critical functions associated with...

  4. Slab cooling system design using computer simulation

    NARCIS (Netherlands)

    Lain, M.; Zmrhal, V.; Drkal, F.; Hensen, J.L.M.

    2007-01-01

    For a new technical library building in Prague, computer simulations were carried out to help design the slab cooling system and optimize the capacity of the chillers. The paper presents the concept of the new technical library HVAC system, the model of the building, and the results of the energy simulations for

  5. Static Load Balancing Algorithms In Cloud Computing Challenges & Solutions

    Directory of Open Access Journals (Sweden)

    Nadeem Shah

    2015-08-01

    Full Text Available Cloud computing provides on-demand hosted computing resources and services over the Internet on a pay-per-use basis. It is currently becoming the favored method of communication and computation over scalable networks due to numerous attractive attributes such as high availability, scalability, fault tolerance, simplicity of management and low cost of ownership. Due to the huge demand for cloud computing, efficient load balancing becomes critical to ensure that computational tasks are evenly distributed across servers to prevent bottlenecks. The aim of this review paper is to understand the current challenges in cloud computing, primarily in cloud load balancing using static algorithms, and to find gaps to bridge for more efficient static cloud load balancing in the future. We believe the ideas suggested as new solutions will allow researchers to design better algorithms for better functionality and improved user experience in simple cloud systems. This could assist small businesses that cannot afford infrastructure that supports complex and dynamic load balancing algorithms.
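
    To make "static load balancing" concrete, the sketch below implements one classic static policy, round-robin, of the kind such reviews survey. Task names and server count are invented; as the paper discusses, static policies ignore the servers' current load, which is their main limitation.

```python
# Round-robin, a classic static load-balancing policy: tasks are dealt to
# servers in a fixed cycle, regardless of each server's current load.
from itertools import cycle

def round_robin(tasks, n_servers):
    assignment = {server: [] for server in range(n_servers)}
    for server, task in zip(cycle(range(n_servers)), tasks):
        assignment[server].append(task)
    return assignment

tasks = [f"task-{i}" for i in range(10)]
for server, queued in round_robin(tasks, n_servers=3).items():
    print(server, queued)
```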

  6. The computer aided education and training system for accident management

    International Nuclear Information System (INIS)

    Yoneyama, Mitsuru; Masuda, Takahiro; Kubota, Ryuji; Fujiwara, Tadashi; Sakuma, Hitoshi

    2000-01-01

    Under severe accident conditions at a nuclear power plant, plant operators and technical support center (TSC) staff will be under a great amount of stress. Therefore, the individuals responsible for managing the plant should deepen their understanding of accident management and operations. Moreover, it is also important to train in ordinary times, so that they can carry out accident management operations effectively during severe accidents. Therefore, an education and training system which works on personal computers was developed by the Japanese BWR group (Tokyo Electric Power Co., Inc., Tohoku Electric Power Co., Inc., Chubu Electric Power Co., Inc., Hokuriku Electric Power Co., Inc., Chugoku Electric Power Co., Inc., Japan Atomic Power Co., Inc.) and Hitachi, Ltd. The education and training system is composed of two systems. One is a computer aided instruction (CAI) education system and the other is an education and training system with a computer simulation. Both systems are designed to execute on the MS-Windows(R) platform of personal computers. These systems provide plant operators and technical support center staff with an effective education and training tool for accident management. TEPCO used the simulation system for an emergency exercise assuming the occurrence of a hypothetical severe accident, and performed an effective exercise in March 2000. (author)

  7. Better data for teachers, better data for learners, better patient care: college-wide assessment at Michigan State University's College of Human Medicine

    Directory of Open Access Journals (Sweden)

    Aron C. Sousa

    2011-01-01

    Full Text Available When our school organized the curriculum around a core set of medical student competencies in 2004, it was clear that more numerous and more varied student assessments were needed. To oversee a systematic approach to the assessment of medical student competencies, the Office of College-wide Assessment was established, led by the Associate Dean of College-wide Assessment. The mission of the Office is to ‘facilitate the development of a seamless assessment system that drives a nimble, competency-based curriculum across the spectrum of our educational enterprise.’ The Associate Dean coordinates educational initiatives, developing partnerships to solve common problems, and enhancing synergy within the College. The Office also works to establish data collection and feedback loops to guide rational intervention and continuous curricular improvement. Aside from feedback, implementing a systems approach to assessment provides a means for identifying performance gaps, promotes continuity from undergraduate medical education to practice, and offers a rationale for some assessments to be located outside of courses and clerkships. Assessment system design, data analysis, and feedback require leadership, a cooperative faculty team with medical education expertise, and institutional support. The guiding principle is ‘Better Data for Teachers, Better Data for Learners, Better Patient Care.’ Better data empowers faculty to become change agents, learners to create evidence-based improvement plans and increases accountability to our most important stakeholders, our patients.

  8. A computer-aided continuous assessment system

    Directory of Open Access Journals (Sweden)

    B. C.H. Turton

    1996-12-01

    Full Text Available Universities within the United Kingdom have had to cope with a massive expansion in undergraduate student numbers over the last five years (Committee of Scottish University Principals, 1993; CVCP Briefing Note, 1994). In addition, there has been a move towards modularization and a closer monitoring of a student's progress throughout the year. Since the price/performance ratio of computer systems has continued to improve, Computer-Assisted Learning (CAL) has become an attractive option (Fry, 1990; Benford et al, 1994; Laurillard et al, 1994). To this end, the Universities Funding Council (UFC) has funded the Teaching and Learning Technology Programme (TLTP). However, universities also have a duty to assess as well as to teach. This paper describes a Computer-Aided Assessment (CAA) system capable of assisting in grading students and providing feedback. In this particular case, a continuously assessed course (Low-Level Languages) of over 100 students is considered. Typically, three man-days are required to mark one assessed piece of coursework from the students in this class. Any feedback on how the questions were dealt with by the student is of necessity brief. Most of the feedback is provided in a tutorial session that covers the pitfalls encountered by the majority of the students.

  9. Computation system for nuclear reactor core analysis

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.; Petrie, L.M.

    1977-04-01

    This report documents a system containing computer codes as modules developed to evaluate nuclear reactor core performance. The diffusion theory approximation to neutron transport may be applied with the VENTURE code, treating up to three dimensions. The effect of exposure may be determined with the BURNER code, allowing depletion calculations to be made. The features and requirements of the system are discussed, as are aspects common to the computational modules, but the latter are documented elsewhere. User input data requirements, data file management, control, and the modules which perform general functions are described. Continuing development and implementation effort is enhancing the analysis capability available locally and to other installations from remote terminals

  10. Computer programs simplify optical system analysis

    Science.gov (United States)

    1965-01-01

    The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.

  11. Development of a proton Computed Tomography Detector System

    Energy Technology Data Exchange (ETDEWEB)

    Naimuddin, Md. [Delhi U.; Coutrakon, G. [Northern Illinois U.; Blazey, G. [Northern Illinois U.; Boi, S. [Northern Illinois U.; Dyshkant, A. [Northern Illinois U.; Erdelyi, B. [Northern Illinois U.; Hedin, D. [Northern Illinois U.; Johnson, E. [Northern Illinois U.; Krider, J. [Northern Illinois U.; Rukalin, V. [Northern Illinois U.; Uzunyan, S. A. [Northern Illinois U.; Zutshi, V. [Northern Illinois U.; Fordt, R. [Fermilab; Sellberg, G. [Fermilab; Rauch, J. E. [Fermilab; Roman, M. [Fermilab; Rubinov, P. [Fermilab; Wilson, P. [Fermilab

    2016-02-04

    Computed tomography is one of the most promising new methods to image abnormal tissues inside the human body. Tomography is also used to position the patient accurately before radiation therapy. Hadron therapy for treating cancer has become one of the most advantageous and safe options. In order to fully utilize the advantages of hadron therapy, it is also necessary to perform radiography with hadrons. In this paper we present the development of a proton computed tomography system. Our second-generation proton tomography system consists of two upstream and two downstream trackers made up of fibers as active material and a range detector consisting of plastic scintillators. We present details of the detector system, readout electronics, and data acquisition system, as well as the commissioning of the entire system. We also present preliminary results from the test beam of the range detector.

  12. Control system upgrades support better plant economics

    International Nuclear Information System (INIS)

    De Grosbois, J.; Hepburn, A.; Storey, H.; Basso, R.; Kumar, V.

    2002-01-01

    This paper (second in the series, see [1]) provides insight on how nuclear plants can achieve better efficiencies and reduced operations and maintenance (O and M) costs through focused control system upgrades. An understanding of this relationship is necessary to properly assess the economics of plant refurbishment decisions. Traditional economic feasibility assessment methods such as benefit cost analysis (BCA), internal rate of return (IRR), benefit cost ratios (B/C), or payback analysis are often performed without full consideration of project alternatives, quantified benefits, and life cycle costing. Consideration must be given to not only capital cost and project risk, but also to the potential economic benefits of new technology and added functionality offered by plant upgrades. Recent experience shows that if upgrades are focused on priority objectives, and are effectively implemented, they can deliver significant payback over the life of the plant, sometimes orders of magnitude higher than their initial capital cost. The following discussion explores some of the key issues and rationale behind upgrade decisions and their impact on improved plant efficiency and reduced O and M costs. A subsequent paper will explain how the justification for these improvements can be captured in an economic analysis and feasibility study to support strategic decision-making in a plant refurbishment context. (author)

  13. Computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226Ra concentration in soil samples using a 6 x 9-in. NaI(Tl) crystal containing a 3.25-in. deep by 3.5-in. diameter well. This gamma detection system is controlled by a mini-computer with a dual floppy disk storage medium. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer, and BASIC language is used for data processing.

  14. Applications of computer based safety systems in Korea nuclear power plants

    International Nuclear Information System (INIS)

    Won Young Yun

    1998-01-01

    With the progress of computer technology, the applications of computer based safety systems in Korea nuclear power plants have increased rapidly in recent decades. The main purpose of this movement is to take advantage of modern computer technology so as to improve the operability and maintainability of the plants. In practice, however, there have been many controversies over the safety of computer based systems between the regulatory body and the nuclear utility in Korea. The Korea Institute of Nuclear Safety (KINS), the technical support organization for nuclear plant licensing, is currently under pressure to set up well-defined domestic regulatory requirements in this area. This paper presents the current status and the regulatory activities related to the applications of computer based safety systems in Korea. (author)

  15. Recommending the heterogeneous cluster type multi-processor system computing

    International Nuclear Information System (INIS)

    Iijima, Nobukazu

    2010-01-01

    A real-time reactor simulator had been developed by reusing equipment from the Musashi reactor; improving its performance as a research tool, by raising the sampling rate through the introduction of arithmetic units based on a multi-Digital Signal Processor (DSP) system (cluster), became indispensable. In order to realize heterogeneous cluster type multi-processor system computing, a combination of two kinds of Control Processors (CPs), the Cluster Control Processor (CCP) and the System Control Processor (SCP), was proposed, with a Large System Control Processor (LSCP) for hierarchical clustering where needed. The faster computing performance of this system was confirmed by simulation results for simultaneous execution of plural jobs and for pipeline processing between clusters, which showed that the system led to effective use of the existing equipment and enhanced cost performance. (T. Tanaka)

  16. Entrepreneurial Health Informatics for Computer Science and Information Systems Students

    Science.gov (United States)

    Lawler, James; Joseph, Anthony; Narula, Stuti

    2014-01-01

    Corporate entrepreneurship is a critical area of curricula for computer science and information systems students. Few institutions of computer science and information systems, however, have entrepreneurship in their curricula. This paper presents entrepreneurial health informatics as a course in a concentration of Technology Entrepreneurship at a…

  17. Computer surety: computer system inspection guidance. [Contains glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  18. Imprecise results: Utilizing partial computations in real-time systems

    Science.gov (United States)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result is available except the approximate results produced by the computation up to that point. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project, which supports imprecise computations using these techniques, is described. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
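
    As an illustration of the milestone approach described above, the sketch below (in Python, with hypothetical names such as milestone_compute and newton_sqrt2; it is not the Concord implementation) periodically records the latest approximate result and returns that recorded value if the deadline expires before the computation converges.

        import time

        def milestone_compute(refine, initial, deadline_s, checkpoint_every=10):
            # Iteratively refine a result; record a milestone every few iterations.
            # If the deadline is reached, return the last milestone (imprecise result).
            result, last_milestone = initial, initial
            start = time.monotonic()
            iteration = 0
            while True:
                if time.monotonic() - start >= deadline_s:
                    return last_milestone, False          # imprecise: deadline reached
                result, converged = refine(result)
                iteration += 1
                if iteration % checkpoint_every == 0:
                    last_milestone = result               # milestone: persist partial result
                if converged:
                    return result, True                   # precise: finished in time

        # Toy usage: approximate sqrt(2) by Newton iterations under a 5 ms budget.
        def newton_sqrt2(x):
            nxt = 0.5 * (x + 2.0 / x)
            return nxt, abs(nxt - x) < 1e-12

        value, complete = milestone_compute(newton_sqrt2, 1.0, deadline_s=0.005)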

  19. Modern Embedded Computing Designing Connected, Pervasive, Media-Rich Systems

    CERN Document Server

    Barry, Peter

    2012-01-01

    Modern embedded systems are used for connected, media-rich, and highly integrated handheld devices such as mobile phones, digital cameras, and MP3 players. All of these embedded systems require networking, graphic user interfaces, and integration with PCs, as opposed to traditional embedded processors that can perform only limited functions for industrial applications. While most books focus on these controllers, Modern Embedded Computing provides a thorough understanding of the platform architecture of modern embedded computing systems that drive mobile devices. The book offers a comprehen

  20. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  1. Application engineering for process computer systems

    International Nuclear Information System (INIS)

    Mueller, K.

    1975-01-01

    The variety of tasks for process computers in nuclear power stations necessitates the centralization of all production stages from the planning stage to the delivery of the finished process computer system (PRA) to the user. This so-called 'application engineering' comprises all of the activities connected with the application of the PRA: a) establishment of the PRA concept, b) project counselling, c) handling of offers, d) handling of orders, e) internal handling of orders, f) technical counselling, g) establishing of parameters, h) monitoring deadlines, i) training of customers, j) compiling an operation manual. (orig./AK) [de

  2. Computer-controlled environmental test systems - Criteria for selection, installation, and maintenance.

    Science.gov (United States)

    Chapman, C. P.

    1972-01-01

    Applications for presently marketed, new computer-controlled environmental test systems are suggested. It is shown that capital costs of these systems follow an exponential cost function curve that levels out as additional applications are implemented. Some test laboratory organization changes are recommended in terms of new personnel requirements, and facility modifications are considered in support of a computer-controlled test system. Software for computer-controlled test systems is discussed, and control loop speed constraints are defined for real-time control functions. Suitable input and output devices and memory storage device tradeoffs are also considered.

  3. Computer data-acquisition and control system for Thomson-scattering measurements

    International Nuclear Information System (INIS)

    Stewart, K.A.; Foskett, R.D.; Kindsfather, R.R.; Lazarus, E.A.; Thomas, C.E.

    1983-03-01

    The Thomson-Scattering Diagnostic System (SCATPAK II) used to measure the electron temperature and density in the Impurity Study Experiment is interfaced to a Perkin-Elmer 8/32 computer that operates under the OS/32 operating system. The calibration, alignment, and operation of this diagnostic are all under computer control. Data acquired from 106 photomultiplier tubes installed on 15 spectrometers are transmitted to the computer by eighteen 12-channel, analog-to-digital integrators along a CAMAC serial highway. With each laser pulse, 212 channels of data are acquired: 106 channels of signal plus background and 106 channels of background only. Extensive use of CAMAC instrumentation enables large amounts of data to be acquired and control processes to be performed in a time-dependent environment. The Thomson-scattering computer system currently operates in three modes: user interaction and control, data acquisition and transmission, and data analysis. This paper discusses the development and implementation of this system as well as data storage and retrieval

  4. A data management system to enable urgent natural disaster computing

    Science.gov (United States)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton

    2014-05-01

    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and to reduce casualties. Computer simulations can generate information and provide predictions to facilitate this decision-making process. Getting the data to the required resources is a critical requirement to enable the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resources. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers will enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associated two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline will render the computation less useful, resulting in a cost that can have severe
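
    A minimal sketch of the kind of deadline-aware selection such a scheduler manager might perform is given below (Python; the resource names, fields and the pick_resource function are illustrative assumptions, not the platform's actual interface): each candidate resource's completion time is estimated from its protocol overhead and throughput, and a resource is chosen only if the estimate meets the deadline.

        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str
            bandwidth_mb_s: float    # estimated achievable throughput
            setup_s: float           # protocol/connection overhead

        def pick_resource(resources, data_size_mb, deadline_s):
            # Estimate completion time per resource and take the fastest;
            # return None if even the fastest cannot meet the deadline.
            def estimate(r):
                return r.setup_s + data_size_mb / r.bandwidth_mb_s
            best = min(resources, key=estimate)
            return best if estimate(best) <= deadline_s else None

        candidates = [
            Resource("gridftp-site-a", bandwidth_mb_s=90.0, setup_s=2.0),
            Resource("https-site-b", bandwidth_mb_s=25.0, setup_s=0.5),
        ]
        choice = pick_resource(candidates, data_size_mb=4096, deadline_s=120)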

  5. Informational-computer system for the neutron spectra analysis

    International Nuclear Information System (INIS)

    Berzonis, M.A.; Bondars, H.Ya.; Lapenas, A.A.

    1979-01-01

    This article presents the basic principles behind the construction of an informational-computer system for neutron spectra analysis on the basis of measured reaction rates. The basic data files of the system, and the software and hardware needed for system operation, are described.

  6. Brain systems for probabilistic and dynamic prediction: computational specificity and integration.

    Directory of Open Access Journals (Sweden)

    Jill X O'Reilly

    2013-09-01

    Full Text Available A computational approach to functional specialization suggests that brain systems can be characterized in terms of the types of computations they perform, rather than their sensory or behavioral domains. We contrasted the neural systems associated with two computationally distinct forms of predictive model: a reinforcement-learning model of the environment obtained through experience with discrete events, and continuous dynamic forward modeling. By manipulating the precision with which each type of prediction could be used, we caused participants to shift computational strategies within a single spatial prediction task. Hence (using fMRI) we showed that activity in two brain systems (typically associated with reward learning and motor control) could be dissociated in terms of the forms of computations that were performed there, even when both systems were used to make parallel predictions of the same event. A region in parietal cortex, which was sensitive to the divergence between the predictions of the models and anatomically connected to both computational networks, is proposed to mediate integration of the two predictive modes to produce a single behavioral output.

  7. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
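
    A minimal sketch of the aggregation and unpacking steps described in the abstract is shown below (Python; the function names and the JSON sidecar format are assumptions for illustration, not the patented implementation): each small file is appended to one aggregated file while its offset and length are recorded, and that metadata is later used to read a single file back.

        import json, os

        def aggregate(file_paths, out_path):
            # Concatenate many small files into one aggregated file and record
            # per-file (offset, length) metadata in a JSON sidecar.
            metadata = {}
            with open(out_path, "wb") as out:
                for path in file_paths:
                    with open(path, "rb") as f:
                        data = f.read()
                    metadata[os.path.basename(path)] = {"offset": out.tell(),
                                                        "length": len(data)}
                    out.write(data)
            with open(out_path + ".meta.json", "w") as meta:
                json.dump(metadata, meta)

        def unpack(aggregate_path, name):
            # Recover one file's bytes from the aggregated file using the metadata.
            with open(aggregate_path + ".meta.json") as meta:
                entry = json.load(meta)[name]
            with open(aggregate_path, "rb") as agg:
                agg.seek(entry["offset"])
                return agg.read(entry["length"])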

  8. Supporting Privacy of Computations in Mobile Big Data Systems

    Directory of Open Access Journals (Sweden)

    Sriram Nandha Premnath

    2016-05-01

    Full Text Available Cloud computing systems enable clients to rent and share computing resources of third party platforms, and have gained widespread use in recent years. Numerous varieties of mobile, small-scale devices such as smartphones, e-health devices, etc., across users, are connected to one another through the massive internetwork of vastly powerful servers on the cloud. While mobile devices store “private information” of users such as location, payment, health data, etc., they may also contribute “semi-public information” (which may include crowdsourced data such as transit, traffic, nearby points of interest, etc.) for data analytics. In such a scenario, a mobile device may seek to obtain the result of a computation, which may depend on its private inputs, crowdsourced data from other mobile devices, and/or any “public inputs” from other servers on the Internet. We demonstrate a new method of delegating real-world computations of resource-constrained mobile clients using an encrypted program known as the garbled circuit. Using the garbled version of a mobile client’s inputs, a server in the cloud executes the garbled circuit and returns the resulting garbled outputs. Our system assures privacy of the mobile client’s input data and output of the computation, and also enables the client to verify that the evaluator actually performed the computation. We analyze the complexity of our system. We measure the time taken to construct the garbled circuit as well as evaluate it for a varying number of servers. Using real-world data, we evaluate our system for a practical, privacy-preserving search application that locates the nearest point of interest for the mobile client to demonstrate feasibility.

  9. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes, depending on the current load of each machine in the heterogeneous computing system.
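
    The sketch below illustrates one way such a load-dependent decomposition could be done (Python; the capacity model and names such as decompose_workload are assumptions for illustration, not the paper's exact method): each machine receives a share of the workload proportional to its speed scaled by its currently idle fraction.

        def decompose_workload(total_units, machines):
            # Split a homogeneous workload among machines in proportion to their
            # currently available capacity (speed scaled by idle fraction).
            capacity = {m: spec["speed"] * (1.0 - spec["load"])
                        for m, spec in machines.items()}
            total_capacity = sum(capacity.values())
            shares = {m: int(total_units * c / total_capacity)
                      for m, c in capacity.items()}
            # Hand any rounding remainder to the machine with the most spare capacity.
            remainder = total_units - sum(shares.values())
            shares[max(capacity, key=capacity.get)] += remainder
            return shares

        cluster = {
            "node-a": {"speed": 2.0, "load": 0.10},   # fast, mostly idle
            "node-b": {"speed": 1.0, "load": 0.50},   # slower, half busy
        }
        print(decompose_workload(1000, cluster))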

  10. Computer aided systems human engineering: A hypermedia tool

    Science.gov (United States)

    Boff, Kenneth R.; Monk, Donald L.; Cody, William J.

    1992-01-01

    The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.

  11. Towards ubiquitous access of computer-assisted surgery systems.

    Science.gov (United States)

    Liu, Hui; Lufei, Hanping; Shi, Weishong; Chaudhary, Vipin

    2006-01-01

    Traditional stand-alone computer-assisted surgery (CAS) systems impede ubiquitous and simultaneous access by multiple users. With advances in computing and networking technologies, ubiquitous access to CAS systems becomes possible and promising. Based on our preliminary work, CASMIL, a stand-alone CAS server developed at Wayne State University, we propose a novel mobile CAS system, UbiCAS, which allows surgeons to retrieve, review and interpret multimodal medical images, and to perform some critical neurosurgical procedures on heterogeneous devices from anywhere at any time. Furthermore, various optimization techniques, including caching, prefetching, pseudo-streaming-model, and compression, are used to guarantee the QoS of the UbiCAS system. UbiCAS enables doctors at remote locations to actively participate in remote surgeries and share patient information in real time before, during, and after the surgery.

  12. Decomposability queueing and computer system applications

    CERN Document Server

    Courtois, P J

    1977-01-01

    Decomposability: Queueing and Computer System Applications presents a set of powerful methods for systems analysis. This 10-chapter text covers the theory of nearly completely decomposable systems upon which specific analytic methods are based. The first chapters deal with some of the basic elements of a theory of nearly completely decomposable stochastic matrices, including the Simon-Ando theorems and the perturbation theory. The succeeding chapters are devoted to the analysis of stochastic queuing networks that appear as a type of key model. These chapters also discuss congestion problems in

  13. Analog system for computing sparse codes

    Science.gov (United States)

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition that solve a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
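
    A discrete-time software analogue of the thresholding-and-local-competition dynamics is sketched below (Python with NumPy; the parameter values are illustrative assumptions): each node's internal state is driven by its match to the input and inhibited by other active nodes, and the output coefficients are a soft-thresholded copy of that state.

        import numpy as np

        def lca_sparse_code(phi, x, lam=0.1, tau=10.0, steps=200):
            # Locally Competitive Algorithm, discrete Euler steps: internal states u
            # are driven by the input and inhibited by active neighbours; the
            # coefficients a are a soft-thresholded copy of u.
            n_atoms = phi.shape[1]
            gram = phi.T @ phi - np.eye(n_atoms)    # lateral inhibition weights
            drive = phi.T @ x                       # feed-forward input drive
            u = np.zeros(n_atoms)
            for _ in range(steps):
                a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
                u += (drive - u - gram @ a) / tau
            return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

        rng = np.random.default_rng(0)
        dictionary = rng.standard_normal((64, 256))
        dictionary /= np.linalg.norm(dictionary, axis=0)             # unit-norm atoms
        signal = dictionary[:, [3, 17]] @ np.array([1.5, -0.8])      # 2-sparse signal
        codes = lca_sparse_code(dictionary, signal)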

  14. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  15. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  16. Are women better than men at multi-tasking?

    OpenAIRE

    Stoet, Gijsbert; O’Connor, Daryl B.; Conner, Mark; Laws, Keith R.

    2013-01-01

    Background: There seems to be a common belief that women are better in multi-tasking than men, but there is practically no scientific research on this topic. Here, we tested whether women have better multi-tasking skills than men. Methods: In Experiment 1, we compared performance of 120 women and 120 men in a computer-based task-switching paradigm. In Experiment 2, we compared a different group of 47 women and 47 men on "paper-and-pencil" multi-tasking tests. ...

  17. Algorithmic mechanisms for reliable crowdsourcing computation under collusion.

    Science.gov (United States)

    Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A; Pareja, Daniel

    2015-01-01

    We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers' decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game.
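
    To make the flavour of this analysis concrete, the toy sketch below (Python) checks when honesty is a best response for a single worker under a much-simplified model with an assumed audit probability, reward, computing cost and penalty; it only illustrates the kind of parameter condition the paper derives and is not the paper's actual game or equilibrium.

        def honest_is_best_response(p_audit, worker_reward, compute_cost, penalty):
            # Toy single-worker model (not the paper's exact game): the master audits
            # with probability p_audit; a cheater keeps the reward only if unaudited
            # and pays a penalty if caught. Honesty is a best response when its payoff
            # is at least the expected payoff of cheating.
            honest = worker_reward - compute_cost
            cheat = (1.0 - p_audit) * worker_reward - p_audit * penalty
            return honest >= cheat

        # With a 20% audit rate, a small computing cost and a stiff penalty,
        # complying dominates cheating in this toy parameterization.
        print(honest_is_best_response(p_audit=0.2, worker_reward=10.0,
                                      compute_cost=1.0, penalty=50.0))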

  18. Personal computer based home automation system

    OpenAIRE

    Hellmuth, George F.

    1993-01-01

    The systems engineering process is applied in the development of the preliminary design of a home automation communication protocol. The objective of the communication protocol is to provide a means for a personal computer to communicate with adapted appliances in the home. A needs analysis is used to ascertain that a need exists for a home automation system. Numerous design alternatives are suggested and evaluated to determine the best possible protocol design. Coaxial cable...

  19. Computer-aided visualization and analysis system for sequence evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  20. A virtual computing infrastructure for TS-CV SCADA systems

    CERN Document Server

    Poulsen, S

    2008-01-01

    In modern data centres, it is an emerging trend to operate and manage computers as software components or logical resources and not as physical machines. This technique is known as "virtualisation" and the new computers are referred to as "virtual machines" (VMs). Multiple VMs can be consolidated on a single hardware platform and managed in ways that are not possible with physical machines. However, this is not yet widely practiced for control system deployment. In TS-CV, a collection of VMs or a "virtual infrastructure" has been installed since 2005 for SCADA systems, PLC program development, and alarm transmission. This makes it possible to consolidate distributed, heterogeneous operating systems and applications on a limited number of standardised high-performance servers in the Central Control Room (CCR). More generally, virtualisation assists in offering continuous computing services for controls and maintaining performance and assuring quality. Implementing our systems in a vi...

  1. Verifying Stability of Dynamic Soft-Computing Systems

    Science.gov (United States)

    Wen, Wu; Napolitano, Marcello; Callahan, John

    1997-01-01

    Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their appropriateness in building intelligent systems that are flexible and robust. Although recent research has shown that certain classes of neuro-fuzzy controllers can be proven bounded and stable, these proofs are implementation dependent and difficult to apply to the design and validation process. Many practitioners adopt the trial and error approach for system validation or resort to exhaustive testing using prototypes. In this paper, we describe our on-going research towards establishing the necessary theoretic foundation as well as building practical tools for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classic non-linear system control theory and recent results of its application to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root locus plots have helped conventional control design and validation.

  2. Improving human reliability through better nuclear power plant system design. Final report

    International Nuclear Information System (INIS)

    Golay, Michael W.

    1998-01-01

    Increasing task complexity is claimed to cause human operating errors, and a significant number of system failures are due to operating errors. An experimental study reported here was conducted to isolate varying task complexity as an important factor affecting human performance quality. Earlier work concerning problems of nuclear power plants has shown that human capability declined when dealing with increasing system complexity. The goal of this study was to investigate further the relationship between human operator performance quality and the complexity of the tasks presented to human operators. This was done by using a simple, interactive, dynamic and generalizable computer model to simulate the behavior of a human-operated dynamic fluid system. Twenty-two human subjects participated

  3. Integrated Geo Hazard Management System in Cloud Computing Technology

    Science.gov (United States)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.

    2016-11-01

    Geo-hazards can result in reduced environmental health and huge economic losses, especially in mountainous areas. In order to mitigate geo-hazards effectively, cloud computing technology is introduced for managing the geo-hazard database. Cloud computing technology and its services are capable of providing stakeholders with geo-hazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System consists of the network management and operation to monitor geo-hazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system will provide an easily managed, flexible measuring system whose data management operates autonomously and can be collected and controlled remotely by commands using "cloud" computing. This paper aims to document the above relationship by identifying the special features and needs associated with effective geo-hazard database management using a "cloud system". This system will later be used as part of the development activities and will help minimize the frequency of geo-hazards and the risk in the research area.

  4. Common data buffer system. [communication with computational equipment utilized in spacecraft operations

    Science.gov (United States)

    Byrne, F. (Inventor)

    1981-01-01

    A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.

  5. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. This book begins with an overview of the optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.

  6. Communication system between BESM-6 and M-6000 computers

    International Nuclear Information System (INIS)

    Bobrov, V.G.; Bogomolov, M.N.; Zavrazhnov, G.N.

    1974-01-01

    A two-way communication system of BESM-6 and M-6000 electronic computers is described. Exchange between the machines is effected by 16-digit words. The communication system is designed to transfer information obtained in scanning pictures on the meter PSP-2 which is controlled by machine M-6000. The exchange of information is coincident with the operation of processors. The maximum flow of data through the communication system is 75×10³ words per second for the M-6000 electronic computer format. The line length is about 200 m.

  7. SD-CAS: Spin Dynamics by Computer Algebra System.

    Science.gov (United States)

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems results from the fact that no matrix representation for spin operators is used in SD-CAS, which gives the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system and which are easily mapped into analytical expressions in terms of spin operator products. For the SD-CAS spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. They provide results in an abstract algebraic form: specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus in the present work is on laying the foundation for spin dynamics symbolic computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development process. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples. Copyright © 2010 Elsevier Inc. All rights reserved.
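
    For orientation, the basic spin-1/2 operator algebra that SD-CAS manipulates symbolically can be checked numerically with matrices, as in the short NumPy sketch below (an illustration only; SD-CAS itself deliberately avoids matrix representations).

        import numpy as np

        # Spin-1/2 operators as 2x2 matrices (hbar = 1). SD-CAS works symbolically,
        # but the commutation relation it relies on, [Ix, Iy] = i*Iz, can be
        # verified numerically:
        Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
        Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
        Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

        commutator = Ix @ Iy - Iy @ Ix
        assert np.allclose(commutator, 1j * Iz)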

  8. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.

    Science.gov (United States)

    Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua

    2017-07-30

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential to provide low latency for communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems.

  9. Mechanisms of protection of information in computer networks and systems

    Directory of Open Access Journals (Sweden)

    Sergey Petrovich Evseev

    2011-10-01

    Full Text Available Protocols of information protection in computer networks and systems are investigated. The basic types of threats to information protection arising from the use of computer networks are classified. The basic mechanisms, services, and implementation variants of cryptosystems for maintaining authentication, integrity and confidentiality of transmitted information are examined. Their advantages and drawbacks are described. Promising directions for the development of cryptographic transformations for maintaining information protection in computer networks and systems are identified and analyzed.

  10. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    1995-03-01

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16--18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. The original plant process computers at nuclear power plants are becoming obsolete, making it increasingly difficult for them to support plant operations and maintenance effectively. Some utilities have already replaced their plant process computers by more powerful modern computers while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations is presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  11. Microeconomic theory and computation applying the maxima open-source computer algebra system

    CERN Document Server

    Hammock, Michael R

    2014-01-01

    This book provides a step-by-step tutorial for using Maxima, an open-source multi-platform computer algebra system, to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques.

  12. Distributed computer control system for reactor optimization

    International Nuclear Information System (INIS)

    Williams, A.H.

    1983-01-01

    At the Oldbury power station a prototype distributed computer control system has been installed. This system is designed to support research and development into improved reactor temperature control methods. This work will lead to the development and demonstration of new optimal control systems for improvement of plant efficiency and increase of generated output. The system can collect plant data from special test instrumentation connected to dedicated scanners and from the station's existing data processing system. The system can also, via distributed microprocessor-based interface units, make adjustments to the desired reactor channel gas exit temperatures. The existing control equipment will then adjust the height of control rods to maintain operation at these temperatures. The design of the distributed system is based on extensive experience with distributed systems for direct digital control, operator display and plant monitoring. The paper describes various aspects of this system, with particular emphasis on: (1) the hierarchal system structure; (2) the modular construction of the system to facilitate installation, commissioning and testing, and to reduce maintenance to module replacement; (3) the integration of the system into the station's existing data processing system; (4) distributed microprocessor-based interfaces to the reactor controls, with extensive security facilities implemented by hardware and software; (5) data transfer using point-to-point and bussed data links; (6) man-machine communication based on VDUs with computer input push-buttons and touch-sensitive screens; and (7) the use of a software system supporting a high-level engineer-orientated programming language, at all levels in the system, together with comprehensive data link management

  13. Employing subgoals in computer programming education

    Science.gov (United States)

    Margulieux, Lauren E.; Catrambone, Richard; Guzdial, Mark

    2016-01-01

    The rapid integration of technology into our professional and personal lives has left many education systems ill-equipped to deal with the influx of people seeking computing education. To improve computing education, we are applying techniques that have been developed for other procedural fields. The present study applied such a technique, subgoal labeled worked examples, to explore whether it would improve programming instruction. The first two experiments, conducted in a laboratory, suggest that the intervention improves undergraduate learners' problem-solving performance and affects how learners approach problem-solving. The third experiment demonstrates that the intervention has similar, and perhaps stronger, effects in an online learning environment with in-service K-12 teachers who want to become qualified to teach computing courses. By implementing this subgoal intervention as a tool for educators to teach themselves and their students, education systems could improve computing education and better prepare learners for an increasingly technical world.

  14. Can anything better come along? Reflections on the deep future of hydrogen-electricity systems

    International Nuclear Information System (INIS)

    Scott, D. S.

    2006-01-01

    Sometimes, for some things, we can project the deep future better than tomorrow. This is particularly relevant to our energy system where, if we focus on energy currencies, looking further out allows us to leap the tangles of today's conventional wisdom, vested mantras and ill-founded hopes. We will first recall the rationale that sets out why - by the time the 22nd century rolls around - hydrogen and electricity will have become civilization's staple energy currencies. Building on this dual-currency inevitability we'll then evoke the wisdom that, while we never know everything about the future, we always know something. For future energy systems that 'something' is the role and nature of the energy currencies. From this understanding, our appreciation of the deep future can take shape - at least for infrastructures, energy sources and some imbedded technologies - but not service-delivery widgets. The long view provides more than mere entertainment. It should form the basis of strategies for today that, in turn, will avoid setbacks and blind alleys on our journey to tomorrow. Some people accept that hydrogen and electricity will be our future, but only 'until something better comes along.' The talk will conclude with logic that explains the response: 'No! Nothing better will ever come along.' (authors)

  15. Can anything better come along? Reflections on the deep future of hydrogen-electricity systems

    International Nuclear Information System (INIS)

    Scott, D.S.

    2004-01-01

    'Full text:' Sometimes, for some things, we can project the deep future better than tomorrow. This is particularly relevant to our energy system where, if we focus on energy currencies, looking further out allows us to leap the tangles of today's conventional wisdom, vested mantras and ill-founded hopes. We will first recall the rationale that sets out why - by the time the 22nd century rolls around - hydrogen and electricity will have become civilization's staple energy currencies. Building on this dual-currency inevitability we'll then evoke the wisdom that we never know everything about the future, but we always know something. For future energy systems that 'something' is the role and nature of the energy currencies. From this understanding, our appreciation of the deep future can take shape - at least for infrastructures, energy sources and some imbedded technologies - but not service-delivery widgets. The long view provides more than mere entertainment. It should form the basis of strategies for today that, in turn, will avoid blind alleys on our journey to tomorrow. Some people accept that hydrogen and electricity will be our future, but only 'until something better comes along.' The talk will conclude with logic that explains the response: No, nothing better will ever come along. (author)

  16. Computed radiography systems performance evaluation

    International Nuclear Information System (INIS)

    Xavier, Clarice C.; Nersissian, Denise Y.; Furquim, Tania A.C.

    2009-01-01

    The performance of a computed radiography system was evaluated according to AAPM Report No. 93. Evaluation tests proposed by the publication were performed, and the following nonconformities were found: imaging plate (IP) dark noise, which compromises the clinical image acquired using the IP; an uncalibrated exposure indicator, which can cause underexposure of the IP; nonlinearity of the system response, which causes overexposure; a resolution limit below that declared by the manufacturer and uncalibrated erasure thoroughness, impairing the visualization of structures; a Moire pattern visible in the grid response; and IP throughput above that specified by the manufacturer. These nonconformities indicate that a lack of calibration of digital imaging systems can cause an increase in dose so that image problems can be solved. (author)

  17. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF and hybrid (MPI + OpenMP)). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  18. On the need to better specify the concept of control in brain-computer-interfaces/neurofeedback research

    Directory of Open Access Journals (Sweden)

    Guilherme eWood

    2014-09-01

    Full Text Available Aiming at a better specification of the concept of control in brain-computer-interface (BCI) and neurofeedback research, we propose to distinguish self-control of brain activity from the broader concept of BCI control, since the first describes a neurocognitive phenomenon and is only one of the many components of BCI control. Based on this distinction, we developed a framework based on dual-processes theory that describes the cognitive determinants of self-control of brain activity as the interplay of automatic vs. controlled information processing. Further, we distinguish between cognitive processes that are necessary and sufficient to achieve a given level of self-control of brain activity and those which are not. We discuss that those cognitive processes which are not necessary for the learning process can hamper self-control because they cannot be completely turned off at any time. This framework aims at a comprehensive description of the cognitive determinants of the acquisition of self-control of brain activity underlying those classes of BCI which require the user to achieve regulation of brain activity as well as neurofeedback learning.

  19. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain motivated situation control system for the behavior of a complex technical system. Conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a set of situational agents. The model and method of computer-aided instruction represent a formalization based on the nondistinct theories of physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting learning, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results are given of computer instruction experiments with the robot device "LEGO MINDSTORMS NXT", equipped with ultrasonic distance, touch, and light sensors.

  20. Oral and maxillofacial surgery with computer-assisted navigation system.

    Science.gov (United States)

    Kawachi, Homare; Kawachi, Yasuyuki; Ikeda, Chihaya; Takagi, Ryo; Katakura, Akira; Shibahara, Takahiko

    2010-01-01

    Intraoperative computer-assisted navigation has gained acceptance in maxillofacial surgery with applications in an increasing number of indications. We adapted a commercially available wireless passive marker system which allows calibration and tracking of virtually every instrument in maxillofacial surgery. Virtual computer-generated anatomical structures are displayed intraoperatively in a semi-immersive head-up display. Continuous observation of the operating field facilitated by computer assistance enables surgical navigation in accordance with the physician's preoperative plans. This case report documents the potential for augmented visualization concepts in surgical resection of tumors in the oral and maxillofacial region. We report a case of T3N2bM0 carcinoma of the maxillary gingiva which was surgically resected with the assistance of the Stryker Navigation Cart System. This system was found to be useful in assisting preoperative planning and intraoperative monitoring.

  1. COMPUTER-BASED REASONING SYSTEMS: AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    CIPRIAN CUCU

    2012-12-01

    Full Text Available Argumentation is nowadays seen both as a skill that people use in various aspects of their lives and as an educational technique that can support the transfer or creation of knowledge, thus aiding in the development of other skills (e.g. communication, critical thinking or attitudes). However, teaching argumentation and teaching with argumentation is still a rare practice, mostly due to the lack of available resources such as time or expert human tutors that are specialized in argumentation. Intelligent Computer Systems (i.e. systems that implement an inner representation of particular knowledge and try to emulate the behavior of humans) could allow more people to understand the purpose, techniques and benefits of argumentation. The proposed paper investigates the state-of-the-art concepts of computer-based argumentation used in education and tries to develop a conceptual map, showing benefits, limitations and relations between various concepts, focusing on the duality “learning to argue – arguing to learn”.

  2. Distributed computer controls for accelerator systems

    Science.gov (United States)

    Moore, T. L.

    1989-04-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed.

  3. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1989-01-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. (orig.)

  4. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1988-09-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multi-user Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100K. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the efficient implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. 3 refs

  5. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty

    1996-03-01

    Full Text Available A model of a multiprocessor computing system based on transputers is described which permits evaluation of its structural robustness (viability, survivability).

  6. DNA-Enabled Integrated Molecular Systems for Computation and Sensing

    Science.gov (United States)

    2014-05-21

    Computational devices can be chemically conjugated to different strands of DNA that are then self-assembled according to strict Watson-Crick binding rules... guided folding of DNA, inspired by nature, allows designs to manipulate molecular-scale processes unlike any other material system. Thus, DNA can be...

  7. Cluster-based localization and tracking in ubiquitous computing systems

    CERN Document Server

    Martínez-de Dios, José Ramiro; Torres-González, Arturo; Ollero, Anibal

    2017-01-01

    Localization and tracking are key functionalities in ubiquitous computing systems and techniques. In recent years a wide variety of approaches, sensors and techniques for indoor and GPS-denied environments has been developed. This book briefly summarizes the current state of the art in localization and tracking in ubiquitous computing systems, focusing on cluster-based schemes. Additionally, existing techniques for measurement integration, node inclusion/exclusion and cluster head selection are also described in this book.

  8. Computer system for environmental sample analysis and data storage and analysis

    International Nuclear Information System (INIS)

    Brauer, F.P.; Fager, J.E.

    1976-01-01

    A mini-computer based environmental sample analysis and data storage system has been developed. The system is used for analytical data acquisition, computation, storage of analytical results, and tabulation of selected or derived results for data analysis, interpretation and reporting. This paper discusses the structure, performance and applications of the system

  9. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected by Ethernet. Data exchange between tasks located on each processing element is realized in two ways. One uses sockets, a standard library on recent UNIX operating systems. The other uses Parallel Virtual Machine (PVM), free network-connection software developed by ORNL that allows many workstations connected to a network to be used as a parallel computer. This paper discusses the practicality of parallel computing over a network of UNIX workstations and compares it with specialized parallel systems (Transputer and iPSC/860) on a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)
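
    As a rough illustration of the first, socket-based approach (an illustrative sketch, not the paper's code), a coordinator task below sends a small work unit to a worker task over a TCP socket and reads back the result; the host, port and the trivial squaring "work" are placeholder assumptions, and messages are assumed small enough to fit a single recv.

```python
# Minimal sketch of socket-based data exchange between two tasks (illustrative only).
# Assumes a local endpoint and messages small enough for a single recv() call.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5000  # hypothetical endpoint


def worker():
    """Accept one connection, square the received numbers, send them back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            numbers = json.loads(conn.recv(4096).decode())
            conn.sendall(json.dumps([x * x for x in numbers]).encode())


def coordinator(numbers):
    """Send a chunk of work to the worker and collect the result."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(json.dumps(numbers).encode())
        return json.loads(cli.recv(4096).decode())


if __name__ == "__main__":
    t = threading.Thread(target=worker)
    t.start()
    time.sleep(0.2)  # crude wait for the worker to start listening
    print(coordinator([1, 2, 3, 4]))  # -> [1, 4, 9, 16]
    t.join()
```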

  10. Computational Tools applied to Urban Engineering

    OpenAIRE

    Filho, Armando Carlos de Pina; Lima, Fernando Rodrigues; Amaral, Renato Dias Calado do

    2010-01-01

    This chapter aims to present the main details of three technologies widely used in Urban Engineering: CAD (Computer-Aided Design); GIS (Geographic Information System); and BIM (Building Information Modelling). As can be seen, each of them presents specific characteristics and diverse applications in urban projects, providing better results in the planning, management and maintenance of the systems. Regarding the software presented, it is important to note that the...

  11. A computer-based spectrometry system for assessment of body radioactivity

    International Nuclear Information System (INIS)

    Venn, J.B.

    1985-01-01

    This paper describes a PDP-11 computer system operating under RT-11 for the acquisition and processing of pulse height spectra in the measurement of body radioactivity. SABRA (system for the assessment of body radioactivity) provides control of multiple detection systems from visual display consoles by means of a command language. A wide range of facilities is available for the display, processing and storage of acquired spectra and complex operations may be pre-programmed by means of the SABRE MACRO language. The hardware includes a CAMAC interface to the detection systems, disc cartridge drives for mass storage of data and programs, and data-links to other computers. The software is written in assembler language and includes special features for the dynamic allocation of computer memory and for safeguarding acquired data. (orig.)
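
    As a simple illustration of the kind of spectrum processing such a system automates (a generic sketch, not SABRA's actual algorithms), the fragment below computes a net peak area over a region of interest in a pulse height spectrum, subtracting a linear background estimated from the boundary channels; the channel bounds and counts are made up.

```python
# Illustrative net-peak-area calculation for a pulse height spectrum.
# Not SABRA code: the region-of-interest bounds and counts are hypothetical.

def net_peak_area(counts, lo, hi):
    """Sum counts in channels lo..hi and subtract a trapezoidal background
    estimated from the two boundary channels."""
    gross = sum(counts[lo:hi + 1])
    n_channels = hi - lo + 1
    background = (counts[lo] + counts[hi]) * n_channels / 2.0
    return gross - background


# Example: a small made-up spectrum with a peak around channel 5.
spectrum = [12, 11, 13, 30, 80, 120, 78, 28, 12, 10]
print(net_peak_area(spectrum, 2, 8))  # net counts above the estimated baseline
```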

  12. Overview of Risk Mitigation for Safety-Critical Computer-Based Systems

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report presents a high-level overview of a general strategy to mitigate the risks from threats to safety-critical computer-based systems. In this context, a safety threat is a process or phenomenon that can cause operational safety hazards in the form of computational system failures. This report is intended to provide insight into the safety-risk mitigation problem and the characteristics of potential solutions. The limitations of the general risk mitigation strategy are discussed and some options to overcome these limitations are provided. This work is part of an ongoing effort to enable well-founded assurance of safety-related properties of complex safety-critical computer-based aircraft systems by developing an effective capability to model and reason about the safety implications of system requirements and design.

  13. Computation of magnetic suspension of maglev systems using dynamic circuit theory

    Science.gov (United States)

    He, J. L.; Rote, D. M.; Coffey, H. T.

    1992-01-01

    Dynamic circuit theory is applied to several magnetic suspensions associated with maglev systems. These suspension systems are the loop-shaped coil guideway, the figure-eight-shaped null-flux coil guideway, and the continuous sheet guideway. Mathematical models, which can be used for the development of computer codes, are provided for each of these suspension systems. The differences and similarities of the models in using dynamic circuit theory are discussed in the paper. The paper emphasizes the transient and dynamic analysis and computer simulation of maglev systems. In general, the method discussed here can be applied to many electrodynamic suspension system design concepts. It is also suited for the computation of the performance of maglev propulsion systems. Numerical examples are presented in the paper.
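
    To give a flavor of what a dynamic-circuit computation involves (an illustrative sketch under assumed parameters, not one of the paper's models), the code below integrates the current induced in a single guideway loop coupled to a moving magnet through a position-dependent mutual inductance and records the peak drag force; the Gaussian form of the mutual inductance and all numerical values are assumptions.

```python
# Illustrative dynamic-circuit sketch for one guideway loop and a moving magnet.
# The Gaussian mutual-inductance profile and all parameter values are assumed.
import math

R, L = 1.0e-3, 1.0e-5          # loop resistance (ohm) and self-inductance (H)
I_MAGNET = 1.0e5               # equivalent current of the moving magnet (A)
V = 100.0                      # vehicle speed (m/s)
M0, SIGMA = 1.0e-7, 0.1        # peak mutual inductance (H) and spatial spread (m)


def mutual(x):
    """Assumed mutual inductance vs. magnet-loop relative position (m)."""
    return M0 * math.exp(-(x / SIGMA) ** 2)


def dmutual_dx(x):
    return -2.0 * x / SIGMA ** 2 * mutual(x)


# Integrate L di/dt + R i = -I_MAGNET dM/dt (with dM/dt = V dM/dx) while the
# magnet sweeps past the loop, and record the peak force along the motion.
dt, i, x, peak_force = 1.0e-5, 0.0, -0.5, 0.0
for _ in range(1000):
    emf = -I_MAGNET * V * dmutual_dx(x)
    i += (emf - R * i) / L * dt
    x += V * dt
    peak_force = max(peak_force, abs(i * I_MAGNET * dmutual_dx(x)))

print(f"peak drag force on this loop ≈ {peak_force:.1f} N")
```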

  14. Interactive computer-enhanced remote viewing system

    International Nuclear Information System (INIS)

    Tourtellott, J.A.; Wagner, J.F.

    1995-01-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This need for a task space model is most pronounced in the remediation of obsolete production facilities and underground storage tanks. Production facilities at many sites contain compact process machinery and systems that were used to produce weapons grade material. For many such systems, a complex maze of pipes (with potentially dangerous contents) must be removed, and this represents a significant D&D challenge. In an analogous way, the underground storage tanks at sites such as Hanford represent a challenge because of their limited entry and the tumbled profusion of in-tank hardware. In response to this need, the Interactive Computer-Enhanced Remote Viewing System (ICERVS) is being designed as a software system to: (1) Provide a reliable geometric description of a robotic task space, and (2) Enable robotic remediation to be conducted more effectively and more economically than with available techniques. A system such as ICERVS is needed because of the problems discussed below

  15. Advanced intelligent computational technologies and decision support systems

    CERN Document Server

    Kountchev, Roumen

    2014-01-01

    This book offers a state-of-the-art collection covering themes related to Advanced Intelligent Computational Technologies and Decision Support Systems, which can be applied to fields like healthcare to assist humans in solving problems. The book brings forward a wealth of ideas, algorithms and case studies in themes like: intelligent predictive diagnosis; intelligent analysis of medical images; new format for coding of single and sequences of medical images; Medical Decision Support Systems; diagnosis of Down’s syndrome; computational perspectives for electronic fetal monitoring; efficient compression of CT Images; adaptive interpolation and halftoning for medical images; applications of artificial neural networks for real-life problems solving; present and perspectives for Electronic Healthcare Record Systems; adaptive approaches for noise reduction in sequences of CT images etc.

  16. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    A microcontroller (AT89C51) based electronics has been designed and developed for a high precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square waveform and multiplied, through a frequency multiplier circuit built around a phase-locked loop, to more than ten times the input frequency. An octal buffer is used to store the calculated frequency, which in turn is fed to the microcontroller AT89C51, interfaced with a liquid crystal display that shows the frequency as well as the corresponding pressure in user-friendly units. The electronics developed is interfaced with a computer via RS232 for automatic data acquisition, computation and storage, with the acquisition software written in Visual Basic 6.0, making it a computer-controlled system. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa within a measurement uncertainty of 0.025%. The details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.
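
    A hedged sketch of the acquisition step follows: it reads frequency values over RS232 and converts them to pressure with a calibration polynomial. The sketch is written in Python rather than the system's Visual Basic 6.0, and the port name, baud rate, one-value-per-line protocol and calibration coefficients are all assumptions.

```python
# Illustrative RS232 acquisition and frequency-to-pressure conversion.
# The port, baud rate, line protocol and calibration polynomial are assumed;
# the actual system used Visual Basic 6.0 and its own calibration data.
import serial  # pyserial

CAL = (-1.234e2, 5.678e-5, 9.01e-12)  # hypothetical quadratic calibration


def frequency_to_pressure(freq_hz):
    """Convert a measured frequency (Hz) to pressure (MPa) via a quadratic fit."""
    a0, a1, a2 = CAL
    return a0 + a1 * freq_hz + a2 * freq_hz ** 2


with serial.Serial("COM1", 9600, timeout=1) as port:
    for _ in range(10):                      # log ten readings
        line = port.readline().decode().strip()
        if not line:
            continue
        freq = float(line)                   # assumes one frequency value per line
        print(f"{freq:.2f} Hz -> {frequency_to_pressure(freq):.3f} MPa")
```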

  17. Computational transport phenomena of fluid-particle systems

    CERN Document Server

    Arastoopour, Hamid; Abbasi, Emad

    2017-01-01

    This book concerns the most up-to-date advances in computational transport phenomena (CTP), an emerging tool for the design of gas-solid processes such as fluidized bed systems. The authors examine recent work in kinetic theory and CTP and illustrate gas-solid processes’ many applications in the energy, chemical, pharmaceutical, and food industries. They also discuss the kinetic theory approach in developing constitutive equations for gas-solid flow systems and how it has advanced over the last decade, as well as the possibility of obtaining innovative designs for multiphase reactors, such as those needed to capture CO2 from flue gases. Suitable as a concise reference and a textbook supplement for graduate courses, Computational Transport Phenomena of Gas-Solid Systems is ideal for practitioners in industries involved with the design and operation of processes based on fluid/particle mixtures, such as energy, chemicals, pharmaceuticals, and food processing. Explains how to couple the population balance e...

  18. Human-computer interaction and management information systems

    CERN Document Server

    Galletta, Dennis F

    2014-01-01

    ""Human-Computer Interaction and Management Information Systems: Applications"" offers state-of-the-art research by a distinguished set of authors who span the MIS and HCI fields. The original chapters provide authoritative commentaries and in-depth descriptions of research programs that will guide 21st century scholars, graduate students, and industry professionals. Human-Computer Interaction (or Human Factors) in MIS is concerned with the ways humans interact with information, technologies, and tasks, especially in business, managerial, organizational, and cultural contexts. It is distinctiv

  19. Power-Efficient Computing: Experiences from the COSA Project

    Directory of Open Access Journals (Sweden)

    Daniele Cesini

    2017-01-01

    Full Text Available Energy consumption is today one of the most relevant issues in operating HPC systems for scientific applications. The use of unconventional computing systems is therefore of great interest for several scientific communities looking for a better tradeoff between time-to-solution and energy-to-solution. In this context, the performance assessment of processors with a high ratio of performance per watt is necessary to understand how to realize energy-efficient computing systems for scientific applications using this class of processors. Computing On SOC Architecture (COSA) is a three-year project (2015–2017) funded by the Scientific Commission V of the Italian Institute for Nuclear Physics (INFN), which aims to investigate the performance and the total cost of ownership offered by computing systems based on commodity low-power Systems on Chip (SoCs) and highly energy-efficient systems based on GP-GPUs. In this work, we present the results of the project, analyzing the performance of several scientific applications on several GPU- and SoC-based systems. We also describe the methodology we have used to measure energy performance and the tools we have implemented to monitor the power drained by applications while running.
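
    As an illustration of how per-application energy consumption can be sampled on commodity Linux hardware (this is not the monitoring tool developed by the project), the sketch below reads the Intel RAPL package energy counter exposed under /sys/class/powercap before and after a workload; the sysfs path, the placeholder workload and the neglect of counter wrap-around are assumptions.

```python
# Illustrative energy-to-solution measurement via the Linux powercap/RAPL interface.
# Assumes an Intel CPU exposing /sys/class/powercap/intel-rapl:0/energy_uj and
# ignores counter wrap-around; not the COSA project's own tooling.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy counter


def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())


def busy_work(n=2_000_000):
    return sum(i * i for i in range(n))  # placeholder workload


e0, t0 = read_energy_uj(), time.time()
busy_work()
e1, t1 = read_energy_uj(), time.time()

joules = (e1 - e0) / 1e6
seconds = t1 - t0
print(f"energy-to-solution ≈ {joules:.2f} J, mean power ≈ {joules / seconds:.2f} W")
```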

  20. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first function, conversion from analogue to digital mode, is not within the control of the operator, but the second type of manipulation is under the operator's control. These types of manipulation should be done carefully, without sacrificing the integrity of the incoming information
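
    As a simple example of the kind of operator-controlled manipulation referred to above (a generic illustration, not a procedure prescribed by the text), the sketch below applies a nine-point weighted smooth to a small matrix of counts, a common way of reducing statistical noise in camera images; the kernel weights and the tiny count matrix are made up.

```python
# Illustrative nine-point weighted smoothing of a scintillation camera image.
# The 3x3 kernel and the small count matrix are generic examples.

KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]  # weights sum to 16


def smooth(image):
    """Return a smoothed copy of the image; edge pixels are left unchanged."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            acc = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    acc += KERNEL[dr + 1][dc + 1] * image[r + dr][c + dc]
            out[r][c] = acc / 16.0
    return out


counts = [[5, 6, 5, 4],
          [6, 40, 8, 5],
          [5, 7, 6, 5],
          [4, 5, 5, 4]]
print(smooth(counts))  # the isolated hot pixel is spread into its neighbours
```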