WorldWideScience

Sample records for operating systems computer

  1. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

    The report describes the operation and troubleshooting of the main computers and KAERINet. The results of the project are as follows: 1. The operation and troubleshooting of the main computer systems (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. The operation and troubleshooting of KAERINet (PC to host connection, host to host connection, file transfer, electronic mail, X.25, CATV etc.). 3. The development of applications - an Electronic Document Approval and Delivery System, and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  2. Operating systems [of computers]

    Science.gov (United States)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. A software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
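
    The semaphore mechanism described in this abstract can be sketched in a few lines. The following is a generic illustration using Python's threading module, not code from the reviewed system; the thread count and semaphore value are arbitrary choices:

```python
import threading

# Counting semaphore letting at most 2 "primitive processes" (threads
# here) hold the resource at once; P (wait) on entry, V (signal) on exit.
sem = threading.Semaphore(2)
lock = threading.Lock()        # protects the bookkeeping below
active = []                    # threads currently inside the section
max_seen = 0                   # highest concurrency ever observed

def primitive_process(i):
    global max_seen
    with sem:                  # blocks while 2 holders are inside
        with lock:
            active.append(i)
            max_seen = max(max_seen, len(active))
        # ... the synchronized work would happen here ...
        with lock:
            active.remove(i)

threads = [threading.Thread(target=primitive_process, args=(i,))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    However many threads are started, the semaphore guarantees that no more than two are ever inside the guarded section at the same time.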

  3. '95 computer system operation project

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-12-01

    This report describes the overall project work related to the operation of mainframe computers, the management of nuclear computer codes and the nuclear computer code conversion project. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  4. Optimization of Operating Systems towards Green Computing

    Directory of Open Access Journals (Sweden)

    Appasami Govindasamy

    2011-01-01

    Full Text Available Green Computing is one of the emerging computing technologies in computer science and engineering, aimed at providing Green Information Technology (Green IT). It is mainly used to protect the environment, optimize energy consumption, and keep the environment green. Green computing also refers to environmentally sustainable computing. In recent years, companies in the computer industry have come to realize that going green is in their best interest, both in terms of public relations and reduced costs. Information and communication technology (ICT) has now become an important department for the success of any organization. Making IT "green" can not only save money but help save our world by making it a better place through reducing and/or eliminating wasteful practices. In this paper we focus on green computing by optimizing operating systems and the scheduling of hardware resources. The objectives of green computing are the reduction of human effort, electrical energy, time and cost, without polluting the environment while developing the software. Operating System (OS) optimization is very important for green computing, because the OS is the bridge between hardware components and application software. The important steps for energy-efficient usage by green computing users are also discussed in this paper.

  5. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence (CI) systems have been widely applied in monitoring and fault detection systems in several processes and in different kinds of applications. These systems use interdependent components organized in modules, and are typically designed to ensure early detection and diagnosis of faults. Monitoring and fault detection techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using computational intelligence techniques was developed. This system will show the information obtained by different CI techniques in order to help operators take decisions in real time and guide them in fault diagnosis before the normal alarm limits are reached. (author)

  6. Computer system for monitoring power boiler operation

    Energy Technology Data Exchange (ETDEWEB)

    Taler, J.; Weglowski, B.; Zima, W.; Duda, P.; Gradziel, S.; Sobota, T.; Cebula, A.; Taler, D. [Cracow University of Technology, Krakow (Poland). Inst. for Process & Power Engineering

    2008-02-15

    The computer-based boiler performance monitoring system was developed to perform thermal-hydraulic computations of the boiler working parameters in an on-line mode. Measurements of temperatures, heat flux, pressures, mass flowrates, and gas analysis data were used to perform the heat transfer analysis in the evaporator, furnace, and convection pass. A new construction technique of heat flux tubes for determining heat flux absorbed by membrane water-walls is also presented. The current paper presents the results of heat flux measurement in coal-fired steam boilers. During changes of the boiler load, the limits that preserve the necessary natural water circulation must not be exceeded. A rapid increase of pressure may cause fading of the boiling process in water-wall tubes, whereas a rapid decrease of pressure leads to water boiling in all elements of the boiler's evaporator - water-wall tubes and downcomers. Both cases can cause flow stagnation in the water circulation leading to pipe cracking. Two flowmeters were assembled on central downcomers, and an investigation of natural water circulation in an OP-210 boiler was carried out. On the basis of these measurements, the maximum rates of pressure change in the boiler evaporator were determined. The on-line computation of the conditions in the combustion chamber allows for real-time determination of the heat flowrate transferred to the power boiler evaporator. Furthermore, with a quantitative indication of surface cleanliness, selective sootblowing can be directed at specific problem areas. A boiler monitoring system is also incorporated to provide details of changes in boiler efficiency and operating conditions following sootblowing, so that the effects of a particular sootblowing sequence can be analysed and optimized at a later stage.
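
    An on-line check against maximum rates of pressure change, as described in this abstract, can be sketched as follows. The limit values and the pressure trace below are illustrative assumptions, not the OP-210 figures determined in the paper:

```python
def pressure_rate_alarms(pressures_mpa, dt_s, max_rise=0.02, max_drop=0.025):
    """Flag samples where dp/dt exceeds permissible rates (MPa/s).

    max_rise and max_drop are hypothetical limits, not the values
    determined for the OP-210 boiler.  Returns indices of samples
    violating the rise or drop limit.
    """
    alarms = []
    for i in range(1, len(pressures_mpa)):
        rate = (pressures_mpa[i] - pressures_mpa[i - 1]) / dt_s
        if rate > max_rise or rate < -max_drop:
            alarms.append(i)
    return alarms

# Example trace sampled once per second: a rapid drop at sample 3
trace = [10.0, 10.01, 10.02, 9.90, 9.89]
flagged = pressure_rate_alarms(trace, dt_s=1.0)
```

    A monitoring system would raise such an alarm before flow stagnation in the natural circulation loop can develop.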

  7. Software fault tolerance in computer operating systems

    Science.gov (United States)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  8. COMPUTER APPLICATION SYSTEM FOR OPERATIONAL EFFICIENCY OF DIESEL RAILBUSES

    Directory of Open Access Journals (Sweden)

    Łukasz WOJCIECHOWSKI

    2016-09-01

    Full Text Available The article presents a computer algorithm to calculate the estimated operating cost analysis rail bus. This computer application system compares the cost of employment locomotive and wagon, the cost of using locomotives and cost of using rail bus. An intensive growth of passenger railway traffic increased a demand for modern computer systems to management means of transportation. Described computer application operates on the basis of selected operating parameters of rail buses.

  9. An operating system for future aerospace vehicle computer systems

    Science.gov (United States)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects in order to implement both the autonomy of and the cooperation between nodes are developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.

  10. Demonstrating Operating System Principles via Computer Forensics Exercises

    Science.gov (United States)

    Duffy, Kevin P.; Davis, Martin H., Jr.; Sethi, Vikram

    2010-01-01

    We explore the feasibility of sparking student curiosity and interest in the core required MIS operating systems course through inclusion of computer forensics exercises into the course. Students were presented with two in-class exercises. Each exercise demonstrated an aspect of the operating system, and each exercise was written as a computer…

  12. Bringing the CMS distributed computing system into scalable operations

    CERN Document Server

    Belforte, S; Fisk, I; Flix, J; Hernández, J M; Kress, T; Letts, J; Magini, N; Miccio, V; Sciabà, A

    2010-01-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure an...

  13. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    Science.gov (United States)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  14. An Operational Computational Terminal Area PBL Prediction System

    Science.gov (United States)

    Lin, Yuh-Lang; Kaplan, Michael L.

    1998-01-01

    There are two fundamental goals of this research project, listed here in order of priority. The first and primary goal is to develop a prognostic system which could satisfy the operational weather prediction requirements of the meteorological subsystem within the Aircraft Vortex Spacing System (AVOSS), i.e., an operational computational Terminal Area PBL Prediction System (TAPPS). The second goal is to perform in-depth diagnostic analyses of the meteorological conditions during the special wake vortex deployments at Memphis and Dallas during August 95 and September 97, respectively. These two goals are interdependent because a thorough understanding of the atmospheric dynamical processes which produced the unique meteorology during the Memphis and Dallas deployments will help us design a prognostic system for the planetary boundary layer (PBL) which could be utilized to support the meteorological subsystem within AVOSS. Concerning the primary goal, TAPPS Stage 2 was tested on the Memphis data and is about to be tested on the Dallas case studies. Furthermore, benchmark tests have been undertaken to select the appropriate platform to run TAPPS in real time in support of the DFW AVOSS system. In addition, a technique to improve the initial data over the region surrounding Dallas was also tested and modified for potential operational use in TAPPS. The secondary goal involved several sensitivity simulations and comparisons to Memphis observational data sets in an effort to diagnose what specific atmospheric phenomena were occurring which may have impacted the dynamics of atmospheric wake vortices.

  15. A Computer-Mediated Instruction System, Applied to Its Own Operating System and Peripheral Equipment.

    Science.gov (United States)

    Winiecki, Roger D.

    Each semester students in the School of Health Sciences of Hunter College learn how to use a computer, how a computer system operates, and how peripheral equipment can be used. To overcome inadequate computer center services and equipment, programed subject matter and accompanying reference material were developed. The instructional system has a…

  16. Operational characteristics optimization of human-computer system

    OpenAIRE

    Zulquernain Mallick; Irfan Anjum Badruddin magami; Khaleed Hussain Tandur

    2010-01-01

    Computer operational parameters have a vital influence on the operator's efficiency from a readability viewpoint. Four parameters, namely font, text/background color, viewing angle and viewing distance, are analyzed. The text reading task, in the form of English text, was presented on the computer screen to the participating subjects and their performance, measured in terms of number of words read per minute (NWRPM), was recorded. For the purpose of optimization, the Taguchi method is u...

  17. Kajian dan Implementasi Real Time Operating System pada Single Board Computer Berbasis ARM

    OpenAIRE

    Wiedjaja; Handi Muljoredjo; Jonathan Lukas; Benyamin Christian; Luis Kristofel

    2014-01-01

    Operating System is an important software in computer system. For personal and office use, a general-purpose operating system is sufficient. However, critical-mission applications such as nuclear power plants and braking systems in cars (auto braking systems), which need a high level of reliability, require an operating system which operates in real time. The study aims to assess the implementation of a Linux-based operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with ...

  18. Computational Efficiency of Economic MPC for Power Systems Operation

    DEFF Research Database (Denmark)

    Standardi, Laura; Poulsen, Niels Kjølstad; Jørgensen, John Bagterp

    2013-01-01

    In this work, we propose an Economic Model Predictive Control (MPC) strategy to operate power systems that consist of independent power units. The controller balances the power supply and demand, minimizing production costs. The control problem is formulated as a linear program that is solved...
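
    The balancing problem described above is a linear program. In the special single-period case with no ramp or storage constraints, its optimum reduces to merit-order dispatch, which the following hypothetical sketch illustrates (the units and costs are made up, not data from the paper):

```python
def dispatch(demand, units):
    """Single-period economic dispatch: fill demand from the cheapest
    units first (merit order).

    units: list of (cost_per_mwh, capacity_mw).  With no inter-period
    constraints this greedy order is the LP optimum.  Returns outputs
    aligned with `units`.
    """
    out = [0.0] * len(units)
    remaining = demand
    for idx in sorted(range(len(units)), key=lambda i: units[i][0]):
        take = min(remaining, units[idx][1])
        out[idx] = take
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return out

# Three independent units: (cost per MWh, capacity MW)
units = [(50.0, 40.0), (20.0, 30.0), (35.0, 50.0)]
plan = dispatch(60.0, units)   # cheapest 30 MW first, then the mid-cost unit
```

    The full Economic MPC adds dynamics and horizon coupling, so the general problem still requires an LP solver rather than this greedy shortcut.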

  19. Operational characteristics optimization of human-computer system

    Directory of Open Access Journals (Sweden)

    Zulquernain Mallick

    2010-09-01

    Full Text Available Computer operational parameters have a vital influence on the operator's efficiency from a readability viewpoint. Four parameters, namely font, text/background color, viewing angle and viewing distance, are analyzed. The text reading task, in the form of English text, was presented on the computer screen to the participating subjects and their performance, measured in terms of number of words read per minute (NWRPM), was recorded. For the purpose of optimization, the Taguchi method is used to find the optimal parameters to maximize operators' efficiency in performing the readability task. Two levels of each parameter have been considered in this study. An orthogonal array, the signal-to-noise (S/N) ratio and the analysis of variance (ANOVA) were employed to investigate the operators' performance/efficiency. Results showed that with Times Roman font, black text on white background, a 40 degree viewing angle and a 60 cm viewing distance, the subjects were quite comfortable and efficient and read the maximum number of words per minute. Text/background color was the dominant parameter, with a percentage contribution of 76.18% towards the laid down objective, followed by font type at 18.17%, viewing distance at 7.04% and viewing angle at 0.58%. Experimental results are provided to confirm the effectiveness of this approach.
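
    The larger-the-better signal-to-noise ratio used in such Taguchi analyses can be computed directly. The NWRPM readings below are hypothetical, not the study's data:

```python
import math

def sn_larger_is_better(values):
    """Taguchi S/N ratio for a larger-the-better response (here, words
    read per minute):  S/N = -10 * log10(mean(1 / y^2))."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

# Hypothetical NWRPM readings for two settings of text/background colour
black_on_white = [182, 190, 188]
white_on_blue = [120, 131, 118]
sn_a = sn_larger_is_better(black_on_white)
sn_b = sn_larger_is_better(white_on_blue)
# In Taguchi analysis the setting with the higher S/N ratio is preferred
```

    Comparing S/N ratios level by level across the orthogonal array is what yields the percentage contributions reported in the abstract.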

  20. Developing an ARM based GNU/Linux Operating System for Single Board Computer – Cubietruck

    OpenAIRE

    S.Pravin Kumar; Pradeep, G; G.Nantha Kumar; C.Dhivya Devi

    2014-01-01

    This paper presents the design and implementation of a Monolithic-Kernel Single Board Computer (SBC) - Cubietruck GNU/Linux-like operating system on the ARM platform in technical detail, including boot loader design - UBOOT, building the kernel - uImage, design of the root file system and the init process. The Single Board Computer Operating System (SBC OS) is developed on the Linux platform with the GNU tool chain. The SBC OS can be used for both SBC system application development and related curriculum teaching. Single Board ...

  1. Computing with Logic as Operator Elimination: The ToyElim System

    CERN Document Server

    Wernhard, Christoph

    2011-01-01

    A prototype system is described whose core functionality is, based on propositional logic, the elimination of second-order operators, such as Boolean quantifiers and operators for projection, forgetting and circumscription. This approach allows many representational and computational tasks in knowledge representation - for example the computation of abductive explanations and of models with respect to logic programming semantics - to be expressed in a uniform operational system, backed by a uniform classical semantic framework.
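
    The simplest instance of such operator elimination is forgetting one propositional variable via Shannon expansion: exists p. F == F[p=True] or F[p=False]. The sketch below is a generic illustration of that identity, not ToyElim's API; formulas are represented as plain Python predicates over assignments:

```python
def forget(f, var):
    """Eliminate (forget) a propositional variable by Shannon expansion:
    (exists var. f) holds iff f holds with var=True or with var=False.
    f maps a dict {name: bool} to bool; returns a new such function."""
    def g(assignment):
        a1 = dict(assignment)
        a1[var] = True
        a0 = dict(assignment)
        a0[var] = False
        return f(a1) or f(a0)
    return g

# F = (p or q) and (not p or r); forgetting p should yield q or r
F = lambda a: (a["p"] or a["q"]) and ((not a["p"]) or a["r"])
G = forget(F, "p")
```

    Checking G against the truth table of q or r confirms the elimination; projection and circumscription are built from repeated applications of operators of this kind.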

  2. Medical computing in the 1980s. Operating system and programming language issues.

    Science.gov (United States)

    Greenes, R A

    1983-06-01

    Operating systems and programming languages differ widely in their suitability for particular applications. The diversity of medical computing needs demands a diversity of solutions. Compounding this diversity is the decentralization caused by the evolution of local computing systems for local needs. Relevant current trends in computing include increased emphasis on decentralization, growing capabilities for interconnection of diverse systems, and development of common data base and file server capabilities. In addition, standardization and hardware independence of operating systems, as well as programming languages and the development of programmerless systems, continue to widen application opportunities.

  3. Developing an ARM based GNU/Linux Operating System for Single Board Computer – Cubietruck

    Directory of Open Access Journals (Sweden)

    S.Pravin Kumar

    2014-12-01

    Full Text Available The design and implementation of a Monolithic-Kernel Single Board Computer (SBC) - Cubietruck GNU/Linux-like operating system on the ARM platform in technical detail, including boot loader design - UBOOT, building the kernel - uImage, design of the root file system and the init process. The Single Board Computer Operating System (SBC OS) is developed on the Linux platform with the GNU tool chain. The SBC OS can be used for both SBC system application development and related curriculum teaching. Single-board-computer and embedded-system related curriculums have already become necessary components for undergraduate computer majors. The system is mainly designed for the purpose of technical research and curriculum-based teaching; it is readable and suited for students to learn and study, and its source code can be provided to students, guiding them to design a tiny operating system on the ARM platform from scratch.

  4. Coordinate Systems, Numerical Objects and Algorithmic Operations of Computational Experiment in Fluid Mechanics

    Directory of Open Access Journals (Sweden)

    Degtyarev Alexander

    2016-01-01

    Full Text Available The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of an explicit numerical scheme, which is an important condition for increasing the efficiency of the algorithms developed by numerical procedures with natural parallelism. The paper examines the main objects and operations that let you manage computational experiments and monitor the status of the computation process. Special attention is given to (a) realization of tensor representations of numerical schemes for direct simulation; (b) realization of representation of large particles of a continuous medium motion in two coordinate systems (global and mobile); (c) computing operations in the projections of coordinate systems, and direct and inverse transformation in these systems. Particular attention is paid to the use of hardware and software of modern computer systems.

  5. Computer controlled operation of a two-engine xenon ion propulsion system

    Science.gov (United States)

    Brophy, John R.

    1987-01-01

    The development and testing of a computer control system for a two-engine xenon ion propulsion module is described. The computer system controls all aspects of the propulsion module operation including: start-up, steady-state operation, throttling and shutdown of the engines; start-up, operation and shutdown of the central neutralizer subsystem; control of the gimbal system for each engine; and operation of the valves in the propellant storage and distribution system. The most important engine control algorithms are described in detail. These control algorithms provide flexibility in the operation and throttling of ion engines which has never before been possible. This flexibility is made possible in large part through the use of flow controllers which maintain the total flow rate of propellant into the engine at the proper level. Data demonstrating the throttle capabilities of the engine and control system are presented.
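
    A flow controller of the kind described - holding total propellant flow at a commanded level - can be sketched as a simple PI loop. The gains, setpoint, units and toy plant model below are illustrative assumptions, not values or algorithms from the paper:

```python
def make_pi_controller(kp, ki, setpoint, dt):
    """PI loop driving a measured flow toward a commanded setpoint.
    Gains and setpoint are hypothetical, chosen only for illustration."""
    integral = 0.0
    def step(measured):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt          # accumulate error for the I term
        return kp * error + ki * integral   # valve command
    return step

# Drive a toy first-order flow model toward a 25 sccm setpoint
ctrl = make_pi_controller(kp=0.5, ki=0.8, setpoint=25.0, dt=0.1)
flow = 0.0
for _ in range(200):
    u = ctrl(flow)
    flow += 0.1 * (u - flow)   # crude plant: first-order lag on the command
```

    The integral term removes steady-state error, which is what lets such a controller hold total flow rate constant while the engine is throttled.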

  6. Modelled operation of the Shetland Islands power system comparing computational and human operators' load forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Hill, D.C. [University Coll. of North Wales, Menai Bridge (United Kingdom). School of Ocean Science; Infield, D.G. [Loughborough Univ. of Technology (United Kingdom). Dept. of Electronic and Electrical Engineering

    1995-11-01

    A load forecasting technique, based upon an autoregressive (AR) method is presented. Its use for short term load forecasting is assessed by direct comparison with real forecasts made by human operators of the Lerwick power station on the Shetland Islands. A substantial improvement in load prediction, as measured by a reduction of RMS error, is demonstrated. Shetland has a total installed capacity of about 68 MW, and an average load (1990) of around 20 MW. Although the operators could forecast the load for a few distinct hours better than the AR method, results from simulations of the scheduling and operation of the generating plant show that the AR forecasts provide increased overall system performance. A detailed model of the island power system, which includes plant scheduling, was run using the AR and Lerwick operators' forecasts as input to the scheduling routine. A reduction in plant cycling, underloading and fuel consumption was obtained using the AR forecasts rather than the operators' forecasts in simulations over a 28 day study period. It is concluded that the load forecasting method presented could be of benefit to the operators of such mesoscale power systems. (author)
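
    An autoregressive load forecast of the kind compared here can be sketched with an AR(1) model fitted by least squares. The paper's model order and data are not reproduced here; the hourly load series below is synthetic:

```python
def fit_ar1(series):
    """Fit a demeaned AR(1) model x_t - m = phi*(x_{t-1} - m) + e_t
    by least squares; return (mean m, coefficient phi)."""
    m = sum(series) / len(series)
    x = [v - m for v in series]
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return m, num / den

def forecast(series, m, phi, steps):
    """Iterate the AR(1) recursion to forecast `steps` hours ahead."""
    last = series[-1]
    out = []
    for _ in range(steps):
        last = m + phi * (last - m)
        out.append(last)
    return out

# Synthetic hourly load (MW): decaying excursion above a ~20 MW base load
load = [20 + 4 * (0.8 ** k) for k in range(12)]
m, phi = fit_ar1(load)
next_hours = forecast(load, m, phi, 3)   # multi-step-ahead forecasts
```

    With 0 < phi < 1 the forecasts relax toward the mean load, which is the behaviour the scheduling routine consumes hour by hour.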

  7. Computer algebra and operators

    Science.gov (United States)

    Fateman, Richard; Grossman, Robert

    1989-01-01

    The symbolic computation of operator expansions is discussed. Some of the capabilities that prove useful when performing computer algebra computations involving operators are considered. These capabilities may be broadly divided into three areas: the algebraic manipulation of expressions from the algebra generated by operators; the algebraic manipulation of the actions of the operators upon other mathematical objects; and the development of appropriate normal forms and simplification algorithms for operators and their actions. Brief descriptions are given of the computer algebra computations that arise when working with various operators and their actions.

  8. Building a computer-aided design capability using a standard time share operating system

    Science.gov (United States)

    Sobieszczanski, J.

    1975-01-01

    The paper describes how an integrated system of engineering computer programs can be built using a standard commercially available operating system. The discussion opens with an outline of the auxiliary functions that an operating system can perform for a team of engineers involved in a large and complex task. An example of a specific integrated system is provided to explain how the standard operating system features can be used to organize the programs into a simple and inexpensive but effective system. Applications to an aircraft structural design study are discussed to illustrate the use of an integrated system as a flexible and efficient engineering tool. The discussion concludes with an engineer's assessment of an operating system's capabilities and desirable improvements.

  9. The Relationship between Chief Information Officer Transformational Leadership and Computing Platform Operating Systems

    Science.gov (United States)

    Anderson, George W.

    2010-01-01

    The purpose of this study was to relate the strength of Chief Information Officer (CIO) transformational leadership behaviors to 1 of 5 computing platform operating systems (OSs) that may be selected for a firm's Enterprise Resource Planning (ERP) business system. Research shows executive leader behaviors may promote innovation through the use of…

  10. DETERMINATION OF OPERATING FIELDS OF TOLERANCES OF HYDRAULIC SYSTEMS PARAMETERS FOR AIRCRAFT BOARD COMPUTER COMPLEX

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available To determine the operating fields of the tolerances of hydraulic system parameters for various conditions of work and phases of flight, mathematical relationships are given, and the results obtained in Mathcad are presented in analytical form for the on-board computer system.

  12. Kajian dan Implementasi Real Time Operating System pada Single Board Computer Berbasis ARM

    Directory of Open Access Journals (Sweden)

    Wiedjaja

    2014-06-01

    Full Text Available Operating System is an important software in computer system. For personal and office use, a general-purpose operating system is sufficient. However, critical-mission applications such as nuclear power plants and braking systems in cars (auto braking systems), which need a high level of reliability, require an operating system which operates in real time. The study aims to assess the implementation of a Linux-based operating system on an ARM-based Single Board Computer (SBC), namely the Pandaboard ES with the dual-core ARM Cortex-A9, TI OMAP 4460. Research was conducted by implementing the general-purpose OS Ubuntu 12.04 OMAP4-armhf and the RTOS Linux 3.4.0-rt17+ on the PandaBoard ES, and then comparing the latency of each OS under no-load and full-load conditions. The results obtained show that the maximum latency of the RTOS under full load is 45 us, much smaller than the maximum value for the GPOS at full load of 17,712 us. The lower latency demonstrates that the RTOS is much better than the GPOS at completing a process within a given period of time.
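
    A latency comparison of this kind can be reproduced in spirit with a periodic-wakeup test. This generic sketch measures sleep jitter on whatever OS it runs on; it is not the study's benchmark code, and the period and sample count are arbitrary:

```python
import time

def timer_latencies(period_s=0.001, samples=200):
    """Wake-up latency of a periodic sleep: how late each wake-up is
    relative to its deadline.  On a stock GPOS this jitter can be large
    and unbounded; an RTOS is designed to bound it."""
    lat = []
    deadline = time.monotonic()
    for _ in range(samples):
        deadline += period_s
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)                   # sleep until the deadline
        lat.append(max(0.0, time.monotonic() - deadline))
    return lat

lat = timer_latencies()
worst_us = max(lat) * 1e6   # worst-case observed wake-up latency, microseconds
```

    Running such a probe under no load and under full load is the essence of the comparison the study performed between the GPOS and RTOS kernels.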

  13. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    Science.gov (United States)

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water-, energy-, space-, and cost-efficient system for growing plants in constrained spaces or land-exhausted areas. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., a peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K⁺, Ca²⁺, NO₃⁻ and Cl⁻ during tomato plant growth in order to assure optimal nutritional uptake and tomato production. Copyright © 2013 Elsevier Ltd. All rights reserved.
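
    The calibration step for an ISE can be illustrated with the textbook two-point Nernstian fit: electrode potential varies linearly with the logarithm of ion activity, E = E0 + S·log10(a), so two standards fix the slope S and intercept E0, after which an unknown sample is read back out. A hypothetical sketch (the standard concentrations, potentials, and the ideal 59.16 mV/decade slope for a monovalent cation such as K⁺ are illustrative, not values from the paper):

```python
import math

def calibrate_two_point(a1, e1, a2, e2):
    """Fit E = e0 + slope * log10(a) through two calibration standards."""
    slope = (e2 - e1) / (math.log10(a2) - math.log10(a1))
    e0 = e1 - slope * math.log10(a1)
    return e0, slope

def activity_from_potential(e, e0, slope):
    """Invert the fitted Nernst line to recover the ion activity."""
    return 10 ** ((e - e0) / slope)

# Two hypothetical K+ standards one decade apart: 1 mM and 10 mM.
e0, slope = calibrate_two_point(1e-3, 100.0, 1e-2, 159.16)
# slope is 59.16 mV/decade, the ideal Nernstian value for a monovalent ion.
unknown = activity_from_potential(129.58, e0, slope)
```

    Automating this fit after each cleaning cycle is what lets such a platform run unattended between manual calibrations.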

  14. Effects of Computer Reservation System in the Operations of Travel Agencies

    Directory of Open Access Journals (Sweden)

    Sevillia S. Felicen

    2016-11-01

    Full Text Available In the travel industry, the main tool used is the computerized booking system, now known as the Global Distribution System (GDS). This paper aimed to determine the effect of using a Computer Reservation System (CRS) among travel agencies in terms of technical, human and financial aspects. The findings will help the internship office include the identified travel agencies among the linkages where students are deployed for internship, and can also be utilized in the travel and tour operations course covering computer reservation systems. The descriptive method of research was used, with managers and users/staff of 20 travel agencies as participants. A questionnaire was the main data-gathering instrument, with percentage, frequency and weighted mean as statistical tools. The Abacus system is the computer reservation system used by all the travel agencies in Batangas. All the agencies offer services such as domestic and international hotel reservations, domestic and international ticketing, and package tours. The CRS can connect guests to all forms of travel; its built-in security features can improve an agency's efficiency and productivity.

  15. Aspects of operating systems and software engineering with parallel computer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Foessmeier, R.; Ruede, U.; Zenger, C.

    1988-05-01

    Making efficient use of parallel computer architectures generally requires special programming techniques. Usually, non-standardized parallel constructs are added to a traditional programming language, which reduces program portability and adds extra difficulty to programming. Coarse-grain parallelism can be exploited by parallel processes. In this field the UNIX operating system - now in widespread use - offers easy-to-use means for describing parallelism, sufficient for basic process synchronisation and communication. The problem structuring required for this kind of parallelism often contributes to the versatility and clarity of the programs. As an example, the elimination phase of solving a linear system is parallelized.
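
    The coarse-grain UNIX facilities the abstract refers to - process creation plus a pipe for communication and a wait for synchronisation - can be sketched in a few lines (shown here in Python on a POSIX system; the partial-sum workload is an arbitrary stand-in for one slice of a parallel computation):

```python
import os

def parallel_sum(data):
    """Sum `data` with two processes: the child handles the second half
    and sends its partial result back to the parent over a pipe."""
    mid = len(data) // 2
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                       # child: compute and write partial sum
        os.close(r)
        os.write(w, str(sum(data[mid:])).encode())
        os.close(w)
        os._exit(0)
    os.close(w)                        # parent: compute own half, read child's
    child_part = int(os.read(r, 64))
    os.close(r)
    os.waitpid(pid, 0)                 # basic synchronisation: wait for child
    return sum(data[:mid]) + child_part

print(parallel_sum(list(range(10))))   # -> 45
```

    The same fork/pipe/wait triple is all that basic process-level parallelism under UNIX requires, which is the "easy-to-use means" the paper exploits.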

  16. Operating systems

    CERN Document Server

    Tsichritzis, Dionysios C; Rheinboldt, Werner

    1974-01-01

    Operating Systems deals with the fundamental concepts and principles that govern the behavior of operating systems. Many issues regarding the structure of operating systems, including the problems of managing processes, processors, and memory, are examined. Various aspects of operating systems are also discussed, from input-output and files to security, protection, reliability, design methods, performance evaluation, and implementation methods. Comprised of 10 chapters, this volume begins with an overview of what constitutes an operating system, followed by a discussion on the definition and pr

  17. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L.; Camp, William J.

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  18. Development of computer program for simulation of an ice bank system operation, Part I: Mathematical modelling

    Energy Technology Data Exchange (ETDEWEB)

    Halasz, Boris; Grozdek, Marino; Soldo, Vladimir [Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Ivana Lucica 5, 10 000 Zagreb (Croatia)

    2009-09-15

    Since the use of standard engineering methods in the process of ice bank performance evaluation offers neither adequate flexibility nor accuracy, the aim of this research was to provide a powerful tool for the industrial design of an ice storage system, allowing the various design parameters and system arrangements to be accounted for over a wide range of time-varying operating conditions. In this paper the development of a computer application for predicting the operation of an ice bank system is presented. Static, indirect, cool thermal storage systems with external ice-on-coil building/melting were considered. The mathematical model was developed by means of energy and mass balance relations for each component of the system and is basically divided into two parts: the model of the ice storage system and the model of the refrigeration unit. Heat transfer processes in the ice silo were modelled using empirical correlations, while the performance of the refrigeration unit components was based on manufacturers' data. Programming and application design were done in the Fortran 95 language standard. Input of data is enabled through drop-down menus and dialog boxes, while the results are presented via figures, diagrams and data (ASCII) files. In addition, to demonstrate the necessity of developing a simulation program, a case study was performed. The simulation results clearly indicate that no simple engineering methods or rule-of-thumb principles can properly validate the performance of an ice bank system. (author)

  19. Computer applications in railway operation

    Directory of Open Access Journals (Sweden)

    Mohamed Hafez Fahmy Aly

    2016-06-01

    Full Text Available One of the main goals of the railway simulation technique is the formation of a model that can easily be tested for any desired changes and modifications in infrastructure, control system, or train operations in order to improve the network operation and its productivity. RailSys 3.0 is a German railway simulation program that addresses this goal. In this paper, a railway network operation with different suggested modifications in infrastructure, rolling stock, and control system has been studied, optimized, and evaluated using RailSys 3.0. The simulation program was applied to the ABO-KIR railway line in Alexandria, as a case study, to assess the impact of changing the track configuration, operating and control systems on performance measures, the timetable, track capacity and productivity. Simulation input, such as the track elements and the train and operation components of the ABO-KIR railway line, was entered into the computer program to construct the simulation model. The simulation process was carried out for the existing operation system to construct a graphical model of the case-study track, including line alignment and train movements, as well as to evaluate the existing operation system. To improve the operation system of the railway line, eight different innovative alternatives were generated, analyzed and evaluated. Finally, different track measures to improve the operation system of the ABO-KIR railway line have been introduced.

  20. Operating System Security

    CERN Document Server

    Jaeger, Trent

    2008-01-01

    Operating systems provide the fundamental mechanisms for securing computer processing. Since the 1960s, operating systems designers have explored how to build "secure" operating systems - operating systems whose mechanisms protect the system against a motivated adversary. Recently, the importance of ensuring such security has become a mainstream issue for all operating systems. In this book, we examine past research that outlines the requirements for a secure operating system and research that implements example systems that aim for such requirements. For system designs that aimed to

  1. Intraoperative computed tomography with integrated navigation system in a multidisciplinary operating suite.

    Science.gov (United States)

    Uhl, Eberhard; Zausinger, Stefan; Morhard, Dominik; Heigl, Thomas; Scheder, Benjamin; Rachinger, Walter; Schichor, Christian; Tonn, Jörg-Christian

    2009-05-01

    We report our preliminary experience in a prospective series of patients with regard to feasibility, work flow, and image quality using a multislice computed tomographic (CT) scanner combined with a frameless neuronavigation system (NNS). A sliding gantry 40-slice CT scanner was installed in a preexisting operating room. The scanner was connected to a frameless infrared-based NNS. Image data was transferred directly from the scanner into the navigation system. This allowed updating of the NNS during surgery by automated image registration based on the position of the gantry. Intraoperative CT angiography was possible. The patient was positioned on a radiolucent operating table that fits within the bore of the gantry. During image acquisition, the gantry moved over the patient. This table allowed all positions and movements like any normal operating table without compromising the positioning of the patient. For cranial surgery, a carbon-made radiolucent head clamp was fixed to the table. Experience with the first 230 patients confirms the feasibility of intraoperative CT scanning (136 patients with intracranial pathology, 94 patients with spinal lesions). After a specific work flow, interruption of surgery for intraoperative scanning can be limited to 10 to 15 minutes in cranial surgery and to 9 minutes in spinal surgery. Intraoperative imaging changed the course of surgery in 16 of the 230 cases either because control CT scans showed suboptimal screw position (17 of 307 screws, with 9 in 7 patients requiring correction) or that tumor resection was insufficient (9 cases). Intraoperative CT angiography has been performed in 7 cases so far with good image quality to determine residual flow in an aneurysm. Image quality was excellent in spinal and cranial base surgery. The system can be installed in a preexisting operating environment without the need for special surgical instruments. 
It increases the safety of the patient and the surgeon without necessitating a change

  2. Implementation of a Real-Time, Distributed Operating System for a Multiple Computer System.

    Science.gov (United States)

    1982-06-01

    The multiple computer system is built from iSBC 86/12A single-board computers (SBCs), based on the Intel 8086 16-bit microprocessor. Detailed descriptions are given of all the components of the SBC, including the segmentation registers and physical address generation; the extra segment register (ES) is typically used for external or shared data, and data storage.

  3. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth System Prediction Capability Becomes Operational

    Science.gov (United States)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology, Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will

  4. A state-of-the-art report on software operation structure of the digital control computer system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bong Kee; Lee, Kyung Hoh; Joo, Jae Yoon; Jang, Yung Woo; Shin, Hyun Kook [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

    CANDU Nuclear Power Plants including Wolsong 1 and 2/3/4 are controlled by a real-time plant control computer system. This report was written to provide an overview on the station control computer software which belongs to one of the most advanced real-time computing application area, along with the Fuel Handling Machine design concepts. The combination of well designed control computer and Fuel Handling Machine allow changing fuel bundles while the plant is in operation. Design methodologies and software structure are discussed along with the interface between the two systems. 29 figs., 2 tabs., 20 refs. (Author).

  5. Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment

    Science.gov (United States)

    He, Aiguo

    2011-01-01

    Computer system is useful for improving real time and interactive distance education activities. Especially in the case that a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions over the virtual classrooms. The problem is that within…

  6. YASS: A System Simulator for Operating System and Computer Architecture Teaching and Learning

    Science.gov (United States)

    Mustafa, Besim

    2013-01-01

    A highly interactive, integrated and multi-level simulator has been developed specifically to support both the teachers and the learners of modern computer technologies at undergraduate level. The simulator provides a highly visual and user configurable environment with many pedagogical features aimed at facilitating deep understanding of concepts…

  7. Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation

    Science.gov (United States)

    Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.

    2000-01-01

    A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected; it required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantifying the influence of mesh dependence, iterative convergence, and artificial viscosity on the resulting statistical model. Thirteen statistically significant effects were observed to influence rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process directly downstream of the rocket nozzle can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
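
    The quoted design space is a three-level full factorial in six variables: 3^6 = 729 candidate runs, from which a D-optimal algorithm selects the 36 most informative. Enumerating the candidate set is a one-liner (the factor names are invented placeholders, not the study's actual variables):

```python
from itertools import product

# Six factors (four geometric, two flowfield), each at three coded levels.
levels = (-1, 0, 1)
factors = ["g1", "g2", "g3", "g4", "f1", "f2"]  # hypothetical labels

# Every combination the D-optimal search would choose 36 runs from.
candidates = list(product(levels, repeat=len(factors)))
print(len(candidates))  # -> 729
```

    Running 36 CFD solutions instead of 729 is the entire economic argument for the design-of-experiments approach the paper takes.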

  8. Grid connected integrated community energy system. Phase II: final state 2 report. Cost benefit analysis, operating costs and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    1978-03-22

    A grid-connected Integrated Community Energy System (ICES) with a coal-burning power plant located on the University of Minnesota campus is planned. The cost benefit analysis performed for this ICES, the cost accounting methods used, and a computer simulation of the operation of the power plant are described. (LCL)

  9. Towards an integral computer environment supporting system operations analysis and conceptual design

    Science.gov (United States)

    Barro, E.; Delbufalo, A.; Rossi, F.

    1994-01-01

    VITROCISET has in house developed a prototype tool named System Dynamic Analysis Environment (SDAE) to support system engineering activities in the initial definition phase of a complex space system. The SDAE goal is to provide powerful means for the definition, analysis, and trade-off of operations and design concepts for the space and ground elements involved in a mission. For this purpose SDAE implements a dedicated modeling methodology based on the integration of different modern (static and dynamic) analysis and simulation techniques. The resulting 'system model' is capable of representing all the operational, functional, and behavioral aspects of the system elements which are part of a mission. The execution of customized model simulations enables: the validation of selected concepts with respect to mission requirements; the in-depth investigation of mission specific operational and/or architectural aspects; and the early assessment of performances required by the system elements to cope with mission constraints and objectives. Due to its characteristics, SDAE is particularly tailored for nonconventional or highly complex systems, which require a great analysis effort in their early definition stages. SDAE runs under PC-Windows and is currently used by VITROCISET system engineering group. This paper describes the SDAE main features, showing some tool output examples.

  10. Computer systems

    Science.gov (United States)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  11. Modern operating systems

    CERN Document Server

    Tanenbaum, Andrew S

    2015-01-01

    Modern Operating Systems, Fourth Edition, is intended for introductory courses in Operating Systems in Computer Science, Computer Engineering, and Electrical Engineering programs. It also serves as a useful reference for OS professionals. The widely anticipated revision of this worldwide best-seller incorporates the latest developments in operating systems (OS) technologies. The Fourth Edition includes up-to-date materials on relevant OSs. Tanenbaum also provides information on current research based on his experience as an operating systems researcher. Modern Operating Systems, Third Edition was the recipient of the 2010 McGuffey Longevity Award, which recognizes textbooks whose excellence has been demonstrated over time (http://taaonline.net/index.html). Teaching and Learning Experience: This program will provide a better teaching and learning experience for you and your students. It will help: Provide Practical Detail on the Big Picture Concepts: A clear and entertaining writing s...

  12. LHCb: LHCb Distributed Computing Operations

    CERN Multimedia

    Stagni, F

    2011-01-01

    The proliferation of tools for monitoring both activities and infrastructure, together with the pressing need for prompt reaction to problems impacting data taking, data reconstruction, data reprocessing and user analysis, led to the need to better organize the huge amount of information available. The monitoring system for LHCb Grid Computing relies on many heterogeneous and independent sources of information offering different views for a better understanding of problems, while an operations team and defined procedures have been put in place to handle them. This work summarizes the state of the art of LHCb Grid operations, emphasizing the reasons that led to various choices and the tools currently in use to run our daily activities. We highlight the most common problems experienced across years of activity on the WLCG infrastructure, the services with their criticality, the procedures in place, the relevant metrics, and the tools available and the ones still missing.

  13. Implications of Using Computer-Based Training on System Readiness and Operating & Support Costs

    Science.gov (United States)

    2014-07-18

    Program Executive Office Integrated Warfare Systems 5 (PEO IWS5) provided a list of ships equipped with the AN/SQQ... Ships on board both before and after implementation of CBT were considered. The initial list provided by PEO IWS5 included all ships of the CG-47, DD-963...

  14. Using the transportable, computer-operated, liquid-scintillator fast-neutron spectrometer system

    Energy Technology Data Exchange (ETDEWEB)

    Thorngate, J.H.

    1988-11-01

    When a detailed energy spectrum is needed for radiation-protection measurements from approximately 1 MeV up to several tens of MeV, organic-liquid scintillators make good neutron spectrometers. However, such a spectrometer requires a sophisticated electronics system and a computer to reduce the spectrum from the recorded data. Recently, we added a Nuclear Instrument Module (NIM) multichannel analyzer and a lap-top computer to the NIM electronics we have used for several years. The result is a transportable fast-neutron spectrometer system. The computer was programmed to guide the user through setting up the system, calibrating the spectrometer, measuring the spectrum, and reducing the data. Measurements can be made over three energy ranges, 0.6--2 MeV, 1.1--8 MeV, or 1.6--16 MeV, with the spectrum presented in 0.1-MeV increments. Results can be stored on a disk, presented in a table, and shown in graphical form. 5 refs., 51 figs.

  15. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    Science.gov (United States)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processing unit (CPU) which services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study supporting this application of queueing models are presented.
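
    For the background workload alone, the standard M/M/1 formulas give the baseline that the periodic interrupts then perturb: utilization ρ = λ/μ, mean number in system N = ρ/(1−ρ), and mean response time W = 1/(μ−λ). A small sketch (the arrival and service rates are illustrative, not from the paper):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics for arrival rate lam < service rate mu."""
    if lam >= mu:
        raise ValueError("queue is unstable unless lam < mu")
    rho = lam / mu                 # server utilization
    n_in_system = rho / (1 - rho)  # mean number in system
    w = 1 / (mu - lam)             # mean response time
    return rho, n_in_system, w

# e.g. 2 background jobs/s arriving, served at 5 jobs/s
rho, n, w = mm1_metrics(lam=2.0, mu=5.0)
print(rho, n, w)
```

    As a consistency check, Little's law holds: N = λW (here 2 × 1/3 = 2/3), which is one way such a model is validated against measurement.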

  16. Operations management system

    Science.gov (United States)

    Brandli, A. E.; Eckelkamp, R. E.; Kelly, C. M.; Mccandless, W.; Rue, D. L.

    1990-01-01

    The objective of an operations management system is to provide an orderly and efficient method to operate and maintain aerospace vehicles. Concepts are described for an operations management system and the key technologies are highlighted which will be required if this capability is brought to fruition. Without this automation and decision aiding capability, the growing complexity of avionics will result in an unmanageable workload for the operator, ultimately threatening mission success or survivability of the aircraft or space system. The key technologies include expert system application to operational tasks such as replanning, equipment diagnostics and checkout, global system management, and advanced man machine interfaces. The economical development of operations management systems, which are largely software, will require advancements in other technological areas such as software engineering and computer hardware.

  17. Computer controlled antenna system

    Science.gov (United States)

    Raumann, N. A.

    1972-01-01

    The application of small computers using digital techniques for operating the servo and control system of large antennas is discussed. The advantages of the system are described. The techniques were evaluated with a forty foot antenna and the Sigma V computer. Programs have been completed which drive the antenna directly without the need for a servo amplifier, antenna position programmer or a scan generator.
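
    Driving an antenna "directly" from the computer means closing the position loop in software: each cycle, read the encoder, compute the pointing error, and command a drive rate. A toy proportional loop over a first-order plant illustrates the idea (the gain, time step, and plant model are invented for illustration, not taken from the Sigma V implementation):

```python
def track_position(target, start=0.0, gain=0.5, dt=0.1, steps=100):
    """Simple software servo: proportional rate command integrated over time."""
    pos = start
    for _ in range(steps):
        error = target - pos   # encoder feedback vs. commanded angle
        rate = gain * error    # proportional rate command, no servo amplifier
        pos += rate * dt       # plant: antenna position integrates the rate
    return pos

final = track_position(target=30.0)
print(final)  # converges close to the 30-degree target
```

    Each iteration of the loop is one pass of the digital control program; the scan generator and position programmer the text mentions become just more software stages ahead of this loop.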

  18. A VIRTUAL OPERATING SYSTEM

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Dennis E.; Scherrer, Deborah K.; Sventek, Joseph S.

    1980-05-01

    Significant progress toward disentangling computing environments from their underlying operating system has been made. An approach is presented that achieves inter-system uniformity at all three levels of user interface - virtual machine, utilities, and command language. Under specifiable conditions, complete uniformity is achievable without disturbing the underlying operating system. The approach permits accurate computation of the cost to move both people and software to a new system. The cost of moving people is zero, and the cost of moving software is equal to the cost of implementing a virtual machine. Efficiency is achieved through optimization of the primitive functions.

  19. Sequential operators in computability logic

    CERN Document Server

    Japaridze, Giorgi

    2007-01-01

    Computability logic (CL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a semantical platform and research program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which it has more traditionally been. Formulas in CL stand for (interactive) computational problems, understood as games between a machine and its environment; logical operators represent operations on such entities; and "truth" is understood as existence of an effective solution, i.e., of an algorithmic winning strategy. The formalism of CL is open-ended, and may undergo series of extensions as the study of the subject advances. The main groups of operators on which CL has been focused so far are the parallel, choice, branching, and blind operators. The present paper introduces a new important group of operators, called sequential. The latter come in the form of sequential conjunction and disjunction, sequential quantifiers, and sequential recurrences. As the name may suggest, the algorithmic ...

  20. Kerman Photovoltaic Power Plant R&D data collection computer system operations and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Rosen, P.B.

    1994-06-01

    The Supervisory Control and Data Acquisition (SCADA) system at the Kerman PV Plant monitors 52 analog, 44 status, 13 control, and 4 accumulator data points in real-time. A Remote Terminal Unit (RTU) polls 7 peripheral data acquisition units that are distributed throughout the plant once every second, and stores all analog, status, and accumulator points that have changed since the last scan. The R&D Computer, which is connected to the SCADA RTU via a RS-232 serial link, polls the RTU once every 5-7 seconds and records any values that have changed since the last scan. A SCADA software package called RealFlex runs on the R&D computer and stores all updated data values taken from the RTU, along with a time-stamp for each, in a historical real-time database. From this database, averages of all analog data points and snapshots of all status points are generated every 10 minutes and appended to a daily file. These files are downloaded via modem by PVUSA/Davis staff every day, and the data is placed into the PVUSA database.
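
    The 10-minute averaging the abstract describes amounts to bucketing timestamped samples on ten-minute boundaries and averaging each bucket. A sketch of that reduction (the function name and sample data are invented for illustration; the actual RealFlex database layout is not described here):

```python
from collections import defaultdict

def ten_minute_averages(samples):
    """samples: iterable of (unix_timestamp, value) for one analog point.
    Returns {bucket_start_timestamp: mean value} over 10-minute windows."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts) // 600 * 600].append(value)  # 600 s = 10 minutes
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(buckets.items())}

# Three samples in the first window, one in the next.
data = [(0, 10.0), (120, 20.0), (599, 30.0), (600, 40.0)]
print(ten_minute_averages(data))  # -> {0: 20.0, 600: 40.0}
```

    Appending one such averaged row per window to a daily file is exactly the reduction that keeps the downloaded data volume manageable over a modem link.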

  1. A web-based remote radiation treatment planning system using the remote desktop function of a computer operating system: a preliminary report.

    Science.gov (United States)

    Suzuki, Keishiro; Hirasawa, Yukinori; Yaegashi, Yuji; Miyamoto, Hideki; Shirato, Hiroki

    2009-01-01

    We developed a web-based, remote radiation treatment planning system which allowed staff at an affiliated hospital to obtain support from a fully staffed central institution. Network security was based on a firewall and a virtual private network (VPN). Client computers were installed at a cancer centre, at a university hospital and at a staff home. We remotely operated the treatment planning computer using the Remote Desktop function built in to the Windows operating system. Except for the initial setup of the VPN router, no special knowledge was needed to operate the remote radiation treatment planning system. There was a time lag that seemed to depend on the volume of data traffic on the Internet, but it did not affect smooth operation. The initial cost and running cost of the system were reasonable.

  2. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. When an adversary accesses a group of virtual machines in an operating network, a computer system creates a copy of that group in a deception network, forming a group of cloned virtual machines. The computer system also creates an emulation of components from the operating network in the deception network, so that the components are accessible to the cloned virtual machines as if those machines were still in the operating network. The computer system then moves the network connections used by the adversary from the virtual machines in the operating network to the cloned virtual machines, protecting the original virtual machines from the adversary's actions.

  3. Laser performance operations model (LPOM): The computational system that automates the setup and performance analysis of the National Ignition Facility

    Science.gov (United States)

    Shaw, Michael; House, Ronald

    2015-02-01

    The National Ignition Facility (NIF) is a stadium-sized facility containing a 192-beam, 1.8 MJ, 500-TW, 351-nm laser system together with a 10-m diameter target chamber with room for many target diagnostics. NIF is the world's largest laser experimental system, providing a national center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. A computational system, the Laser Performance Operations Model (LPOM), has been developed that automates the laser setup process and accurately predicts laser energetics. LPOM uses diagnostic feedback from previous NIF shots to maintain accurate energetics models (gains and losses), as well as links to operational databases to provide `as currently installed' optical layouts for each of the 192 NIF beamlines. LPOM deploys a fully integrated laser physics model, the Virtual Beamline (VBL), in its predictive calculations in order to meet the accuracy requirements of NIF experiments, and to provide the ability to determine the damage risk to optical elements throughout the laser chain. LPOM determines the settings of the injection laser system required to achieve the desired laser output, provides equipment protection, and determines the diagnostic setup. Additionally, LPOM provides real-time post-shot data analysis and reporting for each NIF shot. The LPOM computational system is designed as a multi-host computational cluster (with 200 compute nodes, providing the capability to run full NIF simulations fully in parallel) to meet the demands of both the control systems within a shot cycle and the NIF user community outside of a shot cycle.

  4. Spectral computations for bounded operators

    CERN Document Server

    Ahues, Mario; Limaye, Balmohan

    2001-01-01

    Exact eigenvalues, eigenvectors, and principal vectors of operators with infinite dimensional ranges can rarely be found. Therefore, one must approximate such operators by finite rank operators, then solve the original eigenvalue problem approximately. Serving as both an outstanding text for graduate students and as a source of current results for research scientists, Spectral Computations for Bounded Operators addresses the issue of solving eigenvalue problems for operators on infinite dimensional spaces. From a review of classical spectral theory through concrete approximation techniques to finite dimensional situations that can be implemented on a computer, this volume illustrates the marriage of pure and applied mathematics. It contains a variety of recent developments, including a new type of approximation that encompasses a variety of approximation methods but is simple to verify in practice. It also suggests a new stopping criterion for the QR Method and outlines advances in both the iterative refineme...

  5. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
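The latency bookkeeping described in this record reduces to a simple computation over (fault-injection, discovery) time pairs. A toy illustration with invented numbers (not the paper's VAX 11/780 data):

```python
# Given fault injection times and discovery times (in hours), compute the
# mean error latency and the fraction of errors discovered within one,
# two, and three days, mirroring the statistics quoted in the abstract.

def latency_stats(events, windows_hours=(24, 48, 72)):
    latencies = [found - injected for injected, found in events]
    mean = sum(latencies) / len(latencies)
    fractions = [sum(lat <= w for lat in latencies) / len(latencies)
                 for w in windows_hours]
    return mean, fractions

events = [(0, 5), (0, 20), (0, 30), (0, 60)]  # (injected, discovered) hours
mean, fracs = latency_stats(events)
# mean latency 28.75 h; 50% within a day, 75% within two, 100% within three
```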

  7. Description and theory of operation of the computer by-pass system for the NASA F-8 digital fly-by-wire control system

    Science.gov (United States)

    1978-01-01

    A triplex digital flight control system was installed in a NASA F-8C airplane to provide fail-operate, full-authority control. The triplex digital computers and interface circuitry process the pilot commands and aircraft motion feedback parameters according to the selected control laws, and they output the surface commands as an analog signal to the servoelectronics for position control of the aircraft's power actuators. The system and theory of operation of the computer by-pass and servoelectronics are described, and an automated ground test for each axis is included.

  8. Computer science and operations research

    CERN Document Server

    Balci, Osman

    1992-01-01

    The interface of Operations Research and Computer Science - although elusive to a precise definition - has been a fertile area of both methodological and applied research. The papers in this book, written by experts in their respective fields, convey the current state-of-the-art in this interface across a broad spectrum of research domains which include optimization techniques, linear programming, interior point algorithms, networks, computer graphics in operations research, parallel algorithms and implementations, planning and scheduling, genetic algorithms, heuristic search techniques and dat

  9. ALMA correlator computer systems

    Science.gov (United States)

    Pisano, Jim; Amestica, Rodrigo; Perez, Jesus

    2004-09-01

    We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack-mounted PC controls and monitors the correlator, and a cluster of 17 PCs process the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses and the data processing computer cluster interfaces to the correlator via sixteen dedicated high speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.
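The hard deadlines quoted in this record imply a per-period data budget that is easy to check: 1 gigabyte per second delivered in 16-millisecond periods means 16 MB per period, split across the sixteen dedicated data ports.

```python
# Back-of-envelope check of the ALMA correlator deadlines quoted above.
# Integer arithmetic avoids floating-point rounding in the period math.

aggregate_rate = 1_000_000_000   # bytes per second
period_ms = 16                   # milliseconds per period
ports = 16

bytes_per_period = aggregate_rate * period_ms // 1000  # 16,000,000 bytes
bytes_per_port = bytes_per_period // ports             # 1,000,000 bytes
```

So each data port must absorb roughly one megabyte every 16 ms, which is what makes the processing deadline "hard" for the cluster.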

  10. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Computer systems are a critical component of human society in the 21st century. Economic sector, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, their vulnerability and exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  11. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  13. Pyrolaser Operating System

    Science.gov (United States)

    Roberts, Floyd E., III

    1994-01-01

    Software provides for control and acquisition of data from an optical pyrometer. There are six individual programs in the PYROLASER package. Provides a quick and easy way to set up, control, and program the standard Pyrolaser. Temperature and emissivity measurements are either collected as if the Pyrolaser were in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for posttest analysis. A shell is supplied to allow test-specific macros to be added to the system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun Sparc-series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.

  15. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  16. [Hardware-software system for monitoring parameters and characteristics of X-ray computer tomographs under operation conditions].

    Science.gov (United States)

    Blinov, N N; Zelikman, M I; Kruchinin, S A

    2007-01-01

    The results of testing of hardware and software for monitoring parameters (mean number of CT units, noise, field uniformity, high-contrast spatial resolution, layer width, dose) and characteristics (modulation transfer function) of X-ray computer tomographs are presented. The developed hardware and software are used to monitor the stability of X-ray computer tomograph parameters under operation conditions.

  17. Adaptable structural synthesis using advanced analysis and optimization coupled by a computer operating system

    Science.gov (United States)

    Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1979-01-01

    A finite element program is linked with a general purpose optimization program in a 'programming system' which includes user supplied codes that contain problem dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.

  18. Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks.

    Science.gov (United States)

    Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P; Gerstein, Mark

    2010-05-18

    The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers' continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems.
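The topological contrast this record draws (few regulators fanning out to many targets, versus many callers converging on a few generic routines) can be shown with a toy edge list. The edges below are invented for illustration, not data from the paper:

```python
# Toy contrast of the two hierarchies described above. Each network is an
# edge list (source, sink): regulator->target for the transcriptional
# network, caller->callee for the call graph. We count distinct sources
# and sinks to expose the top-heavy vs. bottom-heavy shapes.

def shape(edges):
    sources = {a for a, b in edges}
    sinks = {b for a, b in edges}
    return len(sources), len(sinks)

regulatory = [("crp", g) for g in ("g1", "g2", "g3", "g4", "g5")]
call_graph = [(f, "memcpy") for f in ("f1", "f2", "f3", "f4", "f5")]

shape(regulatory)   # (1, 5): one global regulator, many targets
shape(call_graph)   # (5, 1): many callers, one generic function
```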

  19. Thermal analysis of blast furnace stove regenerators system operating under staggered cold-bypass arrangement by computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Razelos, Panagiotis; Das, Stayprakash [College of Staten Island, CUNY, New York (United States); Krikkis, Rizos N. [Institute of Chemical Engineering and High Temperature Chemical Processes, Patras (Greece)

    2003-11-01

    A system of blast furnace stove regenerators operating under the "staggered with cold by-pass" arrangement is analyzed. The solution of the partial differential equations, which govern this non-linear heat transfer process, via computer simulation, is presented. The temperature dependence of the solid and fluid properties and heat transfer coefficients is taken into account. An example demonstrates the usefulness of the method, and the comparison with two other stove arrangements, the "series parallel" and "staggered", presented earlier, is discussed. It is shown how this method can be utilized to aid the stove design, or to improve the performance of existing installations. (orig.)

  20. Dawning-1000 PROOS Distributed Operating System

    Institute of Scientific and Technical Information of China (English)

    Sun Ninghui; Liu Wenzhuo; et al.

    1997-01-01

    PROOS is a distributed operating system running on the computing nodes of the massively parallel processing computer Dawning-1000. It is an efficient and easily extendible micro-kernel operating system. It supports the Intel NX message passing interface for communication.

  1. An operational system for subject switching between controlled vocabularies: A computational linguistics approach

    Science.gov (United States)

    Silvester, J. P.; Newton, R.; Klingbiel, P. H.

    1984-01-01

    The NASA Lexical Dictionary (NLD), a system that automatically translates input subject terms to those of NASA, was developed in four phases. Phase One provided Phrase Matching, a context sensitive word-matching process that matches input phrase words with any NASA Thesaurus posting (i.e., index) term or Use reference. Other Use references have been added to enable the matching of synonyms, variant spellings, and some words with the same root. Phase Two provided the capability of translating any individual DTIC term to one or more NASA terms having the same meaning. Phase Three provided NASA terms having equivalent concepts for two or more DTIC terms, i.e., coordinations of DTIC terms. Phase Four was concerned with indexer feedback and maintenance. Although the original NLD construction involved much manual data entry, ways were found to automate nearly all but the intellectual decision-making processes. In addition to finding improved ways to construct a lexical dictionary, applications for the NLD have been found and are being developed.
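The Phase One lookup this record describes is essentially a two-level dictionary: an input phrase is matched first against posting (index) terms, then against "Use" references that redirect synonyms and variant spellings to a preferred term. A minimal sketch with invented vocabulary entries (not actual NASA Thesaurus content):

```python
# Hypothetical sketch of the NLD Phrase Matching idea described above.

POSTING_TERMS = {"remote sensing", "spacecraft guidance"}
USE_REFERENCES = {"tele-detection": "remote sensing",
                  "teledetection": "remote sensing"}

def translate(term):
    """Map an input subject term to a valid index term, or None."""
    term = term.lower().strip()
    if term in POSTING_TERMS:
        return term                  # already a valid posting term
    return USE_REFERENCES.get(term)  # follow a Use reference, if any

translate("Remote Sensing")   # 'remote sensing'
translate("teledetection")    # 'remote sensing' via a Use reference
translate("unknown phrase")   # None
```

Phases Two and Three extend the same table idea to one-to-many and many-to-one mappings between DTIC and NASA terms.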

  2. Computational Linguistics in Military Operations

    Science.gov (United States)

    2010-01-01

    Documentation Exploitation (DOCEX) system is able to distill useful intelligence from multilingual sources eight to ten times faster than traditional...examine and analyze all multilingual speech and text that is available in the information space; allow any user—be it primarily an operational and...personnel and monolingual English-speaking analysts in response to direct or implicit requests.24 Foreign-to-English Translation Goals for foreign

  3. Broadcasting collective operation contributions throughout a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
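The two-phase broadcast this record describes (intra-node exchange first, then inter-node exchange over designated links) can be simulated in plain Python. The data layout below is an illustration of the idea, not the patented implementation:

```python
# Simulate broadcasting collective operation contributions: processors
# first share contributions inside their node, then each node's gathered
# set is forwarded to the other nodes, so every processor ends up holding
# every contribution.

def broadcast_contributions(nodes):
    # nodes: list of nodes; each node is {processor_id: contribution}
    # Phase 1 (intra-node): gather contributions within each node.
    per_node = [dict(node) for node in nodes]
    # Phase 2 (inter-node): merge every node's gathered set.
    everything = {}
    for gathered in per_node:
        everything.update(gathered)
    # Every processor on every node now holds the full contribution set.
    return [{pid: dict(everything) for pid in node} for node in nodes]

nodes = [{"p0": 1, "p1": 2}, {"p2": 3, "p3": 4}]
result = broadcast_contributions(nodes)
# result[1]["p3"] == {"p0": 1, "p1": 2, "p2": 3, "p3": 4}
```

Splitting the exchange this way keeps the fast intra-node path separate from the serialized inter-node links, which is the point of the serial processor transmission sequence in the claim.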

  4. Computer simulation of thermal plant operations

    CERN Document Server

    O'Kelly, Peter

    2012-01-01

    This book describes thermal plant simulation, that is, dynamic simulation of plants which produce, exchange and otherwise utilize heat as their working medium. Directed at chemical, mechanical and control engineers involved with operations, control and optimization and operator training, the book gives the mathematical formulation and use of simulation models of the equipment and systems typically found in these industries. The author has adopted a fundamental approach to the subject. The initial chapters provide an overview of simulation concepts and describe a suitable computer environment.

  5. Computer control for remote wind turbine operation

    Energy Technology Data Exchange (ETDEWEB)

    Manwell, J.F.; Rogers, A.L.; Abdulwahid, U.; Driscoll, J. [Univ. of Massachusetts, Amherst, MA (United States)

    1997-12-31

    Light weight wind turbines located in harsh, remote sites require particularly capable controllers. Based on extensive operation of the original ESI-807 moved to such a location, a much more sophisticated controller than the original one has been developed. This paper describes the design, development and testing of that new controller. The complete control and monitoring system consists of sensor and control inputs, the control computer, control outputs, and additional equipment. The control code was written in Microsoft Visual Basic on a PC type computer. The control code monitors potential faults and allows the turbine to operate in one of eight states: off, start, run, freewheel, low wind shut down, normal wind shutdown, emergency shutdown, and blade parking. The controller also incorporates two "virtual wind turbines," including a dynamic model of the machine, for code testing. The controller can handle numerous situations for which the original controller was unequipped.
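The eight operating states listed in this record suggest a supervisory state machine. A hedged sketch follows; the transition rules and wind-speed thresholds are invented placeholders, not the controller's actual fault logic:

```python
# Toy state machine over the eight turbine states named in the abstract.
# Thresholds (4, 3, 25 m/s) are illustrative assumptions only.

STATES = {"off", "start", "run", "freewheel", "low_wind_shutdown",
          "normal_wind_shutdown", "emergency_shutdown", "blade_parking"}

def next_state(state, wind_speed, fault):
    if fault:
        return "emergency_shutdown"   # any monitored fault overrides
    if state == "off":
        return "start" if wind_speed > 4.0 else "off"
    if state == "start":
        return "run"
    if state == "run":
        if wind_speed < 3.0:
            return "low_wind_shutdown"
        if wind_speed > 25.0:
            return "normal_wind_shutdown"
        return "run"
    return "off"                      # shutdown states recover to off

next_state("off", 6.0, fault=False)   # 'start'
next_state("run", 30.0, fault=False)  # 'normal_wind_shutdown'
next_state("run", 10.0, fault=True)   # 'emergency_shutdown'
```

A table-driven version of the same logic is what the "virtual wind turbine" models would exercise during code testing.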

  6. Unix operating system

    CERN Document Server

    Liu, Yukun; Guo, Liwei

    2011-01-01

    "UNIX Operating System: The Development Tutorial via UNIX Kernel Services" introduces the hierarchical structure, principles, applications, kernel, shells, development, and management of the UNIX operating systems multi-dimensionally and systematically. It clarifies the natural bond between physical UNIX implementation and general operating system and software engineering theories, and presents self-explanatory illustrations for readers to visualize and understand the obscure relationships and intangible processes in the UNIX operating system. This book is intended for engineers and researchers.

  7. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning across computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute an x86-64 machine code, and recommends th...

  8. CMS computing operations during run 1

    CERN Document Server

    Adelman, J; Artieda, J; Bagliese, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzango, F; Fisk, I; Flix, J; Georges, A; Giffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahi, A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; Mccartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; Sligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M

    2014-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  9. The embedded operating system project

    Science.gov (United States)

    Campbell, R. H.

    1985-01-01

    The design and construction of embedded operating systems for real-time advanced aerospace applications were investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.

  10. Computer system identification

    OpenAIRE

    Lesjak, Borut

    2008-01-01

    The concept of computer system identity in computer science bears just as much importance as does the identity of an individual in a human society. Nevertheless, the identity of a computer system is incomparably harder to determine, because there is no standard system of identification we could use and, moreover, a computer system during its life-time is quite indefinite, since all of its regular and necessary hardware and software upgrades soon make it almost unrecognizable: after a number o...

  11. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration; Crepe-Renaudin, Sabine Chrystel; De, Kaushik

    2017-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  12. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Adam Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  13. Network operating system

    Science.gov (United States)

    1985-01-01

    Long-term and short-term objectives for the development of a network operating system for the Space Station are stated. The short-term objective is to develop a prototype network operating system for a 100 megabit/second fiber optic data bus. The long-term objective is to establish guidelines for writing a detailed specification for a Space Station network operating system. Major milestones are noted. Information is given in outline form.

  14. Primitive parallel operations for computational linear algebra

    Energy Technology Data Exchange (ETDEWEB)

    Panetta, J.

    1985-01-01

    This work is a small step toward code portability across parallel and vector machines. The proposal consists of a style of programming and a set of parallel operators built over abstract data types. The objects and operators are directed at the field of Computational Linear Algebra, although the principles of the proposal can be applied to any other area. A subset of the operators was implemented on a 64-processor, distributed-memory MIMD machine. The results show that computationally intensive operators achieve asymptotically optimal speed-ups, while data-movement operators are inefficient, some even intrinsically sequential.
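
The computationally intensive operators in question are typically global reductions. As a hedged illustration (standard textbook material, not code from the paper), a recursive-doubling all-reduce, simulated here over p virtual processors held in a Python list, shows the O(log p) communication structure such operators exploit on a distributed-memory MIMD machine.

```python
def allreduce_recursive_doubling(values, op=lambda a, b: a + b):
    """Simulate a recursive-doubling all-reduce over p = 2^k virtual
    processors: in round r, processor i exchanges with partner i XOR 2^r
    and combines locally; after log2(p) rounds every processor holds the
    full reduction result."""
    p = len(values)
    assert p & (p - 1) == 0, "this sketch assumes p is a power of two"
    vals = list(values)
    step = 1
    while step < p:
        # Each processor combines its value with its partner's in parallel.
        vals = [op(vals[i], vals[i ^ step]) for i in range(p)]
        step *= 2
    return vals
```

With four virtual processors holding [1, 2, 3, 4], two rounds suffice and every processor ends up with the global sum 10.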

  15. Tensor computations in computer algebra systems

    CERN Document Server

    Korolkova, A V; Sevastyanov, L A

    2014-01-01

    This paper considers three types of tensor computations. On their basis, we attempt to formulate criteria that must be satisfied by a computer algebra system dealing with tensors. We briefly overview the current state of tensor computations in different computer algebra systems. The tensor computations are illustrated with appropriate examples implemented in specific systems: Cadabra and Maxima.

  16. Distributed computer control systems

    Energy Technology Data Exchange (ETDEWEB)

    Suski, G.J.

    1986-01-01

    This book focuses on recent advances in the theory, applications and techniques for distributed computer control systems. Contents (partial): Real-time distributed computer control in a flexible manufacturing system. Semantics and implementation problems of channels in a DCCS specification. Broadcast protocols in distributed computer control systems. Design considerations of distributed control architecture for a thermal power plant. The conic toolset for building distributed systems. Network management issues in distributed control systems. Interprocessor communication system architecture in a distributed control system environment. Uni-level homogenous distributed computer control system and optimal system design. A-nets for DCCS design. A methodology for the specification and design of fault tolerant real time systems. An integrated computer control system - architecture design, engineering methodology and practical experience.

  17. Space station operating system study

    Science.gov (United States)

    Horn, Albert E.; Harwell, Morris C.

    1988-01-01

    The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.

  18. Analysis on the Application and Development of Computer Operating Systems

    Institute of Scientific and Technical Information of China (English)

    钱家乐

    2015-01-01

    Since the world's first computer was born in 1946, the computer operating system has progressed from nonexistence to existence, and from simple processes to complex programs. From DOS decades ago, which required users to memorize cumbersome command parameters, to today's systems in which a few mouse clicks complete complex operations, Linux and its derivative systems, and Mac OS, operating systems have advanced by leaps and bounds. With the progress of technology, the functions of operating systems have become ever richer and have permeated every field of society. Based on the development history and applications of operating systems, this paper attempts a brief analysis.

  19. Stochastic power system operation

    OpenAIRE

    Power, Michael

    2010-01-01

    This paper outlines how to economically and reliably operate a power system with high levels of renewable generation, which is stochastic in nature. It outlines the challenges for system operators and suggests tools and methods for meeting this challenge, one of the most fundamental since large-scale power networks were instituted. The Ireland power system, due to its nature and level of renewable generation, is considered as an example in this paper.

  20. Development of a novel computational tool for optimizing the operation of fuel cells systems: Application for phosphoric acid fuel cells

    Science.gov (United States)

    Zervas, P. L.; Tatsis, A.; Sarimveis, H.; Markatos, N. C. G.

    Fuel cells offer a significant and promising clean technology for portable, automotive and stationary applications and, thus, optimization of their performance is of particular interest. In this study, a novel optimization tool is developed that realistically describes and optimizes the performance of fuel cell systems. First, a 3D steady-state detailed model is produced based on computational fluid dynamics (CFD) techniques. Simulated results obtained from the CFD model are used in a second step, to generate a database that contains the fuel and oxidant volumetric rates and utilizations and the corresponding cell voltages. In the third step mathematical relationships are developed between the input and output variables, using the database that has been generated in the previous step. In particular, the linear regression methodology and the radial basis function (RBF) neural network architecture are utilized for producing the input-output "meta-models". Several statistical tests are used to validate the proposed models. Finally, a multi-objective hierarchical Non-Linear Programming (NLP) problem is formulated that takes into account the constraints and limitations of the system. The multi-objective hierarchical approach is built upon two steps: first, the fuel volumetric rate is minimized, recognizing the fact that our first concern is to reduce consumption of the expensive fuel. In the second step, optimization is performed with respect to the oxidant volumetric rate. The proposed method is illustrated through its application for phosphoric acid fuel cell (PAFC) systems.
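
The third step above, fitting an input-output "meta-model" to the CFD-generated database, can be sketched with a toy radial-basis-function interpolator. The Gaussian kernel, the width parameter, and the sample (fuel rate, cell voltage) pairs below are illustrative assumptions of mine, not values or code from the paper.

```python
import math

def gaussian_rbf(r, eps=1.0):
    # Gaussian basis phi(r) = exp(-(eps * r)^2); eps is an assumed width.
    return math.exp(-((eps * r) ** 2))

def solve(a, b):
    # Plain Gaussian elimination with partial pivoting; adequate for a toy demo.
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_rbf(xs, ys, eps=1.0):
    # Solve A w = y with A_ij = phi(|x_i - x_j|); the resulting surrogate
    # reproduces the training database exactly at the sampled points.
    a = [[gaussian_rbf(abs(xi - xj), eps) for xj in xs] for xi in xs]
    w = solve(a, ys)
    return lambda x: sum(wi * gaussian_rbf(abs(x - xi), eps) for wi, xi in zip(w, xs))

# Hypothetical database: fuel volumetric rate vs. cell voltage, standing in for CFD output.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.9, 0.8, 0.65, 0.55, 0.5]
surrogate = fit_rbf(xs, ys)
```

Once fitted, the cheap surrogate rather than the full CFD model is what the optimizer queries, which is the point of the meta-modelling step.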

  1. Optimization of power system operation

    CERN Document Server

    Zhu, Jizhong

    2015-01-01

    This book applies the latest applications of new technologies to power system operation and analysis, including new and important areas that are not covered in the previous edition. Optimization of Power System Operation covers both traditional and modern technologies, including power flow analysis, steady-state security region analysis, security constrained economic dispatch, multi-area system economic dispatch, unit commitment, optimal power flow, smart grid operation, optimal load shed, optimal reconfiguration of distribution network, power system uncertainty analysis, power system sensitivity analysis, analytic hierarchical process, neural network, fuzzy theory, genetic algorithm, evolutionary programming, and particle swarm optimization, among others. New topics such as the wheeling model, multi-area wheeling, and the total transfer capability computation in multiple areas are also addressed. The new edition of this book continues to provide engineers and academics with a complete picture of the optimization of techn...

  2. Operation plan : Alviso System

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This is the operation plan for ponds A2W, A3W, A7, A14, and A16 in the Alviso System at San Francisco Bay NWR Complex. Operating instructions for both winter and...

  3. Fault tolerant computing systems

    CERN Document Server

    Randell, B

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (15 refs).
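
One concrete strategy of the kind surveyed is Randell's recovery-block scheme, which packages error detection (an acceptance test), fault treatment, and error recovery (fallback to an alternate) in one construct. The following minimal Python sketch is my own illustration; the function names and the buggy/correct routines are invented for the example.

```python
def recovery_block(alternates, acceptance_test, *args):
    """Recovery-block scheme: run each alternate in turn and return the
    first result that passes the acceptance test. An exception or a failed
    test triggers recovery by falling through to the next alternate."""
    for alternate in alternates:
        try:
            result = alternate(*args)
        except Exception:
            continue  # damage confined to this alternate; try the next one
        if acceptance_test(result):
            return result
    raise RuntimeError("all alternates exhausted without an acceptable result")

# Example: a faulty primary routine with a correct backup.
def primary(x):
    return x * x  # deliberately buggy: should compute 2 * x

def backup(x):
    return 2 * x

double = recovery_block([primary, backup], lambda r: r == 2 * 3, 3)
```

Here the primary's wrong answer fails the acceptance test, so the block silently falls back to the backup and returns its result.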

  4. Distributed operating systems

    NARCIS (Netherlands)

    Mullender, Sape J.

    1987-01-01

    In the past five years, distributed operating systems research has gone through a consolidation phase. On a large number of design issues there is now considerable consensus between different research groups. In this paper, an overview of recent research in distributed systems is given. In turn, th

  5. Control-based operating system design

    CERN Document Server

    Leva, Alberto; Papadopoulos, AV; Terraneo, F

    2013-01-01

    This book argues that computer operating system components should be conceived from the outset as controllers, synthesised and assessed in the system-theoretical world of dynamic models, and then realised as control algorithms.

  6. Decision Making System for Operative Tasks

    OpenAIRE

    Shakah, G.; Krasnoproshin, V. V.; Valvachev, A. N.

    2009-01-01

    Actual problems of constructing computer systems for operative decision-making tasks are considered. Possibilities of solving these problems on the basis of the theory of active systems (TAS) are investigated.

  7. Computing matrix permanent with collective boson operators

    CERN Document Server

    Huh, Joonsuk

    2016-01-01

    Computing the permanent of a matrix is known to be a classically hard problem whose computational cost grows exponentially as the size of the matrix increases. So far, only a few classical algorithms exist to compute matrix permanents, in deterministic and in randomized ways. By exploiting the series expansion of products of boson operators in terms of collective boson operators, a generalized algorithm for computing permanents is developed that can handle arbitrary matrices with repeated columns and rows. In a particular case, the formula reduces to Glynn's form. Not only can the algorithm be used for a deterministic direct calculation of the matrix permanent, but it can also be expressed as a sampling problem, like Gurvits's randomized algorithm.
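
Glynn's form, to which the paper's formula reduces in a particular case, is easy to state concretely. The sketch below is a standard Python implementation of Glynn's formula, not the paper's generalized boson-operator algorithm; it computes the permanent of an n x n matrix in O(2^n n^2) time.

```python
from itertools import product

def permanent_glynn(a):
    """Permanent of a square matrix via Glynn's formula:
    perm(A) = 2^(1-n) * sum over delta in {+1,-1}^n, delta[0] = +1, of
              (prod_k delta_k) * prod_j (sum_i delta_i * a[i][j])."""
    n = len(a)
    if n == 0:
        return 1
    total = 0
    # delta ranges over {+1, -1}^n with the first component fixed to +1.
    for rest in product((1, -1), repeat=n - 1):
        delta = (1,) + rest
        sign = 1
        for d in delta:
            sign *= d
        prod_cols = 1
        for j in range(n):
            prod_cols *= sum(delta[i] * a[i][j] for i in range(n))
        total += sign * prod_cols
    return total / 2 ** (n - 1)
```

For a 2x2 matrix [[1, 2], [3, 4]] the permanent is 1*4 + 2*3 = 10, which the formula reproduces with only two delta vectors.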

  8. A distributed computing approach to mission operations support. [for spacecraft

    Science.gov (United States)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  9. Microsoft Windows Operating System Essentials

    CERN Document Server

    Carpenter, Tom

    2012-01-01

    A full-color guide to key Windows 7 administration concepts and topics Windows 7 is the leading desktop software, yet it can be a difficult concept to grasp, especially for those new to the field of IT. Microsoft Windows Operating System Essentials is an ideal resource for anyone new to computer administration and looking for a career in computers. Delving into areas such as fundamental Windows 7 administration concepts and various desktop OS topics, this full-color book addresses the skills necessary for individuals looking to break into a career in IT. Each chapter begins with a list of topi

  10. Implementation of NASTRAN on the IBM/370 CMS operating system

    Science.gov (United States)

    Britten, S. S.; Schumacker, B.

    1980-01-01

    The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.

  11. Computer Security: Security operations at CERN (4/4)

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Stefan Lueders, PhD, graduated from the Swiss Federal Institute of Technology in Zurich and joined CERN in 2002. Being initially developer of a common safety system used in all four experiments at the Large Hadron Collider, he gathered expertise in cyber-security issues of control systems. Consequently in 2004, he took over responsibilities in securing CERN's accelerator and infrastructure control systems against cyber-threats. Subsequently, he joined the CERN Computer Security Incident Response Team and is today heading this team as CERN's Computer Security Officer with the mandate to coordinate all aspects of CERN's computer security --- office computing security, computer centre security, GRID computing security and control system security --- whilst taking into account CERN's operational needs. Dr. Lueders has presented on these topics at many different occasions to international bodies, governments, and companies, and published several articles. With the prevalence of modern information technologies and...

  12. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.   §  Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models §  Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  13. Computing Fourier integral operators with caustics

    Science.gov (United States)

    Caday, Peter

    2016-12-01

    Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards, and forward and inverse discrete Fourier transforms, which can be computed in O(N^{n+(n-1)/2} log N) time for an n-dimensional FIO. The same preprocessed data also allows computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm's output, and easily extendible MATLAB/C++ source code is available from the author.

  14. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  15. Learning and evolution in bacterial taxis: an operational amplifier circuit modeling the computational dynamics of the prokaryotic 'two component system' protein network.

    Science.gov (United States)

    Di Paola, Vieri; Marijuán, Pedro C; Lahoz-Beltra, Rafael

    2004-01-01

    Adaptive behavior in unicellular organisms (i.e., bacteria) depends on highly organized networks of proteins governing purposefully the myriad of molecular processes occurring within the cellular system. For instance, bacteria are able to explore the environment within which they develop by utilizing the motility of their flagellar system as well as a sophisticated biochemical navigation system that samples the environmental conditions surrounding the cell, searching for nutrients or moving away from toxic substances or dangerous physical conditions. In this paper we discuss how proteins of the intervening signal transduction network could be modeled as artificial neurons, simulating the dynamical aspects of the bacterial taxis. The model is based on the assumption that, in some important aspects, proteins can be considered as processing elements or McCulloch-Pitts artificial neurons that transfer and process information from the bacterium's membrane surface to the flagellar motor. This simulation of bacterial taxis has been carried out on a hardware realization of a McCulloch-Pitts artificial neuron using an operational amplifier. Based on the behavior of the operational amplifier we produce a model of the interaction between CheY and FliM, elements of the prokaryotic two component system controlling chemotaxis, as well as a simulation of learning and evolution processes in bacterial taxis. On the one side, our simulation results indicate that, computationally, these protein 'switches' are similar to McCulloch-Pitts artificial neurons, suggesting a bridge between evolution and learning in dynamical systems at cellular and molecular levels and the evolutive hardware approach. On the other side, important protein 'tactilizing' properties are not tapped by the model, and this suggests further complexity steps to explore in the approach to biological molecular computing.
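
A McCulloch-Pitts unit of the kind the authors map proteins onto is simple to state in code. The sketch below is my own minimal illustration of such a binary threshold unit (the weights and thresholds are chosen for the example, not taken from the paper's CheY/FliM model).

```python
def mcculloch_pitts(weights, threshold):
    """Build a McCulloch-Pitts neuron: a binary threshold unit that fires
    (outputs 1) exactly when the weighted sum of its binary inputs reaches
    the threshold, and stays silent (outputs 0) otherwise."""
    def neuron(*inputs):
        activation = sum(w * x for w, x in zip(weights, inputs))
        return 1 if activation >= threshold else 0
    return neuron

# Two-input units realizing basic logic, as in classic McCulloch-Pitts networks.
and_unit = mcculloch_pitts((1, 1), threshold=2)
or_unit = mcculloch_pitts((1, 1), threshold=1)
```

In the paper's analogy, a protein "switch" such as the CheY-FliM interaction plays the role of one such unit, firing only when its molecular inputs cross a threshold.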

  16. Implications of Using Computer-Based Training with the AN/SQQ-89(v) Sonar System: Operating and Support Costs

    Science.gov (United States)

    2012-06-01

    [Abstract not indexed; the surviving excerpt is a fragment of the report's acronym glossary: Defense Science Board, ECR (Electronic Classroom), ERNT (Executive Review of Navy Training), ETS (Engineering and Technical Services), GOTS (Government Off-the-Shelf), GWOT (Global War on Terror), HPSM (Human Performance Systems Model), HPSO (Human Performance Systems ...). The fragment also notes that the new training strategy was intended to apply the Human Performance Systems Model and to link training and acquisition (Naval Personnel Development Command, 2002).]

  17. The University of Wisconsin OAO operating system

    Science.gov (United States)

    Heacox, H. C.; Mcnall, J. F.

    1972-01-01

    The Wisconsin OAO operating system is presented, which consists of two parts: a computer program called HARUSPEX, which makes possible reasonably efficient and convenient operation of the package, and ground operations equipment, which provides real-time status monitoring, commanding, and a quick look at the data.

  18. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support the Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. The system provides a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operation at the Hanford Patrol Operations Center, Building 2721E. The system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  19. Using System Generation Technology to Improve the Security of Computerized Accounting Systems

    Institute of Scientific and Technical Information of China (English)

    毛禹忠; 张迪

    2000-01-01

    Computerized accounting is an important component of management modernization, and the security of computerized accounting systems has long been one of the difficult points of management information systems. Based on many years of practical experience, this paper proposes using system generation technology to improve the security of computerized accounting systems, which helps promote the adoption of computerized accounting among small and medium-sized enterprises.

  20. Microprocessors & their operating systems a comprehensive guide to 8, 16 & 32 bit hardware, assembly language & computer architecture

    CERN Document Server

    Holland, R C

    1989-01-01

    Provides a comprehensive guide to all of the major microprocessor families (8, 16 and 32 bit). The hardware aspects and software implications are described, giving the reader an overall understanding of microcomputer architectures. The internal processor operation of each microprocessor device is presented, followed by descriptions of the instruction set and applications for the device. Software considerations are expanded with descriptions and examples of the main high level programming languages (BASIC, Pascal and C). The book also includes detailed descriptions of the three main operatin

  1. Intelligent vision system for autonomous vehicle operations

    Science.gov (United States)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  2. P systems based on tag operations

    Directory of Open Access Journals (Sweden)

    Yurii Rogozhin

    2012-10-01

    In this article we introduce P systems using Post's tag operation on strings. We show that computational completeness can be achieved even if the deletion length is equal to one.
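
Post's tag operation itself is simple to simulate: read the leading symbol, delete the first m symbols, and append that symbol's production. The sketch below is my own illustration using the classic 2-tag system related to the Collatz function (rules a -> bc, b -> a, c -> aaa, deletion length m = 2); note the paper's completeness result concerns deletion length 1, which the same simulator also supports.

```python
def tag_step(word, rules, m):
    """One Post tag operation: read the leading symbol, delete the first m
    symbols, and append that symbol's production. Returns None on halting
    (word too short, or no rule for the leading symbol)."""
    if len(word) < m or word[0] not in rules:
        return None
    return word[m:] + rules[word[0]]

def run_tag(word, rules, m, max_steps=100):
    """Iterate the tag operation, returning the trace of words produced."""
    trace = [word]
    for _ in range(max_steps):
        word = tag_step(word, rules, m)
        if word is None:
            break
        trace.append(word)
    return trace

# Classic 2-tag system in which a run of n 'a's evolves toward a run whose
# length follows the Collatz trajectory of n.
collatz_rules = {"a": "bc", "b": "a", "c": "aaa"}
trace = run_tag("aaa", collatz_rules, m=2, max_steps=4)
```

Starting from "aaa" (n = 3), four steps yield "abc", "cbc", "caaa", and finally "aaaaa": a run of five a's, matching the Collatz step 3 -> 5.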

  3. Development of Computer Simulation Technique for Operation of Irrigation Channel System%灌溉渠系运行计算机模拟技术的开发

    Institute of Scientific and Technical Information of China (English)

    赵竞成; 陆文红; 赵丽华

    2001-01-01

    借鉴日本以及其他国家在灌溉渠系水管理方面的成果和经验,结合我国灌区的实际情况,建立了较完整的渠系运行模型,编制了具有一定通用性和可扩充性的计算机模拟软件。实践表明,该软件对于测试和评价渠系的水力学特性、工程控制特性和管理调度特性是有效的,它为改进灌区水管理提供了一个科学、简便、可行的技术手段。%Using the results and experiences in water management of irrigation channel systems of Japan and other countries for reference, and combining with the actual situation of China′s irrigation districts, an integrated operation model of channel system is established. The computer simulation software which has certain generalization and extension, is compiled. The practice shows that the software is effective for testing and evaluation hydraulic characteristics, project control characteristics and management-operation characteristics of the channel system. It provides a scientific, simple and convenient and available technical measure for improving water management of irrigation districts.

  4. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  5. Computer system reliability safety and usability

    CERN Document Server

    Dhillon, BS

    2013-01-01

    Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems.After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,

  6. Brain computer interface for operating a robot

    Science.gov (United States)

    Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed

    2013-10-01

    A Brain-Computer Interface (BCI) is a hardware/software-based system that translates the electroencephalogram (EEG) signals produced by brain activity to control computers and other external devices. In this paper, we present a non-invasive BCI system that reads the EEG signals from trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions instructed to it in real time. We have used cognitive states such as Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.

  7. Automating ATLAS Computing Operations using the Site Status Board

    CERN Document Server

    Andreeva, J.; Campana, S.; Di Girolamo, A.; Dzhunov, I.; Espinal Curull, X.; Gayazov, S.; Magradze, E.; Nowotka, M.M.; Rinaldi, L.; Saiz, P.; Schovancova, J.; Stewart, G.A.; Wright, M.

    2012-01-01

    The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The presentation will describe how SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in SSB. It will demonstrate the positive impact of the use of SS...
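
The usability-and-exclusion logic described above can be caricatured in a few lines. The metric definition, the status strings, and the 0.8 threshold below are illustrative assumptions of mine, not the actual ATLAS SSB algorithm.

```python
def site_usability(history):
    """Toy usability metric: the fraction of recent monitoring samples in
    which the site's status was OK."""
    ok = sum(1 for status in history if status == "OK")
    return ok / len(history)

def sites_to_exclude(metrics, threshold=0.8):
    """Automatically flag sites whose usability falls below the threshold,
    mimicking the automatic exclusion of problematic sites."""
    return sorted(site for site, history in metrics.items()
                  if site_usability(history) < threshold)

# Hypothetical per-site monitoring histories (most recent samples).
metrics = {
    "SITE_A": ["OK"] * 9 + ["ERROR"],         # 0.9 usability: stays active
    "SITE_B": ["OK", "ERROR", "ERROR", "OK"], # 0.5 usability: excluded
}
excluded = sites_to_exclude(metrics)
```

The real SSB keeps the metric history server-side and aggregates many sensors, but the shape of the decision, a threshold on a history-derived score, is the same.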

  8. Automating ATLAS Computing Operations using the Site Status Board

    CERN Document Server

    Andreeva, J; The ATLAS collaboration; Campana, S; Di Girolamo, A; Espinal Curull, X; Gayazov, S; Magradze, E; Nowotka, MM; Rinaldi, L; Saiz, P; Schovancova, J; Stewart, GA; Wright, M

    2012-01-01

    The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The presentation will describe how SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in SSB. It will demonstrate the positive impact of the use of SS...

  9. Operating Systems Standards Working Group (OSSWG) Next Generation Computer Resources (NGCR) Program First Annual Report - October 1990

    Science.gov (United States)

    1991-04-01

    Criteria, Evaluation Process and Results/Summary. The Evaluation Criteria will likely be supplemented with sub-criteria. The group is looking at a...child relationships for all jobs in the distributed system. Independent jobs are defined to be the children of a fictional root job. The parent-child...case the process will be terminated, along with its children. If two signals arrive before the first is handled, the first will be lost

  10. Apu/hydraulic/actuator Subsystem Computer Simulation. Space Shuttle Engineering and Operation Support, Engineering Systems Analysis. [for the space shuttle

    Science.gov (United States)

    1975-01-01

    Major developments that have taken place to date in the analysis of the power and energy demands on the APU/Hydraulic/Actuator Subsystem for the space shuttle during the entry-to-touchdown (not including rollout) flight regime are examined. These developments are given in the form of two subroutines written for use with the Space Shuttle Functional Simulator. The first subroutine calculates the power and energy demand on each of the three hydraulic systems due to control surface (inboard/outboard elevons, rudder, speedbrake, and body flap) activity. The second subroutine incorporates the R. I. priority rate-limiting logic, which limits control surface deflection rates as a function of the number of failed hydraulic systems. Typical results of this analysis are included, and listings of the subroutines are presented in appendices.

  11. Analysis of C-shaped canal systems in mandibular second molars using surgical operating microscope and cone beam computed tomography: A clinical approach

    Directory of Open Access Journals (Sweden)

    Sanjay Chhabra

    2014-01-01

    Aims: The study aimed to acquire a better understanding of C-shaped canal systems in mandibular second molar teeth through a clinical approach using sophisticated techniques such as the surgical operating microscope and cone beam computed tomography (CBCT). Materials and Methods: A total of 42 extracted mandibular second molar teeth with fused roots and longitudinal grooves were collected randomly from a native Indian population. The pulp chamber floors of all specimens were examined under the surgical operating microscope and classified into four types (Min's method). Subsequently, the samples were subjected to a CBCT scan after insertion of K-files of size #10 or 15 into each canal orifice, and were evaluated using the cross-sectional and 3-dimensional images in consultation with a dental radiologist so as to obtain more accurate results. The minimum distance between the external root surface on the groove and the initial file placed in the canal was also measured at different levels and statistically analyzed. Results: Out of 42 teeth, the maximum number of samples (15) belonged to the Type-II category. A total of 100 files were inserted in 86 orifices of the various types of specimens. Evaluation of the CBCT scan images revealed that a total of 21 canals were missing, completely or partially, at different levels. The mean values of the minimum thickness were highest at the coronal, followed by the middle and apical third levels, in all categories. The lowest values were obtained for teeth of the Type-III category at all three levels. Conclusions: The present study revealed anatomical variations of the C-shaped canal system in mandibular second molars. The prognosis of such complex canal anatomies can be improved by the simultaneous employment of modern techniques such as the surgical operating microscope and CBCT.

  12. Computer Vision Systems

    Science.gov (United States)

    Gunasekaran, Sundaram

Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  13. Pacing a data transfer operation between compute nodes on a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.

    2011-09-13

Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (DMA) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
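The pacing handshake described in the abstract can be sketched in a few lines. This is an illustrative model only, not the actual DMA engine API; `send_chunk` and `request_pacing_response` are hypothetical callbacks standing in for the hardware transfer and the remote-get pacing round trip:

```python
def paced_transfer(message, chunk_size, send_chunk, request_pacing_response):
    """Transfer `message` in chunks, waiting for a pacing response from
    the target DMA engine before sending each subsequent chunk."""
    chunks = [message[i:i + chunk_size]
              for i in range(0, len(message), chunk_size)]
    for i, chunk in enumerate(chunks):
        send_chunk(chunk)                 # transfer one chunk to the target
        if i < len(chunks) - 1:
            # remote-get pacing request: block until the target DMA
            # engine answers before sending the next chunk
            request_pacing_response()
    return len(chunks)
```

The pacing step throttles the origin node so that it never floods the target's injection queue, which is the point of the claimed method.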

  14. Computational systems chemical biology.

    Science.gov (United States)

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology "systems chemical biology" (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is as yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  15. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  16. The Remote Computer Control (RCC) system

    Science.gov (United States)

    Holmes, W.

    1980-01-01

A system to remotely control job flow on a host computer from any touchtone telephone is briefly described. Using this system, a computer programmer can submit jobs to a host computer from any touchtone telephone. In addition, the system can be instructed by the user to call back when a job is finished. Because of this system, every touchtone telephone becomes a conversant computer peripheral. This system, known as the Remote Computer Control (RCC) system, utilizes touchtone input, touchtone output, voice input, and voice output. The RCC system is microprocessor-based and currently uses the INTEL 80/30 microcomputer. Using the RCC system, a user can submit, cancel, and check the status of jobs on a host computer. The RCC system peripherals consist of a CRT for operator control, a printer for logging all activity, mass storage for the storage of user parameters, and a PROM card for program storage.

  17. Aircraft Operations Classification System

    Science.gov (United States)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.
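A multi-layer feed-forward classifier of the kind described above reduces to a chain of weighted sums and activations. The sketch below shows one forward pass; the weights, layer sizes, and tanh activation are illustrative, not the study's trained network:

```python
import math

def forward(x, layers):
    """Forward pass of a feed-forward network: each layer is a
    (weights, biases) pair, with tanh as the activation function."""
    for weights, biases in layers:
        x = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# Toy 2-input, 2-unit layer; a real acoustic classifier would take a
# feature vector extracted from the time signal and spectrum.
layer = ([[0.5, -0.3], [0.1, 0.8]], [0.0, 0.0])
```

In the study's setting, the input vector would hold time- and frequency-domain features of an acoustic event, and the output units would score aircraft classes.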

  18. Celebrating 50 years of the CERN Computing Operations group

    CERN Multimedia

    Katarina Anthony

    2013-01-01

    Last week, former and current computing operations staff, managers and system engineers were reunited at CERN. They came together to celebrate a milestone not only for the IT Department but also for CERN: the 50th anniversary of the CERN Operations group and the 40th birthday of the Computer Centre.   The reunion was organised by former chief operator, Pierre Bénassi, and took place from 26 to 27 April. Among the 44 attendees were Neil Spoonley and Charles Symons, who together created the Operations group back in 1963. “At that time, working in the Operations group was a very physical job,” recalls former Operations Group Leader, David Underhill. “For that reason, many of the first operators were former firemen.” A few of the participants enjoyed a tour of CERN landmarks during their visit (see photo). The group toured the CERN Computing Centre (accompanied by IT Department Head, Frédéric Hemmer), as well as the ATLAS cav...

  19. Global tree network for computing structures enabling global processing operations

    Science.gov (United States)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
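The collective operations named above can be illustrated with a plain software tree: a reduction combines values upstream from the leaves to the root, and a broadcast pushes a value back downstream. This is a toy analogue of the behavior, not the router hardware; `TreeNode` and the traversal are made up for illustration:

```python
class TreeNode:
    """A node in a (virtual) tree of processing nodes."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def reduce_up(node, op):
    """Reduction: combine values upstream from the leaves to the root."""
    result = node.value
    for child in node.children:
        result = op(result, reduce_up(child, op))
    return result

def broadcast_down(node, value):
    """Broadcast: send a value downstream from the root to every node."""
    node.value = value
    for child in node.children:
        broadcast_down(child, value)
```

A global sum followed by a broadcast of the result is exactly the pattern used for collective reductions in parallel algorithms.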

  20. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  1. Cloud Computing for Mission Design and Operations

    Science.gov (United States)

    Arrieta, Juan; Attiyah, Amy; Beswick, Robert; Gerasimantos, Dimitrios

    2012-01-01

The space mission design and operations community already recognizes the value of cloud computing and virtualization. However, natural and valid concerns, like security, privacy, up-time, and vendor lock-in, have prevented a more widespread and expedited adoption into official workflows. In the interest of alleviating these concerns, we propose a series of guidelines for internally deploying a resource-oriented hub of data and algorithms. These guidelines provide a roadmap for implementing an architecture inspired by the cloud computing model: associative, elastic, semantic, interconnected, and adaptive. The architecture can be summarized as exposing data and algorithms as resource-oriented Web services, coordinated via messaging, and running on virtual machines; it is simple, and based on widely adopted standards, protocols, and tools. The architecture may help reduce common sources of complexity intrinsic to data-driven, collaborative interactions and, most importantly, it may provide the means for teams and agencies to evaluate the cloud computing model in their specific context, with minimal infrastructure changes, and before committing to a specific cloud services provider.

  2. Vessel Operator System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Operator cards are required for any operator of a charter/party boat and/or a commercial vessel (including carrier and processor vessels) issued a vessel permit from...

  3. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...... of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda....

  4. Artificial intelligence issues related to automated computing operations

    Science.gov (United States)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  5. Security of Operation on CSR Control System

    Institute of Scientific and Technical Information of China (English)

    Gou Shizhe; Qiao Weimin; Jing Lan

    2003-01-01

Operational security is important for the CSR control system. To keep the CSR control system in a secure environment, the following work has been done. First, a domain service was set up, which every important service joins, such as the database services, the front-end web services, and the interactive operating browsers; the browsers' rights are limited by domain policy. As a result, users cannot modify browser settings, and every computer and browser is kept secure by preventing viruses from entering the computers. The domain services of the control system are shown in Fig.1.

  6. Potato operation: computer vision for agricultural robotics

    Science.gov (United States)

    Pun, Thierry; Lefebvre, Marc; Gil, Sylvia; Brunet, Denis; Dessimoz, Jean-Daniel; Guegerli, Paul

    1992-03-01

Each year at harvest time, millions of seed potatoes are checked for the presence of viruses by means of an Elisa test. The Potato Operation aims at automating the potato manipulation and pulp sampling procedure, starting from bunches of harvested potatoes and ending with the deposit of potato pulp into Elisa containers. Automating these manipulations addresses several issues linking robotics and computer vision. The paper reports on the current status of this project. It first summarizes the robotic aspects, which consist of locating a potato in a bunch, grasping it, positioning it into the camera field of view, pumping the pulp sample, and depositing it into a container. The computer vision aspects are then detailed. They concern locating particular potatoes in a bunch and finding the position of the best germ where the drill has to sample the pulp. The emphasis is put on the germ location problem. A general overview of the approach is given, which combines the processing of both frontal and silhouette views of the potato, together with movements of the robot arm (active vision). The frontal and silhouette analysis algorithms are then presented. Results are shown that confirm the feasibility of the approach.

  7. Prevalence of neck pain in computer operators

    Directory of Open Access Journals (Sweden)

    S A Shah

    2015-01-01

Full Text Available Abstract Introduction: Persistent neck pain is common in society, especially among office workers. Although neck pain is a common source of disability, little is known about its prevalence and course. The bulk of the literature available on this problem is from the West, with few studies done in an Indian setting. India is a middle-income developing country. The importance of this kind of study becomes more obvious when it is considered that some reports indicate that the greatest increase in the prevalence of musculoskeletal disorders in the next decade will be in middle- and low-income countries. Aims & objectives: The primary aim of this research was to study the prevalence of neck pain in computer operators. Methodology: Study design: Cross-sectional study. Sampling technique: Simple random sampling. Study subjects: Participants had to have been working on a computer for at least 3 hours/day or 15 hours/week, in their current job for at least the past 6 months, and be willing to participate in the study. Technique: The study was approved by the Institutional Ethics Committee. Informed consent was taken prior to data collection. Data were collected from 700 subjects via a structured mailed questionnaire which included individual variables and work-related variables. Conclusions: The prevalence of neck pain was found to be 47%. The study shows that neck pain is affected by individual variables and work-related variables.

  8. Do flow principles of operations management apply to computing centres?

    CERN Document Server

    Abaunza, Felipe; Hameri, Ari-Pekka; Niemi, Tapio

    2014-01-01

By analysing large data-sets on jobs processed in major computing centres, we study how operations management principles apply to these modern day processing plants. We show that Little's Law on long-term performance averages holds for computing centres, i.e. work-in-progress equals throughput rate multiplied by process lead time. Contrary to traditional manufacturing principles, the law of variation does not hold for computing centres: the more variation in job lead times, the better the throughput and utilisation of the system. We also show that as the utilisation of the system increases, lead times and work-in-progress increase, which complies with traditional manufacturing. In comparison with current computing centre operations, these results imply that better allocation of jobs could increase throughput and utilisation while less computing resources are needed, thus increasing the overall efficiency of the centre. From a theoretical point of view, in a system with close to zero set-up times, as in the c...
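Little's Law, as applied above, can be checked directly on a job log. A minimal sketch (the job tuples and horizon are made-up data, not the paper's): average work-in-progress equals throughput rate times mean lead time.

```python
def littles_law_check(jobs, horizon):
    """jobs: list of (arrival, completion) times within [0, horizon].
    Returns (avg_wip, throughput_rate, avg_lead_time)."""
    throughput = len(jobs) / horizon                    # jobs per unit time
    avg_lead = sum(c - a for a, c in jobs) / len(jobs)  # mean time in system
    # average work-in-progress: total job-time in system over the horizon
    avg_wip = sum(c - a for a, c in jobs) / horizon
    return avg_wip, throughput, avg_lead
```

With these long-run averages, avg_wip == throughput * avg_lead holds identically, which is the relation the paper verifies on computing-centre data.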

  9. Computational Aeroacoustic Analysis System Development

    Science.gov (United States)

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.

    2001-01-01

Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance in characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values into the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed
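The interpolation step that maps flow-solver values onto the acoustic grid can be illustrated with simple 1-D linear interpolation. This is a toy analogue of the CAAS interpolator (the actual system maps high-order spectral-element fields); the function name and signature are made up:

```python
import bisect

def interp_to_grid(xs, values, targets):
    """Linearly interpolate a field sampled at sorted points `xs`
    onto new grid points `targets`, clamping outside the domain."""
    out = []
    for t in targets:
        i = bisect.bisect_left(xs, t)
        if i == 0:
            out.append(values[0])            # clamp below the domain
        elif i == len(xs):
            out.append(values[-1])           # clamp above the domain
        else:
            x0, x1 = xs[i - 1], xs[i]
            w = (t - x0) / (x1 - x0)         # fractional position in the cell
            out.append((1 - w) * values[i - 1] + w * values[i])
    return out
```

In the coupled cycle, a mapping of this kind hands each flow-solver snapshot to the linearized-Euler acoustic solver on its own grid.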

  10. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  11. A Multiprocessor Operating System Simulator

    Science.gov (United States)

    Johnston, Gary M.; Campbell, Roy H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall semester of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the 'Choices' family of operating systems for loosely- and tightly-coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.
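The co-routine style tasking described above can be mimicked with generators: each task runs until it voluntarily yields the processor, and a round-robin scheduler resumes the next ready task. This is an illustrative analogue in Python, not the Choices/C++ task package:

```python
from collections import deque

def scheduler(tasks):
    """Round-robin co-routine scheduler: each task is a generator
    that yields to give up the processor."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # run the task until it yields
            ready.append(task)        # re-queue the still-running task
        except StopIteration:
            pass                      # task finished; drop it
    return trace

def worker(name, steps):
    """A simulated task that performs `steps` units of work."""
    for i in range(steps):
        yield f"{name}:{i}"
```

Student refinements of such a hierarchy (preemption, priorities, synchronization primitives) are exactly the kind of experiment the simulator was built to support.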

  12. Parametric Optimization of Some Critical Operating System Functions--An Alternative Approach to the Study of Operating Systems Design

    Science.gov (United States)

    Sobh, Tarek M.; Tibrewal, Abhilasha

    2006-01-01

    Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…

  13. Windows And Linux Operating Systems From A Security Perspective

    CERN Document Server

    Bassil, Youssef

    2012-01-01

Operating systems are vital system software without which humans would not be able to manage and use computer systems. In essence, an operating system is a collection of software programs whose role is to manage computer resources and provide an interface for client applications to interact with the different computer hardware. Most of the commercial operating systems available today on the market have buggy code and exhibit security flaws and vulnerabilities. In effect, building a trusted operating system that can mostly resist attacks and provide a secure computing environment to protect the important assets of a computer is the goal of every operating system manufacturer. This paper deeply investigates the various security features of the two most widespread and successful operating systems, Microsoft Windows and Linux. The different security features, designs, and components of the two systems are covered elaborately, pinpointing the key similarities and differences between them. In due ...

  14. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    Science.gov (United States)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  15. UNIX-based operating systems robustness evaluation

    Science.gov (United States)

    Chang, Yu-Ming

    1996-01-01

Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.
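The exception-generator idea can be illustrated by probing system interfaces with invalid arguments and recording how each failure is reported. This is a minimal sketch of the methodology, not the thesis's tool, which covered far more interfaces and also recorded system state:

```python
import os

def probe(call, *args):
    """Invoke an OS interface with (possibly invalid) arguments and
    report whether it failed cleanly with an exception."""
    try:
        call(*args)
        return "ok"
    except (OSError, ValueError) as exc:
        return type(exc).__name__

# A robust kernel should reject these with a clean error, not crash
# or corrupt state.
results = {
    "close_bad_fd": probe(os.close, 10**6),  # wildly out-of-range descriptor
    "read_bad_fd": probe(os.read, -1, 10),   # negative descriptor
}
```

Collecting such exit statuses across many interfaces, together with the post-probe system state, is what distinguishes a robust exception handling mechanism from a fragile one.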

  16. SPECTR System Operational Test Report

    Energy Technology Data Exchange (ETDEWEB)

    W.H. Landman Jr.

    2011-08-01

This report provides an overview of the installation of the Small Pressure Cycling Test Rig (SPECTR) and documents the system operational testing performed to demonstrate that it meets the requirements for operations. The system operational testing involved operating the furnace system at the design conditions and demonstrating the test article gas supply system using a simulated test article. The furnace and test article systems were demonstrated to meet the design requirements for the Next Generation Nuclear Plant. Therefore, the system is deemed acceptable and is ready for actual test article testing.

  17. Interpreting pre-operative mastoid computed tomography images: comparison between operating surgeon, radiologist and operative findings.

    Science.gov (United States)

    Badran, K; Ansari, S; Al Sam, R; Al Husami, Y; Iyer, A

    2016-01-01

    This study aimed to compare the interpretations of temporal bone computed tomography scans by an otologist and a radiologist with a special interest in temporal bone imaging. It also aimed to determine the usefulness of this imaging modality. A head and neck radiologist and an otologist separately reported pre-operative computed tomography images using a structured proforma. The reports were then compared with operative findings to determine their accuracy and differences in interpretations. Forty-eight patients who underwent pre-operative computed tomography scans in a 30-month period were identified. Six patients were excluded because complete operative findings had not been recorded. Positive and negative predictive values and accuracy of the anatomical and pathological findings were calculated for 42 patients by both reporters. The accuracy was found to be less than 80 per cent, except for identification of the tegmen and lateral semicircular canal erosion. Overall, there was no significant difference in interpretations of computed tomography scans between reporters. There was a slight difference in interpretation for tympanic membrane retraction, facial canal erosion and lateral semicircular canal fistula and/or erosion. Pre-operative computed tomography scanning of the temporal bone is useful for predicting anatomy for surgical planning in patients with chronic otitis media, but its reliability remains questionable.

  18. Transportation System Concept of Operations

    Energy Technology Data Exchange (ETDEWEB)

    N. Slater-Thompson

    2006-08-16

    The Nuclear Waste Policy Act of 1982 (NWPA), as amended, authorized the DOE to develop and manage a Federal system for the disposal of SNF and HLW. OCRWM was created to manage acceptance and disposal of SNF and HLW in a manner that protects public health, safety, and the environment; enhances national and energy security; and merits public confidence. This responsibility includes managing the transportation of SNF and HLW from origin sites to the Repository for disposal. The Transportation System Concept of Operations is the core high-level OCRWM document written to describe the Transportation System integrated design and present the vision, mission, and goals for Transportation System operations. By defining the functions, processes, and critical interfaces of this system early in the system development phase, programmatic risks are minimized, system costs are contained, and system operations are better managed, safer, and more secure. This document also facilitates discussions and understanding among parties responsible for the design, development, and operation of the Transportation System. Such understanding is important for the timely development of system requirements and identification of system interfaces. Information provided in the Transportation System Concept of Operations includes: the functions and key components of the Transportation System; system component interactions; flows of information within the system; the general operating sequences; and the internal and external factors affecting transportation operations. The Transportation System Concept of Operations reflects OCRWM's overall waste management system policies and mission objectives, and as such provides a description of the preferred state of system operation. The description of general Transportation System operating functions in the Transportation System Concept of Operations is the first step in the OCRWM systems engineering process, establishing the starting point for the lower

  19. Installing and Testing a Server Operating System

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2003-08-01

Full Text Available The paper is based on the author's experience with FreeBSD server operating system administration on three servers in use under the academicdirect.ro domain. The paper describes a set of installation, preparation, and administration aspects of a FreeBSD server. The first issue of the paper is the installation procedure of the FreeBSD operating system on the i386 computer architecture. Problems discussed are boot disk preparation and use, hard disk partitioning, and operating system installation using an existing network topology and an internet connection. The second issue is the optimization procedure for the operating system, and the installation and configuration of server services. Problems discussed are kernel and service configuration, and system and service optimization. The third issue is about client-server applications. Using operating system utility calls, we present an original application which displays system information in a friendly web interface. An original program designed for molecular structure analysis was adapted for system performance comparisons, and it serves for a discussion of Pentium, Pentium II, and Pentium III processor computation speeds. The last issue of the paper discusses the installation and configuration aspects of a dial-in service on a UNIX-based operating system. The discussion includes serial port configuration, ppp and pppd service configuration, and the use of the ppp and tun devices.

  20. Computer Jet-Engine-Monitoring System

    Science.gov (United States)

    Disbrow, James D.; Duke, Eugene L.; Ray, Ronald J.

    1992-01-01

    "Intelligent Computer Assistant for Engine Monitoring" (ICAEM), computer-based monitoring system intended to distill and display data on conditions of operation of two turbofan engines of F-18, is in preliminary state of development. System reduces burden on propulsion engineer by providing single display of summary information on statuses of engines and alerting engineer to anomalous conditions. Effective use of prior engine-monitoring system requires continuous attention to multiple displays.

  2. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  3. Characterizing Video Coding Computing in Conference Systems

    NARCIS (Netherlands)

    Tuquerres, G.

    2000-01-01

    In this paper, a number of coding operations are provided for computing continuous data streams, in particular video streams. The coding capability of the operations is expressed by a pyramidal structure in which the coding processes and the requirements of a distributed information system are represented.

  4. THz Spectrophotometer Operating System

    OpenAIRE

    Arwin, Emil

    2008-01-01

    The Complex Materials Optics Network comprises active research groups within the University of Nebraska-Lincoln. Their main focus is optical materials preparation, characterization, and instrumentation development. The purpose of the project is to develop a computer interface for a Terahertz-source and detector. The interface should consist of a manual and a remote Transmission Control Protocol/Internet Protocol (TCP/IP) control of the hardware and must display the status of the source and th...

  5. Chaining direct memory access data transfer operations for compute nodes in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.

    2010-09-28

    Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer. These include: receiving, by an origin DMA engine on an origin node, in an origin injection FIFO buffer for the origin DMA engine, an RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.
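
    The descriptor chain in this abstract is easier to follow as a toy model. The sketch below simulates, in deliberately simplified and hypothetical terms (the field names and engine behavior are illustrative, not the patented design), how an RGET packet can carry further descriptors that the receiving DMA engine injects and processes, so that a single injection at the origin drives the whole chain:

    ```python
    from collections import deque

    class DMAEngine:
        """Toy DMA engine: a FIFO of descriptors and a log of completed
        transfers. 'rget' descriptors forward carried descriptors to a
        peer engine, modeling the chaining described in the abstract."""
        def __init__(self, name):
            self.name = name
            self.fifo = deque()
            self.completed = []

        def inject(self, descriptor):
            self.fifo.append(descriptor)

        def run(self, engines):
            while self.fifo:
                desc = self.fifo.popleft()
                if desc["kind"] == "transfer":
                    self.completed.append(desc["payload"])
                elif desc["kind"] == "rget":
                    # Hand the carried descriptors to the peer's FIFO.
                    peer = engines[desc["target"]]
                    for carried in desc["carries"]:
                        peer.inject(carried)
                    peer.run(engines)

    origin = DMAEngine("origin")
    target = DMAEngine("target")
    engines = {"origin": origin, "target": target}

    # One RGET whose carried descriptor is itself an RGET back to the
    # origin, mirroring the origin -> target -> origin chain above.
    origin.inject({"kind": "transfer", "payload": "chunk-0"})
    origin.inject({"kind": "rget", "target": "target", "carries": [
        {"kind": "rget", "target": "origin", "carries": [
            {"kind": "transfer", "payload": "chunk-1"}]}]})
    origin.run(engines)
    ```

    After the run, both chunks complete at the origin even though only the first injection was issued directly there, which is the point of chaining.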

  6. Operating System Abstraction Layer (OSAL)

    Science.gov (United States)

    Yanchik, Nicholas J.

    2007-01-01

    This viewgraph presentation reviews the concept of the Operating System Abstraction Layer (OSAL) and its benefits. The OSAL is a small layer of software that allows programs to run on many different operating systems and hardware platforms; it runs independently of the underlying OS and hardware and is self-contained. The benefits of the OSAL are that it removes dependencies on any one operating system and promotes portable, reusable flight software. It allows core Flight Software (FSW) to be built for multiple processors and operating systems. The presentation discusses the functionality and the various OSAL releases, and describes the specifications.
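
    The layering idea can be shown with a minimal sketch: application code calls one portable interface, and a swappable backend supplies the platform-specific behavior. The class and method names below are hypothetical, not the real OSAL API:

    ```python
    # Hypothetical backends standing in for platform-specific OS calls.
    class PosixBackend:
        def create_task(self, name):
            return f"posix-thread:{name}"

    class RtemsBackend:
        def create_task(self, name):
            return f"rtems-task:{name}"

    class OSAL:
        """Portable facade: application code only sees this interface."""
        def __init__(self, backend):
            self._backend = backend

        def task_create(self, name):
            # Only the backend knows the real OS call.
            return self._backend.create_task(name)

    # The same application code runs unchanged on either backend.
    def app_task(osal):
        return osal.task_create("telemetry")
    ```

    Porting the flight software then means writing one new backend, not touching the application.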

  7. Multiple operating system rotation environment moving target defense

    Science.gov (United States)

    Evans, Nathaniel; Thompson, Michael

    2016-03-22

    Systems and methods for providing a multiple operating system rotation environment ("MORE") moving target defense ("MTD") computing system are described. The MORE-MTD system provides enhanced computer system security through a rotation of multiple operating systems. The MORE-MTD system increases attacker uncertainty, increases the cost of attacking the system, reduces the likelihood of an attacker locating a vulnerability, and reduces the exposure time of any located vulnerability. The MORE-MTD environment is effectuated by rotation of the operating systems at a given interval. The rotating operating systems create a consistently changing attack surface for remote attackers.
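
    The rotation mechanism lends itself to a small sketch: which operating system is live is a pure function of elapsed time and the rotation interval, so the attack surface changes on a fixed schedule. This is an illustrative model, not the MORE-MTD implementation:

    ```python
    def active_os(images, elapsed_seconds, interval=300):
        """Return the OS image serving traffic after `elapsed_seconds`,
        rotating through `images` every `interval` seconds. The 300 s
        default is an arbitrary example, not a value from the patent."""
        slot = int(elapsed_seconds // interval)
        return images[slot % len(images)]

    images = ["linux-a", "freebsd-b", "openbsd-c"]
    ```

    Because the schedule bounds how long any one image is exposed, a vulnerability in a single image is only reachable for at most one interval per rotation cycle.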

  8. Secure computing on reconfigurable systems

    NARCIS (Netherlands)

    Fernandes Chaves, R.J.

    2007-01-01

    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. The SCM provides a protected and reliable computational environment in which data security and protection against malicious attacks on the system are assured. The SCM is strongly based on encryption algorithms.

  10. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    Science.gov (United States)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and monitoring all multi-mission spacecraft tracking activities in real time. Operations performs this job with computer systems at JPL connected to over 100 computers in Goldstone, Australia, and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme of the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  11. Computing abstractions of nonlinear systems

    CERN Document Server

    Reißig, Gunther

    2009-01-01

    We present an efficient algorithm for computing discrete abstractions of arbitrary memory span for nonlinear discrete-time and sampled systems, in which, apart from possibly numerically integrating ordinary differential equations, the only nontrivial operation to be performed repeatedly is to distinguish empty from non-empty convex polyhedra. We also provide sufficient conditions for the convexity of attainable sets, which is an important requirement for the correctness of the method we propose. It turns out that this requirement can be met under rather mild conditions, which essentially reduce to sufficient smoothness in the case of sampled systems. Practicability of our approach in the design of discrete controllers for continuous plants is demonstrated by an example.
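
    A toy version of such a discrete abstraction can be sketched in a few lines: partition an interval into cells and record which cells the sampled dynamics can reach from each cell. The real algorithm reasons over convex polyhedra and over-approximates attainable sets soundly; the endpoint sampling used here is only an illustration and is not sound in general:

    ```python
    def abstract(f, lo, hi, n_cells, samples_per_cell=8):
        """Crude discrete abstraction of the map x' = f(x) on [lo, hi]:
        for each cell, sample points and record the cells their images
        land in. Returns {cell index: set of successor cell indices}."""
        width = (hi - lo) / n_cells

        def cell_of(x):
            return min(n_cells - 1, max(0, int((x - lo) / width)))

        transitions = {}
        for i in range(n_cells):
            a = lo + i * width
            reach = set()
            for k in range(samples_per_cell + 1):
                x = a + width * k / samples_per_cell
                reach.add(cell_of(f(x)))
            transitions[i] = reach
        return transitions

    # Sampled linear contraction x' = 0.5 * x on [0, 1): all mass drifts
    # toward cell 0, which the abstraction's transitions reflect.
    T = abstract(lambda x: 0.5 * x, 0.0, 1.0, 4)
    ```

    A discrete controller synthesized against such a transition map is then guaranteed (when the abstraction is a sound over-approximation) to work on the continuous plant.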

  12. Port Operational Marine Observing System

    Science.gov (United States)

    Palazov, A.; Stefanov, A.; Slabakova, V.; Marinova, V.

    2009-04-01

    The Port Operational Marine Observing System (POMOS) is a network of distributed sensors with a centralized data collecting, processing, and distributing unit. The system is designed to allow real-time assessment of weather and marine conditions throughout the major Bulgarian ports of Varna, Burgas, and Balchik, thereby supporting the Maritime Administration in securing safe navigation in bays, canals, and ports. Real-time information within harbors is obtained using various sensors placed at thirteen strategic locations to monitor the current state of the environment. The weather and sea-state parameters most important for navigation are measured: wind speed and direction, air temperature, relative humidity, atmospheric pressure, visibility, solar radiation, water temperature and salinity, sea level, current speed and direction, and mean wave parameters. The system consists of 11 weather stations (3 with additional solar radiation and 4 with additional visibility measurement), 9 water temperature and salinity sensors, 9 sea-level stations, two sea current and wave stations, and two canal current stations. All sensors are connected to a communication system that provides direct intranet access to the instruments. Every 15 minutes, measured data are transmitted in real time to the central collecting system, where they are collected, processed, and stored in a database. The database is triply secured to prevent data loss, and the data collection system is doubly secured; the measuring system is secured against short power failures and instability. Special software is designed to collect, store, process, and present environmental data and information on different user-friendly screens. Access to data and information is through internet/intranet browsers. Current data from all measurements, or from a single measuring site, can be displayed on screen, as well as data for the last 24 hours. Historical data are available using a report server for extracting data for selectable
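
    The central collection step described above (15-minute reports, latest values per station, a 24-hour display window) can be sketched as a small aggregator. The station name and field layout are invented for illustration:

    ```python
    from collections import defaultdict

    DAY = 24 * 3600  # seconds in the 24-hour display window

    class Collector:
        """Keep the latest reading per station plus a rolling 24-hour
        history, as the display screens described above require."""
        def __init__(self):
            self.latest = {}
            self.history = defaultdict(list)

        def ingest(self, station, t, value):
            self.latest[station] = value
            self.history[station].append((t, value))
            # Drop samples older than 24 hours.
            self.history[station] = [(ts, v) for ts, v in self.history[station]
                                     if t - ts <= DAY]

    c = Collector()
    c.ingest("varna-wind", 0, 4.2)
    c.ingest("varna-wind", 900, 5.1)        # next 15-minute report
    c.ingest("varna-wind", 2 * DAY, 3.0)    # old samples leave the window
    ```

    Longer-term queries would go to the report server rather than this in-memory window.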

  13. SoOSiM: Operating System and Programming Language Exploration

    NARCIS (Netherlands)

    Baaij, Christiaan; Kuper, Jan; Schubert, Lutz; Lipari, G.; Cucinotta, T.

    2012-01-01

    SoOSiM is a simulator developed for the purpose of exploring operating system concepts and operating system modules. The simulator provides a highly abstracted view of a computing system, consisting of computing nodes and components that are concurrently executed on these nodes.

  15. GODAE Systems in Operation

    Science.gov (United States)

    2009-10-09

    Journal of Marine Systems 25:1-22. Bertino, L., and K.A. Lisaeter. 2008. The TOPAZ monitoring and prediction system for the Atlantic and Arctic oceans... assimilation in oceanography. Journal of Marine Systems 16(3-4):323-340. Pinardi, N., I. Allen, E. Demirov, P. De Mey, G. Korres, A. Lascaratos, P.-Y. Le... Smedstad, A.J. Wallcraft, and R.C. Rhodes. 2007. 1/32° real-time global ocean prediction and value-added over 1/16° resolution. Journal of Marine

  16. The SILEX experiment system operations

    Science.gov (United States)

    Demelenne, B.

    1994-11-01

    The European Space Agency is going to conduct an inter-orbit link experiment which will connect a low Earth orbiting satellite and a geostationary satellite via optical terminals. This experiment has been called SILEX (Semiconductor Inter satellite Link EXperiment). Two payloads will be built. One, called PASTEL (PASsager de TELecommunication), will be embarked on the French Earth observation satellite SPOT4. The future European experimental data relay satellite ARTEMIS (Advanced Relay and TEchnology MISsion) will carry the OPALE terminal (Optical PAyload Experiment). The principal characteristic of the mission is a 50 Megabit-per-second flow of data transmitted via the optical satellite link. The relay satellite will route the data via its feeder link, thus permitting real-time reception in the European region of images taken by the observation satellite. The PASTEL terminal has been designed to cover up to 9 communication sessions per day, with an average of 5. The number of daily contact opportunities with the low Earth orbiting satellite will be increased, and their duration will be much longer than the traditional passes over a ground station. The terminals have an autonomy of 24 hours with respect to ground control. Each terminal will contain its own orbit model and that of its counterpart terminal for orbit prediction and for precise computation of pointing direction. Due to the very narrow field of view of the communication laser beam, the orbit propagation calculation needs to be done with very high accuracy. The European Space Agency is responsible for the operation of both terminals. A PASTEL Mission Control System (PMCS) is being developed to control the PASTEL terminal on board SPOT4. The PMCS will interface with the SPOT4 Control Centre for the execution of the PASTEL operations. The PMCS will also interface with the ARTEMIS Mission Control System for the planning and coordination of the operation. It is the first time that laser technology will be used to support

  17. A Management System for Computer Performance Evaluation.

    Science.gov (United States)

    1981-12-01

    large unused capacity indicates a potential cost-performance improvement (i.e., the potential to perform more within current costs or to reduce costs ... necessary to bring the performance of the computer system in line with operational goals. (Ref. 18: 7) The General Accounting Office estimates that the ... tasks in attempting to improve the efficiency and effectiveness of their computer systems. Cost began to play an important role in the life of a

  18. Central nervous system and computation.

    Science.gov (United States)

    Guidolin, Diego; Albertin, Giovanna; Guescini, Michele; Fuxe, Kjell; Agnati, Luigi F

    2011-12-01

    Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

  19. Robot operating system

    Energy Technology Data Exchange (ETDEWEB)

    Ozawa, Fusaaki; Sugiyama, Kengo

    1988-10-15

    From the viewpoint of electric-motor, hydraulic, and pneumatic drives, mainly for industrial robot operation, the present conditions and engineering themes were explained. The actuators initially adopted in electric robots were mainly stepping or DC servo motors, and have recently become mainly brushless DC motors. Reduction-gear driving is problematic in backlash, rigidity, and resonance, against which various countermeasures are studied. Direct driving, though having completely overcome those problems, has many other problems to be overcome in turn, e.g., a high-accuracy-resolution detector, a high-torque brake, and a method of maintaining rigidity by servo control alone, which remain to be solved and developed. Because hydraulic robots are problematic in hydraulic compressibility and in changes of characteristics due to oil temperature, and pneumatic robots are problematic in response and high-accuracy controllability, research is now active in control engineering. (7 figs, 20 refs)

  20. Resilience assessment and evaluation of computing systems

    CERN Document Server

    Wolter, Katinka; Vieira, Marco

    2012-01-01

    The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples,

  1. Evaluating operating system vulnerability to memory errors.

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  2. Cybersecurity of embedded computers systems

    OpenAIRE

    Carlioz, Jean

    2016-01-01

    International audience; Several articles have recently raised the issue of the computer security of commercial flights by evoking the "connected aircraft, hackers' target," "Wi-Fi on planes, an open door for hackers?", or "Can you hack the computer of an Airbus or a Boeing?". The feared scenario consists of a takeover of operational aircraft software that intentionally causes an accident. Moreover, several computer security experts have lately announced that they had detected flaws in embedded syste...

  3. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).
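
    The blend of rules-based and role-based routing can be sketched as a function from user attributes to an approval chain. The rule set and role names below are hypothetical, not AutoCAR's actual policy:

    ```python
    def route_request(user):
        """Build an ordered approval chain from user attributes.
        Rules and roles here are invented for illustration only."""
        approvers = ["primary-approver"]
        # Rules-based: export-control-style checks add a review step.
        if user.get("nationality") != "US":
            approvers.append("export-control-officer")
        # Role-based: contractors get an extra sponsor sign-off.
        if user.get("affiliation") == "contractor":
            approvers.append("jsc-sponsor")
        return approvers
    ```

    A workflow engine would then walk this chain, escalating to a backup approver when a step times out.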

  4. The THUDSOS Distributed Operating System

    Institute of Scientific and Technical Information of China (English)

    廖先Zhi; 刘旭峰; et al.

    1991-01-01

    The THUDSOS is a distributed operating system modeled as an abstract machine which provides decentralized control, transparency, availability, and reliability, as well as a good degree of autonomy at each node, which makes our distributed system usable. Our operating system supports transparent access to data through a network-wide filesystem. Simultaneous access to any device is discussed for the case when peripherals are treated as files. This operating system allows spawning of parallel application programs to solve problems in fields such as numerical analysis and artificial intelligence.

  5. Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Friday, Adrian

    2009-01-01

    First introduced two decades ago, the term ubiquitous computing is now part of the common vernacular. Ubicomp, as it is commonly called, has grown not just quickly but broadly so as to encompass a wealth of concepts and technology that serves any number of purposes across all of human endeavor......, an original ubicomp pioneer, Ubiquitous Computing Fundamentals brings together eleven ubiquitous computing trailblazers who each report on his or her area of expertise. Starting with a historical introduction, the book moves on to summarize a number of self-contained topics. Taking a decidedly human...... perspective, the book includes discussion on how to observe people in their natural environments and evaluate the critical points where ubiquitous computing technologies can improve their lives. Among a range of topics this book examines: How to build an infrastructure that supports ubiquitous computing...

  6. A COMPUTERIZED OPERATOR SUPPORT SYSTEM PROTOTYPE

    Energy Technology Data Exchange (ETDEWEB)

    Thomas A. Ulrich; Roger Lew; Ronald L. Boring; Ken Thomas

    2015-03-01

    A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. A prototype COSS was developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based on four underlying elements consisting of a digital alarm system, computer-based procedures, piping and instrumentation diagram system representations, and a recommender module for mitigation actions. The initial version of the prototype is now operational at the Idaho National Laboratory using the Human System Simulation Laboratory.

  7. Remote computer monitors corrosion protection system

    Energy Technology Data Exchange (ETDEWEB)

    Kendrick, A.

    Effective corrosion protection with electrochemical methods requires some method of routine monitoring that provides reliable data free of human error. A test installation of a remote computer-controlled monitoring system for electrochemical corrosion protection is described. The unit can handle up to six channel inputs; each channel comprises three analog signals and one digital signal. The operation of the system is discussed.

  8. Clinical Study of Intra-operative Computed Tomography Guided Localization with A Hook-wire System for Small Ground Glass Opacities in Minimally Invasive Resection

    Directory of Open Access Journals (Sweden)

    Xiangyang CHU

    2014-12-01

    Full Text Available Background and objective Localization of small pulmonary ground-glass nodules is a technical difficulty of minimally invasive resection. The aim of this study is to evaluate the value of intraoperative computed tomography (CT)-guided localization using a hook-wire system for small ground glass opacities (GGOs) in minimally invasive resection, as well as to discuss the necessity and feasibility of surgical resection of small GGOs (<10 mm) through a minimally invasive approach. Methods The records of 32 patients with 41 small GGOs who underwent intraoperative CT-guided double-thorn hook-wire localization prior to video-assisted thoracoscopic wedge resection from October 2009 to October 2013 were retrospectively reviewed. All patients received video-assisted thoracoscopic surgery (VATS) within 10 min after wire localization. The efficacy of intraoperative localization was evaluated in terms of procedure time, VATS success rate, and associated complications of localization. Results A total of 32 patients (15 males and 17 females) underwent 41 VATS resections, with 2 simultaneous nodule resections performed in 3 patients, 3 lesion resections in 1 patient, and 5 in another. Nodule diameters ranged from 2 mm to 10 mm (mean: 5 mm). The distance of the lung lesions from the nearest pleural surface ranged from 5 mm to 24 mm (mean: 12.5 mm). All resections of lesions guided by the inserted hook wires were successfully performed by VATS (100% success rate). The mean procedure time for CT-guided hook-wire localization was 8.4 min (range: 4 min-18 min). The mean procedure time for VATS was 32 min (range: 14 min-98 min). The median hospital stay was 8 d (range: 5 d-14 d). Pathological examination revealed 28 primary lung cancers, 9 atypical adenomatous hyperplasias, and 4 nonspecific chronic inflammations. No major complications related to the intraoperative hook-wire localization and VATS were noted. Conclusion Intraoperative CT-guided hook wire

  9. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  10. Network operating system focus technology

    Science.gov (United States)

    1985-01-01

    An activity structured to provide specific design requirements and specifications for the Space Station Data Management System (DMS) Network Operating System (NOS) is outlined. Examples are given of the types of supporting studies and implementation tasks presently underway to realize a DMS test bed capability to develop hands-on understanding of NOS requirements as driven by actual subsystem test beds participating in the overall Johnson Space Center test bed program. Classical operating system elements and principal NOS functions are listed.

  11. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, the bus reference frame, network fault and contingency calculations, power flow on transmission networks, and generator base power settings.

  12. Computer monitors and controls all truck-shovel operations

    Energy Technology Data Exchange (ETDEWEB)

    Chironis, N.P.

    1985-03-01

    The intense competition in the coal industry and the advances in computer technology have led several large mines to consider computer dispatching systems as a means of optimizing production. Quintette Coal, Ltd., of Vancouver, B.C., has engaged Modular Mining Systems, Inc., of Tucson, to install a comprehensive truck-dispatch system at a new, multiseam mine northeast of Vancouver. This open-pit operation will rely on truck-shovel teams to uncover both steam and metallurgical coal. The mine is already using about 12 shovels and 50 trucks to produce 3 million tpy. By 1986, production will hit 5 million tpy of metallurgical coal and 1.3 million tpy of steam coal. The coal is under contract to be shipped to Japan. Denison Mines Ltd. owns 50% of Quintette Coal; of the other 14 shareholders, 10 are Japanese steel companies. Although about 10 non-coal mines worldwide are using some form of computer-controlled dispatching system, Quintette is the first coal company to do so, and western US mines are reportedly studying the Quintette system carefully.

  13. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  14. New computing systems and their impact on computational mechanics

    Science.gov (United States)

    Noor, Ahmed K.

    1989-01-01

    Recent advances in computer technology that are likely to impact computational mechanics are reviewed. The technical needs for computational mechanics technology are outlined. The major features of new and projected computing systems, including supersystems, parallel processing machines, special-purpose computing hardware, and small systems are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism on multiprocessor computers with a shared memory.

  15. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
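The two-phase scheme this abstract describes can be simulated in a few lines of Python; compute nodes and cores are modeled as nested lists, the reduction operator is assumed to be a sum, and all names are illustrative rather than the patent's actual API.

```python
# Toy simulation of the two-phase allreduce sketched above (sum assumed
# as the reduction operator; layout and names are illustrative).

def ring_allreduce(nodes):
    """nodes[i][j] is the contribution of processing core j on compute node i."""
    n_cores = len(nodes[0])
    # Phase 1: one logical ring per core index, containing one core from
    # each node; each ring performs a global allreduce over its members.
    ring_results = [sum(node[j] for node in nodes) for j in range(n_cores)]
    # Phase 2: each node performs a local allreduce over the global results
    # held by its own cores, yielding the full reduction on every node.
    return [sum(ring_results) for _ in nodes]

print(ring_allreduce([[1, 2], [3, 4], [5, 6]]))  # every node ends with 21
```

Splitting the work this way lets both cores of every node participate in a ring concurrently, rather than funneling all traffic through one core per node.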

  16. Operating System For Numerically Controlled Milling Machine

    Science.gov (United States)

    Ray, R. B.

    1992-01-01

    OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
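The "equation plotter" idea, sampling a user-supplied equation into cutter moves, can be sketched as follows; the G-code-style output format and all function names are illustrative assumptions, not OPMILL's actual interface.

```python
import math

def equation_to_moves(fx, fy, t0, t1, steps):
    """Sample a parametric curve x=fx(t), y=fy(t) into linear moves that a
    milling controller could follow (G-code-like lines, purely illustrative)."""
    moves = []
    for i in range(steps + 1):
        t = t0 + (t1 - t0) * i / steps
        moves.append(f"G01 X{fx(t):.3f} Y{fy(t):.3f}")
    return moves

# Quarter circle of radius 1, approximated by 4 linear moves:
path = equation_to_moves(math.cos, math.sin, 0.0, math.pi / 2, 4)
print(path[0])   # G01 X1.000 Y0.000
print(path[-1])  # G01 X0.000 Y1.000
```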

  18. DMSP OLS - Operational Linescan System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Visible and infrared imagery from DMSP Operational Linescan System (OLS) instruments are used to monitor the global distribution of clouds and cloud top temperatures...

  19. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance...... for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...... of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda....

  20. Computer Security Systems Enable Access.

    Science.gov (United States)

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  1. Computer-Aided Model Based Analysis for Design and Operation of a Copolymerization Process

    DEFF Research Database (Denmark)

    Lopez-Arenas, Maria Teresa; Sales-Cruz, Alfonso Mauricio; Gani, Rafiqul

    2006-01-01

    The advances in computer science and computational algorithms for process modelling, process simulation, numerical methods and design/synthesis algorithms, makes it advantageous and helpful to employ computer-aided modelling systems and tools for integrated process analysis. This is illustrated....... This will allow analysis of the process behaviour, contribute to a better understanding of the polymerization process, help to avoid unsafe conditions of operation, and to develop operational and optimizing control strategies. In this work, through a computer-aided modeling system ICAS-MoT, two first......, the process design and conditions of operation on the polymer grade and the production rate....

  2. Comparing the architecture of Grid Computing and Cloud Computing systems

    Directory of Open Access Journals (Sweden)

    Abdollah Doavi

    2015-09-01

    Full Text Available Grid computing, or computationally connected networks, is a new network model that allows massive computational operations to be performed using connected resources; in effect, it is a new generation of distributed networks. Grid architecture is recommended because the widespread nature of the Internet makes possible an exciting environment, called the 'Grid', for creating scalable, high-performance, generalized, and secure systems; the central architecture serving this goal is a firmware named GridOS. The term 'cloud computing' means the development and deployment of Internet-based computing technology: a style of computing in which IT-related capabilities are offered as services, allowing users to access technology-based services on the Internet without specific knowledge of the underlying technology and without having to control the IT infrastructure that supports it. The paper gives a general overview of Grid and Cloud systems, then examines the components and services provided by these systems and their security.

  3. Standard operating procedure for computing pangenome trees

    DEFF Research Database (Denmark)

    Snipen, L.; Ussery, David

    2010-01-01

    We present the pan-genome tree as a tool for visualizing similarities and differences between closely related microbial genomes within a species or genus. Distance between genomes is computed as a weighted relative Manhattan distance based on gene family presence/absence. The weights can be chosen with emphasis on groups of gene families conserved to various degrees inside the pan-genome.

  4. Console Networks for Major Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D

    1966-07-22

    A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

  5. Redefining Tactical Operations for MER Using Cloud Computing

    Science.gov (United States)

    Joswig, Joseph C.; Shams, Khawaja S.

    2011-01-01

    The Mars Exploration Rover Mission (MER) includes the twin rovers, Spirit and Opportunity, which have been performing geological research and surface exploration since early 2004. The rovers' durability well beyond their original prime mission (90 sols, or Martian days) has allowed them to be a valuable platform for scientific research for well over 2000 sols, but as a by-product it has produced new challenges in providing efficient and cost-effective tactical operational planning. An early process adaptation was the move to distributed operations as mission scientists returned to their places of work in the summer of 2004, though they would still come together via teleconference and connected software to plan rover activities a few times a week. This distributed model has worked well since, but it requires the purchase, operation, and maintenance of a dedicated infrastructure at the Jet Propulsion Laboratory. This server infrastructure is costly to operate, and the periodic nature of its usage (typically heavy usage for 8 hours every 2 days) has made moving to a cloud-based tactical infrastructure an extremely tempting proposition. In this paper we review both past and current implementations of the tactical planning application, focusing on remote plan saving, and discuss the unique challenges presented by long-latency, distributed operations. We then detail the motivations behind our move to cloud-based computing services, as well as our system design and implementation. We also discuss security and reliability concerns and how they were addressed.

  7. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  8. Video animation system operators manual

    Energy Technology Data Exchange (ETDEWEB)

    Mareda, J.F.

    1992-09-01

    This document describes the components necessary to put together a video animation system. It is primarily intended for use at Sandia National Laboratories, as it describes the components used in systems at Sandia. The main document covers the operation of the equipment in some detail and is intended for either the system maintainer or an advanced user. There is an appendix for each of the three systems in use by the Engineering Sciences Directorate, each of which contains instructions for the general user.

  9. Executing a gather operation on a parallel computer

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2012-03-20

    Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer of the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and storing contribution data gathered from that ranked node. Embodiments also include, repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node; if it does, contributing, by that compute node, the compute node's contribution data; if it does not, contributing, by that compute node, a value of zero for the contribution data; and storing, by the logical root in the current position in the result buffer, the results of a bitwise OR operation over all the contribution data from all compute nodes of the operational group for the current position, the results received through the global combining network.
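The position-by-position OR scheme in this abstract can be simulated compactly in Python; integers stand in for each node's contribution data, and all names are illustrative, not the patent's API.

```python
def gather_via_or(contributions):
    """contributions[rank]: the data word held by the node with that rank.
    For each result-buffer position, the matching node contributes its data
    and every other node contributes zero; the root ORs all contributions,
    so the OR of (data, 0, 0, ...) recovers the data exactly."""
    n = len(contributions)
    result = []
    for position in range(n):
        combined = 0
        for rank in range(n):
            combined |= contributions[rank] if rank == position else 0
        result.append(combined)
    return result

print(gather_via_or([0b001, 0b110, 0b101]))  # [1, 6, 5]
```

The trick is that contributing zero is the identity for bitwise OR, so an allreduce-style combining network can implement a gather without any dedicated point-to-point traffic.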

  10. Dynamical Systems Some Computational Problems

    CERN Document Server

    Guckenheimer, J; Guckenheimer, John; Worfolk, Patrick

    1993-01-01

    We present several topics involving the computation of dynamical systems. The emphasis is on work in progress and the presentation is informal -- there are many technical details which are not fully discussed. The topics are chosen to demonstrate the various interactions between numerical computation and mathematical theory in the area of dynamical systems. We present an algorithm for the computation of stable manifolds of equilibrium points, describe the computation of Hopf bifurcations for equilibria in parametrized families of vector fields, survey the results of studies of codimension two global bifurcations, discuss a numerical analysis of the Hodgkin and Huxley equations, and describe some of the effects of symmetry on local bifurcation.

  11. Operator versus computer control of adaptive automation

    Science.gov (United States)

    Hilburn, Brian; Molloy, Robert; Wong, Dick; Parasuraman, Raja

    1993-01-01

    Adaptive automation refers to real-time allocation of functions between the human operator and automated subsystems. The article reports the results of a series of experiments whose aim is to examine the effects of adaptive automation on operator performance during multi-task flight simulation, and to provide an empirical basis for evaluations of different forms of adaptive logic. The combined results of these studies suggest several things. First, it appears that either excessively long, or excessively short, adaptation cycles can limit the effectiveness of adaptive automation in enhancing operator performance of both primary flight and monitoring tasks. Second, occasional brief reversions to manual control can counter some of the monitoring inefficiency typically associated with long cycle automation, and further, that benefits of such reversions can be sustained for some time after return to automated control. Third, no evidence was found that the benefits of such reversions depend on the adaptive logic by which long-cycle adaptive switches are triggered.

  12. Computational Systems Chemical Biology

    OpenAIRE

    Oprea, Tudor I.; May, Elebeoba E.; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically-based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology, SCB (Oprea et al., 2007).

  13. Determining collective barrier operation skew in a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Faraj, Daniel A.

    2015-11-24

    Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
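The final skew calculation described above reduces to a max-minus-min over the per-round measurements; a minimal sketch (the timing values are illustrative):

```python
def barrier_skew(completion_times):
    """completion_times[i]: barrier completion time measured in the round in
    which node i was the artificially delayed node. Skew is the spread
    between the slowest and fastest measured completion."""
    return max(completion_times) - min(completion_times)

# Four compute nodes, each delayed once in turn:
skew = barrier_skew([1.30, 1.42, 1.31, 1.38])
```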

  14. Standard operating procedure for computing pangenome trees

    OpenAIRE

    Snipen, L; Ussery, David

    2010-01-01

    We present the pan-genome tree as a tool for visualizing similarities and differences between closely related microbial genomes within a species or genus. Distance between genomes is computed as a weighted relative Manhattan distance based on gene family presence/absence. The weights can be chosen with emphasis on groups of gene families conserved to various degrees inside the pan-genome. The software is available for free as an R-package.

  15. Standard operating procedure for computing pangenome trees.

    Science.gov (United States)

    Snipen, Lars; Ussery, David W

    2010-01-28

    We present the pan-genome tree as a tool for visualizing similarities and differences between closely related microbial genomes within a species or genus. Distance between genomes is computed as a weighted relative Manhattan distance based on gene family presence/absence. The weights can be chosen with emphasis on groups of gene families conserved to various degrees inside the pan-genome. The software is available for free as an R-package.
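A minimal sketch of such a weighted relative Manhattan distance over presence/absence vectors, under the assumption that "relative" means normalizing the weighted mismatch count by the total weight (the R package's exact convention may differ):

```python
def weighted_relative_manhattan(a, b, w):
    """a, b: 0/1 gene-family presence/absence vectors for two genomes;
    w: per-family weights (e.g. emphasizing families conserved to various
    degrees). Returns the weighted mismatch count over the total weight."""
    mismatch = sum(wi * abs(ai - bi) for ai, bi, wi in zip(a, b, w))
    return mismatch / sum(w)

# Two genomes differing in two of four gene families:
d = weighted_relative_manhattan([1, 1, 0, 1], [1, 0, 1, 1], [2.0, 1.0, 1.0, 2.0])
```

A tree is then built by computing this distance for every genome pair and feeding the resulting matrix to a standard hierarchical clustering routine.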

  16. On the operating point of cortical computation

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Robert; Stimberg, Marcel; Wimmer, Klaus; Obermayer, Klaus, E-mail: oby@cs.tu-berlin.d [Bernstein Center for Computational Neuroscience Berlin and School of Electrical Engineering and Computer Science, Technische Universitaet Berlin, FR 2-1, Franklinstr. 28/29, D-10587 Berlin (Germany)

    2010-06-01

    In this paper, we consider a class of network models of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map, as found in primary visual cortex (V1). We systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input in order to characterize different operating regimes of the network. We then compare the map-location dependence of the tuning in the networks with different parametrizations with the neuronal tuning measured in cat V1 in vivo. By considering the tuning of neuronal dynamic and state variables, conductances and membrane potential respectively, our quantitative analysis is able to constrain the operating regime of V1: The data provide strong evidence for a network, in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition, operating in vivo. Interestingly, this recurrent regime is close to a regime of 'instability', characterized by strong, self-sustained activity. The firing rate of neurons in the best-fitting model network is therefore particularly sensitive to small modulations of model parameters, possibly one of the functional benefits of this particular operating regime.
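A toy firing-rate caricature, not the paper's Hodgkin-Huxley network, illustrates why a strongly recurrent network close to the instability is especially sensitive to small parameter changes:

```python
def steady_rate(afferent, w_exc, w_inh, n_iter=200):
    """Fixed point of a rectified-linear rate unit driven by afferent input
    plus net recurrent feedback (w_exc - w_inh). The iteration converges
    when the net recurrent gain is below 1 and diverges (self-sustained
    activity) at or above 1; all parameter values are illustrative."""
    r = 0.0
    for _ in range(n_iter):
        r = max(0.0, afferent + (w_exc - w_inh) * r)
    return r

balanced = steady_rate(1.0, 3.0, 2.6)        # net gain 0.4: r -> 1/(1-0.4)
near_unstable = steady_rate(1.0, 3.0, 2.05)  # net gain 0.95: r -> ~20
```

Shifting the inhibitory weight from 2.6 to 2.05 raises the steady rate roughly twelvefold, a caricature of the sensitivity the abstract attributes to the near-instability operating regime.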

  17. Redesigning the District Operating System

    Science.gov (United States)

    Hodas, Steven

    2015-01-01

    In this paper, we look at the inner workings of a school district through the lens of the "district operating system (DOS)," a set of interlocking mutually-reinforcing modules that includes functions like procurement, contracting, data and IT policy, the general counsel's office, human resources, and the systems for employee and family…

  18. Hybridity in Embedded Computing Systems

    Institute of Scientific and Technical Information of China (English)

    虞慧群; 孙永强

    1996-01-01

    An embedded system is a system in which a computer is used as a component of a larger device. In this paper, we study hybridity in embedded systems and present an interval-based temporal logic to express and reason about the hybrid properties of such systems.

  19. The Operating System Management of The Students' Computers in the College Computer Rooms%高校计算机教室学生机操作系统的管理

    Institute of Scientific and Technical Information of China (English)

    孙敏凤

    2011-01-01

    With the rapid development of computer technology, the security of the operating systems on students' computers in college computer rooms faces serious threats. Drawing on years of experience in college computer room administration, common techniques involving the registry, group policy, disk quotas, and networking can be combined to solve the usual security problems and the difficulty of controlling the students' machines, providing quality services for college teaching and improving students' concentration.

  20. Computer algebra in systems biology

    CERN Document Server

    Laubenbacher, Reinhard

    2007-01-01

    Systems biology focuses on the study of entire biological systems rather than on their individual components. With the emergence of high-throughput data generation technologies for molecular biology and the development of advanced mathematical modeling techniques, this field promises to provide important new insights. At the same time, with the availability of increasingly powerful computers, computer algebra has developed into a useful tool for many applications. This article illustrates the use of computer algebra in systems biology by way of a well-known gene regulatory network, the Lac Operon in the bacterium E. coli.
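Computer algebra tools typically treat such gene regulatory networks as polynomial dynamical systems over {0,1}; a toy synchronous Boolean update loosely inspired by the Lac operon (the variables and rules here are illustrative, not the article's actual model):

```python
def lac_step(state):
    """One synchronous update of a toy 3-variable Boolean network:
    m = lac mRNA, e = enzyme, l = lactose. Over {0,1} these rules are the
    polynomials m' = l, e' = m, l' = l(1 - e)."""
    m, e, l = state
    return (l,            # transcription is on while lactose is present
            m,            # enzyme production follows the mRNA
            l * (1 - e))  # lactose is consumed once the enzyme is present

# Iterate from "lactose present, nothing expressed":
trajectory = [(0, 0, 1)]
for _ in range(4):
    trajectory.append(lac_step(trajectory[-1]))
print(trajectory)
```

Writing the update rules as polynomials is what lets computer algebra systems analyze fixed points and cycles symbolically instead of by exhaustive simulation.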

  1. WATER SUPPLY SYSTEMS OPERATIONAL PROGNOSIS

    Directory of Open Access Journals (Sweden)

    Bruno Santos Vieira

    2016-12-01

    Full Text Available Planning actions to minimize risks and ensure the effectiveness of water supply systems requires the use of appropriate forecasting models. Indeed, forecasting system behavior and analyzing future scenarios can be supported by simulation techniques and models. In this article, we propose a procedure to simulate the actions of decision-makers in planning the operation of this type of system, in order to obtain an operating and financial prognosis that considers dynamic influences. The applicability of the proposed procedure is demonstrated on an urban water supply planning problem. As a result, we obtained a probability distribution of system costs, which improves decision making in the context of the analyzed system. Additionally, the proposed procedure is applicable to other types of complex systems subject to dynamic influences.
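The kind of prognosis the article describes can be caricatured with a Monte Carlo sketch: sample the dynamic influences, apply a simple operating rule, and collect the resulting cost distribution. All quantities, units, and distributions below are illustrative assumptions, not figures from the article.

```python
import random

def cost_distribution(n_runs=5000, seed=1):
    """Toy Monte Carlo prognosis: yearly pumping cost under uncertain daily
    demand and energy price, yielding a distribution of system costs
    instead of a single point estimate."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_runs):
        demand = max(0.0, rng.gauss(100.0, 10.0))  # m^3/day (assumed)
        price = rng.uniform(0.08, 0.12)            # currency per kWh (assumed)
        energy_per_m3 = 0.5                        # kWh to pump one m^3 (toy)
        costs.append(demand * energy_per_m3 * price * 365)
    return costs

costs = cost_distribution()
```

Percentiles of `costs` then stand in for the "distribution of system costs" used to compare planning alternatives.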

  2. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  3. Gas-operated motor systems

    Energy Technology Data Exchange (ETDEWEB)

    Rilett, J.W.

    1980-09-30

    A gas-operated motor system of the stored energy type-as disclosed in U.S. Pat. No. 4,092,830-in which the gas exhausted from the motor is ducted to a chamber during operation of the motor and thereafter compressed back into the gas reservoir vessel. Recompression may be achieved, e.g., by providing the exhaust gas chamber with a movable piston, or by running the motor in the reverse mode as a compressor.

  4. Conflict Resolution in Computer Systems

    Directory of Open Access Journals (Sweden)

    G. P. Mojarov

    2015-01-01

    Full Text Available A conflict situation in computer systems (CS) arises when processes share access to common resources and none of the involved processes can proceed, because each is waiting for resources locked by other processes that are, in turn, in the same position. Such a situation is also called a deadlock, and it has a clear impact on the state of the CS. Finding practical algorithms for resolving deadlocks is of significant applied importance for ensuring the information security of the computing process, and the present article is aimed at this relevant problem. The severity of the situation depends on the types of processes in a deadlock, the types of resources used, the number of processes, and many other factors. A disadvantage of the deadlock-prevention method used in many modern operating systems, based on advance planning of the resources required by a process, is obvious: the waiting time can be very long. Prevention by interrupting a process and deallocating its resources is very specific and not very effective when there is a set of heterogeneous, dynamically requested resources. The drawback of another method, preventing deadlocks by ordering resources, is that it restricts the possible sequences of resource requests. A different way of combating deadlocks is deadlock avoidance, which aims to predict deadlocks before they appear. Known methods [1,4,5] define and prevent the conditions under which deadlocks may occur, using preliminary information about which resources a running process can request. Before a free resource is allocated to a process, the resulting state is tested for a "safety" condition: the state is "safe" if no deadlock can occur in the future as a result of the allocation. Otherwise the state is considered "hazardous", and the allocation is postponed. The obvious
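The "safe state" test described in this abstract is essentially the classic Banker's algorithm safety check; a compact sketch (the resource figures in the example are the standard textbook illustration, not data from the article):

```python
def is_safe(available, allocation, maximum):
    """Return True if some completion order exists in which every process
    can obtain its maximum claim and finish. An allocation is postponed
    whenever the state it would produce fails this test."""
    n = len(allocation)
    # need[i] = maximum claim of process i minus what it already holds.
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work, finished = list(available), [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(x <= y for x, y in zip(need[i], work)):
                # Process i can run to completion and release its resources.
                work = [y + a for y, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

safe = is_safe([3, 3, 2],
               [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
               [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]])
```

The check runs in O(n²) passes over the processes, which is why avoidance schemes of this kind require each process to declare its maximum claims in advance.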

  5. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  7. Computation and design of autonomous intelligent systems

    Science.gov (United States)

    Fry, Robert L.

    2008-04-01

    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.

  8. Computer-Aided Transformation of PDE Models: Languages, Representations, and a Calculus of Operations

    Science.gov (United States)

    2016-01-05

    Computer-aided transformation of PDE models: languages, representations, and a calculus of operations. A domain-specific embedded language called ibvp was developed to model initial... 1 Vision and background: Physical and engineered systems

  9. Robot computer problem solving system

    Science.gov (United States)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken, in relation to various studies of cognition and robotics, were formulated. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  10. Operating System for Runtime Reconfigurable Multiprocessor Systems

    Directory of Open Access Journals (Sweden)

    Diana Göhringer

    2011-01-01

    Full Text Available Operating systems traditionally handle the task scheduling of one or more application instances on processor-like hardware architectures. RAMPSoC, a novel runtime-adaptive multiprocessor System-on-Chip, exploits dynamic reconfiguration on FPGAs to generate, start, and terminate hardware and software tasks. The hardware tasks have to be transferred to the reconfigurable hardware via a configuration access port. The software tasks can be loaded into the local memory of the respective IP core either via the configuration access port or via the on-chip communication infrastructure (e.g. a Network-on-Chip). Recent series of Xilinx FPGAs, such as the Virtex-5, provide two Internal Configuration Access Ports, which cannot be accessed simultaneously. To prevent conflicts, access to these ports, as well as the hardware resource management, needs to be controlled, e.g. by a special-purpose operating system running on an embedded processor. For that purpose, and to handle the relations between temporally and spatially scheduled operations, such an operating system is of high importance. This special-purpose operating system, called CAP-OS (Configuration Access Port Operating System), which will be presented in this paper, supports the clients using the configuration port with the services of priority-based access scheduling, hardware task mapping, and resource management.
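    The priority-based access scheduling that CAP-OS provides for the single configuration port can be sketched roughly as follows; the task names and priority values are hypothetical, not taken from CAP-OS itself.

```python
# Rough sketch of priority-based scheduling of a single shared
# configuration port, in the spirit of CAP-OS. Names are hypothetical.
import heapq

class ConfigPortScheduler:
    def __init__(self):
        self._queue = []        # (priority, seq, task) min-heap
        self._seq = 0           # tie-breaker keeps FIFO order per priority

    def request(self, task, priority):
        heapq.heappush(self._queue, (priority, self._seq, task))
        self._seq += 1

    def grant_next(self):
        """Grant the port to the highest-priority waiting task."""
        if not self._queue:
            return None
        _, _, task = heapq.heappop(self._queue)
        return task

sched = ConfigPortScheduler()
sched.request("reconfigure_filter_core", priority=2)
sched.request("load_software_task", priority=5)
sched.request("reconfigure_fft_core", priority=1)   # lower number = higher priority
print(sched.grant_next())  # → reconfigure_fft_core
```

    Because the port is exclusive, each grant must complete (or be preempted) before the next request is served; the heap simply decides who goes next.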

  11. NIF Integrated Computer Controls System Description

    Energy Technology Data Exchange (ETDEWEB)

    VanArsdall, P.

    1998-01-26

    This System Description introduces the NIF Integrated Computer Control System (ICCS). The architecture is sufficiently abstract to allow the construction of many similar applications from a common framework. As discussed below, over twenty software applications derived from the framework comprise the NIF control system. This document lays the essential foundation for understanding the ICCS architecture. The NIF design effort is motivated by the magnitude of the task. Figure 1 shows a cut-away rendition of the coliseum-sized facility. The NIF requires integration of about 40,000 atypical control points, must be highly automated and robust, and will operate continuously around the clock. The control system coordinates several experimental cycles concurrently, each at different stages of completion. Furthermore, facilities such as the NIF represent major capital investments that will be operated, maintained, and upgraded for decades. The computers, control subsystems, and functionality must be relatively easy to extend or replace periodically with newer technology.

  13. Advanced Transport Operating Systems Program

    Science.gov (United States)

    White, John J.

    1990-01-01

    NASA-Langley's Advanced Transport Operating Systems Program employs a heavily instrumented, B 737-100 as its Transport Systems Research Vehicle (TRSV). The TRSV has been used during the demonstration trials of the Time Reference Scanning Beam Microwave Landing System (TRSB MLS), the '4D flight-management' concept, ATC data links, and airborne windshear sensors. The credibility obtainable from successful flight test experiments is often a critical factor in the granting of substantial commitments for commercial implementation by the FAA and industry. In the case of the TRSB MLS, flight test demonstrations were decisive to its selection as the standard landing system by the ICAO.

  14. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  15. Telemetry Computer System at Wallops Flight Center

    Science.gov (United States)

    Bell, H.; Strock, J.

    1980-01-01

    This paper describes the Telemetry Computer System in operation at NASA's Wallops Flight Center for real-time or off-line processing, storage, and display of telemetry data from rockets and aircraft. The system accepts one or two PCM data streams and one FM multiplex, converting each type of data into computer format and merging time-of-day information. A data compressor merges the active streams, and removes redundant data if desired. Dual minicomputers process data for display, while storing information on computer tape for further processing. Real-time displays are located at the station, at the rocket launch control center, and in the aircraft control tower. The system is set up and run by standard telemetry software under control of engineers and technicians. Expansion capability is built into the system to take care of possible future requirements.
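    The redundancy removal performed by the data compressor described above can be sketched as a simple change-only filter; this is a hedged illustration, not the Wallops system's actual compression format.

```python
# Sketch of telemetry redundancy removal: pass a parameter sample
# through only when its value changes. Sample data is hypothetical.
def compress(samples):
    out, last = [], object()       # sentinel compares unequal to any sample
    for t, value in samples:
        if value != last:
            out.append((t, value))
            last = value
    return out

stream = [(0, 101), (1, 101), (2, 101), (3, 107), (4, 107), (5, 101)]
print(compress(stream))  # → [(0, 101), (3, 107), (5, 101)]
```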

  16. Building Low Cost Cloud Computing Systems

    Directory of Open Access Journals (Sweden)

    Carlos Antunes

    2013-06-01

    Full Text Available The actual models of cloud computing are based in megalomaniac hardware solutions, being its implementation and maintenance unaffordable to the majority of service providers. The use of jail services is an alternative to current models of cloud computing based on virtualization. Models based in utilization of jail environments instead of the used virtualization systems will provide huge gains in terms of optimization of hardware resources at computation level and in terms of storage and energy consumption. In this paper it will be addressed the practical implementation of jail environments in real scenarios, which allows the visualization of areas where its application will be relevant and will make inevitable the redefinition of the models that are currently defined for cloud computing. In addition it will bring new opportunities in the development of support features for jail environments in the majority of operating systems.

  17. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  18. Monitoring data transfer latency in CMS computing operations

    CERN Document Server

    Bonacorsi, D; Magini, N; Sartirana, A; Taze, M; Wildish, T

    2015-01-01

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different fact...
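    The completion-time prediction mentioned above can be illustrated by a naive throughput extrapolation; this is a sketch only, not PhEDEx's actual estimator.

```python
# Naive completion-time estimate from observed average throughput.
# The transfer sizes and timings below are hypothetical.
def eta_seconds(bytes_total, bytes_done, elapsed_s):
    if bytes_done == 0:
        return float("inf")        # no progress yet: no finite estimate
    rate = bytes_done / elapsed_s  # average throughput so far
    return (bytes_total - bytes_done) / rate

# 40 GB dataset, 10 GB done in 2 hours -> 6 more hours at this rate
print(eta_seconds(40e9, 10e9, 7200))  # → 21600.0
```

    A stuck file shows up in such a model as an estimate that keeps growing instead of shrinking, which is the abnormal pattern operators look for.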

  19. The Computer-Aided Analytic Process Model. Operations Handbook for the Analytic Process Model Demonstration Package

    Science.gov (United States)

    1986-01-01

    Research Note 86-06: The Computer-Aided Analytic Process Model: Operations Handbook for the Analytic Process Model Demonstration Package. Ronald G... Keywords: Analytic Process Model; Operations Handbook; Tutorial; Apple; Systems Taxonomy Model; Training System; Bradley Infantry Fighting Vehicle; BIFV... Item 20, Abstract (continued): companion volume -- "The Analytic Process Model for

  20. Basic Operational Robotics Instructional System

    Science.gov (United States)

    Todd, Brian Keith; Fischer, James; Falgout, Jane; Schweers, John

    2013-01-01

    The Basic Operational Robotics Instructional System (BORIS) is a six-degree-of-freedom rotational robotic manipulator system simulation used for training of fundamental robotics concepts, with in-line shoulder, offset elbow, and offset wrist. BORIS is used to provide generic robotics training to aerospace professionals including flight crews, flight controllers, and robotics instructors. It uses forward kinematic and inverse kinematic algorithms to simulate joint and end-effector motion, combined with a multibody dynamics model, moving-object contact model, and X-Windows based graphical user interfaces, coordinated in the Trick Simulation modeling environment. The motivation for development of BORIS was the need for a generic system for basic robotics training. Before BORIS, introductory robotics training was done with either the SRMS (Shuttle Remote Manipulator System) or SSRMS (Space Station Remote Manipulator System) simulations. The unique construction of each of these systems required some specialized training that distracted students from the ideas and goals of the basic robotics instruction.
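    The forward-kinematics computation BORIS relies on can be illustrated on a much simpler chain, a two-link planar arm; the link lengths and joint angles below are hypothetical, not BORIS parameters.

```python
# Forward kinematics for a 2-link planar arm (far simpler than BORIS's
# six-DOF manipulator, but the same idea: joint angles -> end effector).
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector (x, y) for joint angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

x, y = forward_kinematics(1.0, 1.0, math.pi / 2, -math.pi / 2)
print(round(x, 6), round(y, 6))  # → 1.0 1.0
```

    Inverse kinematics, the other algorithm the abstract names, runs the same relationship in reverse: given a target (x, y), solve for the joint angles.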

  1. ADAMS executive and operating system

    Science.gov (United States)

    Pittman, W. D.

    1981-01-01

    The ADAMS Executive and Operating System is described: a multitasking environment under which a variety of data reduction, display, and utility programs are executed, providing a high level of isolation between programs and allowing them to be developed and modified independently. The Airborne Data Analysis/Monitor System (ADAMS) was developed to provide a real-time data monitoring and analysis capability onboard Boeing commercial airplanes during flight testing. It inputs sensor data from an airplane, produces performance data by applying transforms to the collected sensor data, and presents this data to test personnel via various display media. Current utilization and future development are addressed.
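    The transform step described above, turning raw sensor readings into performance data, can be sketched as a linear calibration; the parameter names and gain/offset values are hypothetical, not ADAMS data.

```python
# Sketch of a sensor transform: convert raw counts to engineering
# units via a linear calibration. All calibration values are made up.
CALIBRATION = {
    "airspeed": (0.05, -12.0),   # (gain, offset): knots per count, knots
    "altitude": (2.5, 0.0),      # feet per count, feet
}

def to_engineering_units(sensor, raw_count):
    gain, offset = CALIBRATION[sensor]
    return gain * raw_count + offset

print(round(to_engineering_units("airspeed", 5000), 3))
```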

  2. Operator radiation exposure in cone-beam computed tomography guidance

    NARCIS (Netherlands)

    Braak, S.J.; Strijen Van, M. J L; Meijer, E.; Heesewijk Van, J. P M; Mali, W. P T M

    2016-01-01

    Objectives: Quantitative analysis of operator dose in cone-beam computed tomography guidance (CBCT-guidance) and the effect of protective shielding. Methods: Using a Rando phantom, a model was set-up to measure radiation dose for the operator hand, thyroid and gonad region. The effect of sterile rad

  3. Computational examples of rational string operations on Gorenstein spaces

    OpenAIRE

    Naito, Takahito

    2015-01-01

    In this paper, we give computational examples of string operations over the rational numbers field on Gorenstein spaces introduced by Félix and Thomas. Especially, we determine the structure of rational string operations on the classifying space of a compact connected Lie group and the Borel construction associated to an action of $S^{1}$ to $S^{2}$.

  4. On Dependability of Computing Systems

    Institute of Scientific and Technical Information of China (English)

    XU Shiyi

    1999-01-01

    With the rapid development and wide applications of computing systems, on which more and more reliance is placed, a dependable system will be much more important than ever. This paper is first aimed at giving informal but precise definitions characterizing the various attributes of dependability of computing systems; the importance of (and the relationships among) all the attributes are then explained. Dependability is first introduced as a global concept which subsumes the usual attributes of reliability, availability, maintainability, safety and security. The basic definitions given here are then commented on and supplemented by detailed material and additional explanations in the subsequent sections. The presentation has been structured as follows so as to attract the reader's attention to the important attributes of dependability: * Search for a small number of concise concepts enabling the dependability attributes to be expressed as clearly as possible. * Use of terms which are identical or as close as possible to those commonly used nowadays. This paper is also intended to provoke people's interest in designing a dependable computing system.

  5. Longwall coal mining operations computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Roxborough, F.F.

    1982-01-01

    This research thesis provides the mining analyst with an effective means of experimentation with any mining layout. SIMCAL is a generalised simulation program suitable for investigating different models. The models are constructed by arranging elements called activities, equipment items, memories and branches. The branches allow any number of activities to occur simultaneously and therefore allow the construction of a model even for the most complex real world system. Reports of the analysis are produced in tabular form and can be generated on a shift to shift basis together with graphical displays. After describing the ideas and procedures inherent in SIMCAL, a bord and pillar model was constructed and tested. The same problem was also tested in simulation program COALSIM. The two programs were compared and the existing differences explained. An initial model for a longwall method of mining is discussed and several interesting variations of modelling possibilities listed. The complete listing of the main program SIMCAL and the plotting program SIMPLOT are supplied.

  6. A community-based study of asthenopia in computer operators

    Directory of Open Access Journals (Sweden)

    Bhanderi Dinesh

    2008-01-01

    Full Text Available Context: There is a growing body of evidence that the use of computers can adversely affect visual health. Considering the rising number of computer users in India, computer-related asthenopia might take an epidemic form. In view of that, this study was undertaken to find out the magnitude of asthenopia in computer operators and its relationship with various personal and workplace factors. Aims: To study the prevalence of asthenopia among computer operators and its association with various epidemiological factors. Settings and Design: Community-based cross-sectional study of 419 subjects who worked on computers for varying periods of time. Materials and Methods: Four hundred forty computer operators working in different institutes were selected randomly. Twenty-one did not participate in the study, making the nonresponse rate 4.8%. The rest of the subjects (n = 419) were asked to fill in a pre-tested questionnaire, after obtaining their verbal consent. Other relevant information was obtained by personal interview and inspection of the workstation. Statistical Analysis Used: Simple proportions and Chi-square test. Results: Among the 419 subjects studied, 194 (46.3%) suffered from asthenopia during or after work on the computer. A marginally higher proportion of asthenopia was noted in females compared to males. Occurrence of asthenopia was significantly associated with the age at which computer use was started, presence of refractive error, viewing distance, level of the top of the computer screen with respect to the eyes, use of an antiglare screen, and adjustment of contrast and brightness of the monitor screen. Conclusions: The prevalence of asthenopia was noted to be quite high among computer operators, particularly in those who started computer use at an early age. Individual as well as work-related factors were found to be predictive of asthenopia.
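    The Chi-square test of association used in the study can be illustrated for a single 2x2 table (say, asthenopia vs. antiglare screen use); the counts below are made up for illustration and are not the study's data.

```python
# Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]].
# The counts are hypothetical, chosen only to demonstrate the formula.
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# rows: asthenopia yes/no; columns: antiglare screen no/yes
stat = chi_square_2x2(120, 74, 105, 120)
print(round(stat, 2))
```

    A statistic above 3.84 (the 0.05 critical value for one degree of freedom) would indicate a significant association, which is how the study's workplace factors were screened.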

  7. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita

    2011-01-01

    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  8. Realization Proposal and Security Model of an Operating System Based on Trusted Computing

    Institute of Scientific and Technical Information of China (English)

    潘大庆; 张爱科; 谢翠兰

    2011-01-01

    Analyzing the security elements of the trusted computing platform, this paper investigates an integrity protection strategy, based on trusted computing principles, for the core information of a running operating system. Taking the Windows Vista operating system as an example, it elaborates the system's data integrity protection strategy and the framework for establishing a trusted environment. In a trusted operating system, the data integrity protection strategy is designed around a root-of-trust security mechanism and a trust transmission mechanism, from which a complete information integrity protection system is established. On this basis, a new model for operating system information integrity protection based on trusted computing is proposed and realized, and its working principle, composition structure, and work process are elaborated.
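    The root-of-trust and trust-transmission mechanism described above can be sketched as a hash chain in which each stage's measurement is folded into an accumulator, in the style of a TPM PCR extend; the boot components here are hypothetical.

```python
# Sketch of trust transmission: each stage measures (hashes) the next
# before handing off, so any tampering changes the final measurement.
# Component contents are hypothetical.
import hashlib

def measure_chain(components):
    """Fold component measurements into one value, root-of-trust style."""
    state = b"\x00" * 32                     # trust root (e.g., a PCR's initial value)
    for blob in components:
        digest = hashlib.sha256(blob).digest()
        state = hashlib.sha256(state + digest).digest()
    return state.hex()

boot_chain = [b"bootloader v1", b"kernel v5.4", b"init system"]
good = measure_chain(boot_chain)
tampered = measure_chain([b"bootloader v1", b"EVIL kernel", b"init system"])
print(good != tampered)  # → True
```

    Integrity protection then reduces to comparing the final measurement against a known-good reference before trusting the system's core information.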

  9. Reproducibility of neuroimaging analyses across operating systems.

    Science.gov (United States)

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.
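    The root cause identified above, single-precision floating-point arithmetic, is easy to reproduce: rounding each partial sum to IEEE 754 single precision drifts measurably from a double-precision sum, so any change in a library's implementation of such operations can shift downstream results.

```python
# Demonstrate single- vs double-precision accumulation drift using
# only the standard library (struct round-trips a float through
# IEEE 754 single precision).
import struct

def f32(x):
    """Round a Python float to the nearest IEEE 754 single."""
    return struct.unpack("f", struct.pack("f", x))[0]

single = 0.0
double = 0.0
for _ in range(100000):
    single = f32(single + f32(0.1))
    double += 0.1

print(abs(single - double) > 1e-6)  # → True
```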

  10. 14 CFR 417.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  11. Computers in Information Sciences: On-Line Systems.

    Science.gov (United States)

    COMPUTERS, *BIBLIOGRAPHIES, *ONLINE SYSTEMS, *INFORMATION SCIENCES, DATA PROCESSING, DATA MANAGEMENT, COMPUTER PROGRAMMING, INFORMATION RETRIEVAL, COMPUTER GRAPHICS, DIGITAL COMPUTERS, ANALOG COMPUTERS.

  12. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
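    A classic instance of the kind of equation such a book develops is the M/M/1 queue, where utilization and mean response time follow directly from the arrival and service rates:

```python
# M/M/1 queue: the simplest analytical performance model. Rates are
# hypothetical (8 requests/s arriving at a server that handles 10/s).
def mm1(lam, mu):
    assert lam < mu, "queue is unstable when arrivals outpace service"
    rho = lam / mu                  # utilization
    response = 1.0 / (mu - lam)     # mean time in system (seconds)
    return rho, response

rho, response = mm1(lam=8.0, mu=10.0)
print(rho, response)  # → 0.8 0.5
```

    The striking consequence the model exposes is non-linearity: at 80% utilization the mean response time is already five times the bare service time of 0.1 s.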

  13. Adaptive Fuzzy Systems in Computational Intelligence

    Science.gov (United States)

    Berenji, Hamid R.

    1996-01-01

    In recent years, the interest in computational intelligence techniques, which currently include neural networks, fuzzy systems, and evolutionary programming, has grown significantly, and a number of their applications have been developed in government and industry. In the future, an essential element in these systems will be fuzzy systems that can learn from experience by using neural networks to refine their performance. The GARIC architecture, introduced earlier, is an example of a fuzzy reinforcement learning system which has been applied in several control domains such as cart-pole balancing, simulation of Space Shuttle orbital operations, and tether control. A number of examples from GARIC's applications in these domains will be demonstrated.

  14. Discourse in Systemic Operational Design

    Science.gov (United States)

    2007-05-22

    influence of Foucault’s theories of power, particularly work earlier in his career. At one end of the interpretation of Foucault, overarching and...omnipresent impersonal discourses do not allow individual agency, from a resistance point of view or otherwise.60 Another view is that Foucault acknowledges...influence of Foucault on discourse theory related to systemic operational design, it is helpful to look at three particular meanings he attributes to

  15. Cronus: A Distributed Operating System.

    Science.gov (United States)

    1983-11-01

    Report No. 5086, Bolt Beranek and Newman Inc. ...423 machine interface. ...standard operating systems (e.g., a Digital Equipment Corporation VAX... o One from Ungermann-Bass, Inc. o ProNet from Proteon Associates o PolyNet from Logica, Inc. o ...configuration. Polynet from Logica, Inc.: Polynet is a commercial version of the Cambridge University Ring Network that has become quite popular in the

  16. A Computerized Operator Support System Prototype

    Energy Technology Data Exchange (ETDEWEB)

    Ken Thomas; Ronald Boring; Roger Lew; Tom Ulrich; Richard Villim

    2013-11-01

    A report was published by the Idaho National Laboratory in September of 2012, entitled Design to Achieve Fault Tolerance and Resilience, which described the benefits of automating operator actions for transients. The report identified situations in which providing additional automation in lieu of operator actions would be advantageous. It recognized that managing certain plant upsets is sometimes limited by the operator’s ability to quickly diagnose the fault and to take the needed actions in the time available. Undoubtedly, technology is underutilized in the nuclear power industry for operator assistance during plant faults and operating transients. In contrast, other industry sectors have amply demonstrated that various forms of operator advisory systems can enhance operator performance while maintaining the role and responsibility of the operator as the independent and ultimate decision-maker. A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS does not supplant the role of the operator, but rather provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. This project proposes a general model for a control room COSS that addresses a sequence of general tasks required to manage any plant upset: detection, validation, diagnosis, recommendation, monitoring, and recovery. The model serves as a framework for assembling a set of technologies that can be interrelated to assist with each of these tasks. A prototype COSS has been developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based
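    The six-stage sequence named above (detection through recovery) can be sketched as a simple pipeline; the handlers here are trivial stubs standing in for real plant diagnostics, and the event name is hypothetical.

```python
# Sketch of the COSS general model: six stages run in order, each
# refining the assessment of a plant upset. Handlers are stubs.
PIPELINE = ["detection", "validation", "diagnosis",
            "recommendation", "monitoring", "recovery"]

def run_coss(event, handlers):
    """Run every stage in order, threading the event state through."""
    state = {"event": event, "log": []}
    for stage in PIPELINE:
        state = handlers[stage](state)
        state["log"].append(stage)
    return state

# Trivial stand-in handlers: each stage just passes the state along.
handlers = {stage: (lambda s: s) for stage in PIPELINE}
result = run_coss("feedwater transient", handlers)
print(result["log"])
```

    In a real COSS each handler would interrogate plant sensors and procedures; the framework's value is that the stages are interrelated but separable, so technologies can be swapped per stage.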

  17. A Computerized Operator Support System Prototype

    Energy Technology Data Exchange (ETDEWEB)

    Ken Thomas; Ronald Boring; Roger Lew; Tom Ulrich; Richard Villim

    2013-08-01

    A report was published by the Idaho National Laboratory in September of 2012, entitled Design to Achieve Fault Tolerance and Resilience, which described the benefits of automating operator actions for transients. The report identified situations in which providing additional automation in lieu of operator actions would be advantageous. It recognized that managing certain plant upsets is sometimes limited by the operator’s ability to quickly diagnose the fault and to take the needed actions in the time available. Undoubtedly, technology is underutilized in the nuclear power industry for operator assistance during plant faults and operating transients. In contrast, other industry sectors have amply demonstrated that various forms of operator advisory systems can enhance operator performance while maintaining the role and responsibility of the operator as the independent and ultimate decision-maker. A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS does not supplant the role of the operator, but rather provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. This project proposes a general model for a control room COSS that addresses a sequence of general tasks required to manage any plant upset: detection, validation, diagnosis, recommendation, monitoring, and recovery. The model serves as a framework for assembling a set of technologies that can be interrelated to assist with each of these tasks. A prototype COSS has been developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based

  18. Time Warp Operating System, Version 2.5.1

    Science.gov (United States)

    Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.; Younger, Herbert C.

    1993-01-01

    The Time Warp Operating System (TWOS) is a special-purpose computer program designed to support parallel simulation of discrete events. It is a complete implementation of the Time Warp software mechanism, which implements a distributed protocol for virtual synchronization based on rollback of processes and annihilation of messages. TWOS supports simulations and other computations in which both virtual time and dynamic load balancing are used. The program utilizes the underlying resources of the host operating system. It is written in the C programming language.
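The rollback-and-annihilation protocol the abstract describes can be sketched in miniature (Python here for brevity; TWOS itself is written in C). The class below is an illustrative toy, not the TWOS implementation: each logical process snapshots its state before every event, and a "straggler" message arriving in its virtual past triggers a rollback and the cancellation of sends from the undone interval.

```python
# Toy sketch of Time Warp's optimistic synchronization: state snapshots,
# rollback on a straggler message, and anti-message bookkeeping.
class LogicalProcess:
    def __init__(self):
        self.lvt = 0               # local virtual time
        self.state = 0
        self.snapshots = [(0, 0)]  # (virtual time, state) history
        self.sent = []             # (send_time, msg), for anti-messages

    def handle(self, timestamp, value):
        if timestamp < self.lvt:   # straggler: undo optimistic work
            self.rollback(timestamp)
        self.snapshots.append((self.lvt, self.state))
        self.lvt = timestamp
        self.state += value        # the "event computation" (toy)
        self.sent.append((timestamp, value))

    def rollback(self, to_time):
        # restore the latest snapshot taken at or before to_time
        while self.snapshots[-1][0] > to_time:
            self.snapshots.pop()
        self.lvt, self.state = self.snapshots[-1]
        # anti-messages would be dispatched here to annihilate these sends
        cancelled = [m for m in self.sent if m[0] > to_time]
        self.sent = [m for m in self.sent if m[0] <= to_time]
        return cancelled
```

Processing events at virtual times 10 and 20 and then receiving a straggler for time 15 rolls the process back to its time-10 state before the straggler is applied.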

  20. Simulation of Parallel Logical Operations with Biomolecular Computing

    Directory of Open Access Journals (Sweden)

    Mahnaz Kadkhoda

    2008-01-01

    Biomolecular computing is a computational method that uses the potential of DNA as a parallel computing device. DNA computing can be used to solve NP-complete problems. An appropriate application of DNA computation is the large-scale evaluation of parallel computation models such as Boolean circuits. In this study, we present a molecular-based algorithm for the evaluation of NAND-based Boolean circuits. The contribution of this paper is that the proposed algorithm is implemented using only three molecular operations, and the number of passes in each level is decreased to less than half of that previously reported in the literature. Thus, the proposed algorithm is much easier to implement in the laboratory.
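For reference, the computation such a molecular algorithm simulates — evaluating a circuit built only from NAND gates — looks like this conventionally. The XOR-from-four-NANDs circuit below is a standard textbook construction, not one taken from the paper:

```python
# Conventional (silicon) evaluation of a NAND-only Boolean circuit,
# shown to illustrate what the DNA algorithm computes, not how.
def nand(a, b):
    return 1 - (a & b)

def evaluate(gates, inputs):
    """gates: list of (i, j) index pairs into the growing value list;
    inputs occupy the first slots, each gate appends its output."""
    values = list(inputs)
    for i, j in gates:
        values.append(nand(values[i], values[j]))
    return values[-1]

# XOR(a, b) from four NAND gates; slots 0 and 1 hold the inputs.
XOR_GATES = [(0, 1), (0, 2), (1, 2), (3, 4)]
```

Evaluating `XOR_GATES` over all four input pairs reproduces the XOR truth table.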

  1. Operational success - Flat-plate photovoltaic systems

    Science.gov (United States)

    Risser, V. V.; Zwibel, H. S.

    The performance to date of 20 and 100 kW peak DOE photovoltaic array demonstration projects in New Mexico and Texas is reported. An El Paso 20 kW unit feeds power to an uninterruptible power supply for a computer controlling a 197 MW generator. System availability has been 97 percent over more than 800 days of operation, with monthly efficiencies of 5.3-6.2 percent. The Lovington, NM 100 kW unit has operated at an average 6.7 percent efficiency, furnishing over 15.8 MWh/mo over a 2 yr period. Its availability has been 99 percent, although at increased costs due to regular maintenance.

  2. Operational Management System for Regulated Water Systems

    Science.gov (United States)

    van Loenen, A.; van Dijk, M.; van Verseveld, W.; Berger, H.

    2012-04-01

    Most of the large Dutch rivers, canals and lakes are controlled by the Dutch water authorities, chiefly for reasons of safety, navigation and fresh water supply. Historically, the separate water bodies have been controlled locally; optimizing the management of these water systems required an integrated approach. Presented is a platform which integrates data from all control objects for monitoring and control purposes. The Operational Management System for Regulated Water Systems (IWP) is an implementation of Delft-FEWS which supports operational control of water systems and actively gives advice. One of the main characteristics of IWP is that it collects, transforms and presents different types of data in real time, all of which contribute to operational water management. In addition, hydrodynamic models and intelligent decision support tools are included to support the water managers during their daily control activities. An important advantage of IWP is that it uses the Delft-FEWS framework, so processes like central data collection, transformation, data processing and presentation are simply configured. The same information is readily available at all control locations. The operational water management itself gains from this information, but it can also contribute to cost efficiency (no unnecessary pumping), better use of available storage, and advice during calamities such as water pollution.

  3. Operating System Performance Analyzer for Embedded Systems

    Directory of Open Access Journals (Sweden)

    Shahzada Khayyam Nisar

    2011-11-01

    An RTOS provides a number of services to embedded system designs, such as task management, memory management, and resource management, with which to build a program. Choosing an OS for an embedded system is usually based on the operating systems available to the system designers and their previous knowledge and experience, which can cause a mismatch between the OS and the embedded system. RTOS performance analysis is critical in the design and integration of embedded software to ensure that the application meets its limits at runtime. To select an appropriate operating system for an embedded system and a particular application, the OS services must be analyzed. These OS services are characterized by parameters that establish performance metrics. The performance metrics selected here include context switching time, preemption time and interrupt latency; they are analyzed to choose the right OS for an embedded system and a particular application.
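The first of those metrics, context-switch time, is conventionally measured with a ping-pong handoff between two tasks. A rough Python analogue is sketched below; Python on a general-purpose OS only approximates what a real RTOS benchmark (typically in C on bare metal) would measure, but the structure of the measurement is the same:

```python
import threading
import time

def handoff_latency(rounds=1000):
    """Average one-way handoff time between two threads via events;
    a crude stand-in for an RTOS context-switch benchmark."""
    ping, pong = threading.Event(), threading.Event()

    def responder():
        for _ in range(rounds):
            ping.wait()   # wait for the main thread's turn to end
            ping.clear()
            pong.set()    # hand control back

    t = threading.Thread(target=responder)
    t.start()
    start = time.perf_counter()
    for _ in range(rounds):
        ping.set()        # hand control to the responder
        pong.wait()
        pong.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / (2 * rounds)  # two switches per round trip
```

The result includes scheduler and interpreter overhead, so it is an upper bound on the underlying OS switch time.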

  4. Top 10 Threats to Computer Systems Include Professors and Students

    Science.gov (United States)

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  6. Operational research in weapon system

    Directory of Open Access Journals (Sweden)

    R. S. Varma

    1958-04-01

    "The paper is divided into three parts: (a) The first part deals with what operational research is. (b) The second part explains what we mean by weapon systems and discusses the considerations that determine the choice of a particular weapon system from a class of weapon systems. (c) The third part deals with some aspects of weapon replacement policy. The effectiveness of a weapon system is defined as E = D/C, where E is weapon effectiveness (a comparative figure of merit), D is total damage inflicted or prevented, and C is total cost, D and C being reduced to common dimensions. During the course of the investigations, criteria regarding the choice of a weapon or weapons from a set of weapon systems are established through production functions and military effect curves. A procedure is described which maximizes the expectation of military utility in order to select a weapon system from the class of weapon systems. This is done under the following simplifying assumptions: (a) a non-decreasing utility function; (b) constant average cost for each kind of weapon; and (c) independence of the performance of each unit of weapon. Some of the difficulties which arise when any of these restrictions is relaxed are briefly mentioned. Finally, the policy of weapon replacement and the factors governing it are described."
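The selection rule implied by the E = D/C criterion is simple enough to state in a few lines. The figures below are invented purely for illustration, not drawn from the paper:

```python
# E = D / C: effectiveness as damage (inflicted or prevented) per unit
# cost, with D and C already reduced to common dimensions.
def effectiveness(damage, cost):
    return damage / cost

def select(systems):
    """systems: dict mapping name -> (D, C); pick the system with
    the highest effectiveness E."""
    return max(systems, key=lambda name: effectiveness(*systems[name]))
```

With hypothetical figures {"A": (100, 50), "B": (90, 30)}, system B wins (E = 3 versus E = 2) despite inflicting less total damage.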

  7. Aging and computational systems biology.

    Science.gov (United States)

    Mooney, Kathleen M; Morgan, Amy E; Mc Auley, Mark T

    2016-01-01

    Aging research is undergoing a paradigm shift, which has led to new and innovative methods of exploring this complex phenomenon. The systems biology approach endeavors to understand biological systems in a holistic manner, by taking account of intrinsic interactions, while also attempting to account for the impact of external inputs, such as diet. A key technique employed in systems biology is computational modeling, which involves mathematically describing and simulating the dynamics of biological systems. Although a large number of computational models have been developed in recent years, these models have focused on various discrete components of the aging process, and to date no model has succeeded in completely representing the full scope of aging. Combining existing models or developing new models may help to address this need and in so doing could help achieve an improved understanding of the intrinsic mechanisms which underpin aging.

  8. Computational Systems for Multidisciplinary Applications

    Science.gov (United States)

    Soni, Bharat; Haupt, Tomasz; Koomullil, Roy; Luke, Edward; Thompson, David

    2002-01-01

    In this paper, we briefly describe our efforts to develop complex simulation systems. We focus first on four key infrastructure items: enterprise computational services, simulation synthesis, geometry modeling and mesh generation, and a fluid flow solver for arbitrary meshes. We conclude by presenting three diverse applications developed using these technologies.

  9. The human-computer interaction design of self-operated mobile telemedicine devices

    OpenAIRE

    Zheng, Shaoqing

    2015-01-01

    Human-computer interaction (HCI) is an important issue in the area of medicine, for example, the operation of surgical simulators, virtual rehabilitation systems, telemedicine treatments, and so on. In this thesis, the human-computer interaction of a self-operated mobile telemedicine device is designed. The mobile telemedicine device (i.e. intelligent Medication Box or iMedBox) is used for remotely monitoring patient health and activity information such as ECG (electrocardiogram) signals, hom...

  10. Towards molecular computers that operate in a biological environment

    Science.gov (United States)

    Kahan, Maya; Gil, Binyamin; Adar, Rivka; Shapiro, Ehud

    2008-07-01

    important consequences when performed in a proper context. We envision that molecular computers that operate in a biological environment can be the basis of “smart drugs”, which are potent drugs that activate only if certain environmental conditions hold. These conditions could include abnormalities in the molecular composition of the biological environment that are indicative of a particular disease. Here we review the research direction that set this vision and attempts to realize it.

  11. ATLAS distributed computing operations in the GridKa cloud

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter; Serfon, Cedric; Walker, Rodney [Ludwig-Maximilians-Universitaet, Garching (Germany); Harenberg, Torsten; Kalinin, Sergey; Schultes, Joachim [Bergische Universitaet, Wuppertal (Germany); Kawamura, Gen [Johannes-Gutenberg-Universitaet, Mainz (Germany); Leffhalm, Kai [DESY, Zeuthen (Germany); Meyer, Joerg [Georg-August-Universitaet, Goettingen (Germany); Petzold, Andreas [Karlsruher Institut fuer Technologie (Germany); Sundermann, Jan Erik [Albert-Ludwigs-Universitaet, Freiburg (Germany)

    2011-07-01

    The ATLAS Grid Computing resources in Germany, Poland, the Czech Republic, Austria, and Switzerland consist of a cloud of 12 Tier-2 computing centers grouped around the Tier-1 center GridKa at the Steinbuch Centre for Computing at KIT. While the Tier-1 center serves as a hub for data management in the cloud and is the principal resource for reprocessing and custodial storage of raw ATLAS data, the Tier-2 centers provide the resources for user analysis and production of simulated events. During the first full year of data taking at the LHC, the GridKa cloud has successfully contributed to the overall ATLAS computing effort, enabling physicists to quickly analyze the large volume of new incoming data and the corresponding simulated events. This talk covers the computing operations in the GridKa cloud with focus on performance and experiences at both the Tier-1 and Tier-2 centers.

  12. Comparison of Windows and Linux Operating Systems in Advanced Features

    Directory of Open Access Journals (Sweden)

    P. Abhilash

    2015-02-01

    Comparison between the Microsoft Windows and Linux computer operating systems is a long-running discussion topic within the personal computer industry. This technical paper focuses on the differences between Windows and Linux across all fields. Both operating systems have their own advantages and differ in functionality and user friendliness. Linux and Microsoft Windows differ in philosophy, cost, versatility and stability, with each seeking to improve in its perceived weaker areas. This paper concentrates on the advanced features that are uniquely present in one operating system and not in the other.

  13. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  14. Redundant computing for exascale systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Brightwell, Ronald Brian

    2010-12-01

    Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of its cost, and compare it to other proposed methods for fault resilience.
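The paper's own cost analysis is not reproduced here, but the standard first-order checkpoint/restart model that such analyses start from (Young's approximation) shows why the overhead grows with node count: as MTBF shrinks roughly as 1/N, the fraction of time lost to checkpointing and redone work rises.

```python
import math

# Young's first-order approximation for checkpoint/restart overhead.
# delta = time to write one checkpoint; mtbf = system mean time
# between failures (both in the same time units).

def optimal_interval(delta, mtbf):
    """Checkpoint interval minimizing expected lost time: sqrt(2*delta*M)."""
    return math.sqrt(2 * delta * mtbf)

def wasted_fraction(delta, mtbf):
    """Approximate fraction of wall-clock time lost to checkpoint writes
    plus re-executed work, at the optimal interval.  Algebraically this
    simplifies to sqrt(2*delta/mtbf)."""
    tau = optimal_interval(delta, mtbf)
    return delta / tau + tau / (2 * mtbf)
```

For example, halving the system MTBF (more nodes, more faults) strictly increases the wasted fraction, which is the trend the abstract's 50,000-node figure reflects.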

  15. PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM

    Science.gov (United States)

    Roberts, F. E.

    1994-01-01

    The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to setup, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. The program requires the

  17. Fault tolerant hypercube computer system architecture

    Science.gov (United States)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network while communications between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises, a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message connecting paths for connecting the first computing nodes to the first watch dog node independent from the first network, the second network provides an independent path for test message and reconfiguration affecting transfers between the first computing nodes and the first switch watch dog node. There is additionally, a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network. The fourth network provides an independent path for test message and reconfiguration affecting transfers between the second computing nodes and the first watch dog node; and a first multiplexer disposed between the first watch dog node and the second and fourth networks for allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as, a second watch dog node
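The hypercube interconnection underlying the working-node networks has a compact arithmetic description: node labels are d-bit integers, and two nodes are linked exactly when their labels differ in one bit. The sketch below covers only this topology; the patent's separate watch-dog and load-balancing networks are additional structure on top of it.

```python
# Hypercube topology: in a d-dimensional hypercube each node's
# neighbors are obtained by flipping each of its d label bits in turn.
def neighbors(node, dim):
    return [node ^ (1 << k) for k in range(dim)]
```

Every node therefore has exactly `dim` neighbors; e.g. in a 3-cube, node 0 connects to 1, 2 and 4.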

  18. Granular computing analysis and design of intelligent systems

    CERN Document Server

    Pedrycz, Witold

    2013-01-01

    Information granules, as encountered in natural language, are implicit in nature. To make them fully operational so they can be effectively used to analyze and design intelligent systems, information granules need to be made explicit. An emerging discipline, granular computing focuses on formalizing information granules and unifying them to create a coherent methodological and developmental environment for intelligent system design and analysis. Granular Computing: Analysis and Design of Intelligent Systems presents the unified principles of granular computing along with its comprehensive algo

  19. Smart Cards and Card Operating Systems

    NARCIS (Netherlands)

    Hartel, Pieter H.; Bartlett, J.; de Jong, Eduard K.

    The operating system of an IC card should provide an appropriate interface to applications using IC cards. An incorrect choice of operations and data renders the card inefficient and cumbersome. The design principles of the UNIX operating system are most appropriate for IC card operating system

  20. Computer-aided system design

    Science.gov (United States)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  1. Computational Modeling in Support of the National Ignition Facilty Operations

    CERN Document Server

    Shaw, M J; Haynam, C A; Williams, W H

    2001-01-01

    Numerical simulation of the National Ignition Facility (NIF) laser performance and automated control of the laser setup process are crucial to the project's success. These functions will be performed by two closely coupled computer codes: the virtual beamline (VBL) and the laser performance operations model (LPOM).

  2. Computational Modeling in Support of National Ignition Facility Operations

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, M J; Sacks, R A; Haynam, C A; Williams, W H

    2001-10-23

    Numerical simulation of the National Ignition Facility (NIF) laser performance and automated control of laser setup process are crucial to the project's success. These functions will be performed by two closely coupled computer codes: the virtual beamline (VBL) and the laser operations performance model (LPOM).

  3. CMS Monte Carlo production operations in a distributed computing environment

    CERN Document Server

    Mohapatra, A; Khomich, A; Lazaridis, C; Hernández, J M; Caballero, J; Hof, C; Kalinin, S; Flossdorf, A; Abbrescia, M; De Filippis, N; Donvito, G; Maggi, G; My, S; Pompili, A; Sarkar, S; Maes, J; Van Mulders, P; Villella, I; De Weirdt, S; Hammad, G; Wakefield, S; Guan, W; Lajas, J A S; Elmer, P; Evans, D; Fanfani, A; Bacchi, W; Codispoti, G; Van Lingen, F; Kavka, C; Eulisse, G

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  4. Advanced Space Surface Systems Operations

    Science.gov (United States)

    Huffaker, Zachary Lynn; Mueller, Robert P.

    2014-01-01

    The importance of advanced surface systems is becoming increasingly relevant in the modern age of space technology. Specifically, projects pursued by the Granular Mechanics and Regolith Operations (GMRO) Lab are unparalleled in the field of planetary resourcefulness. This internship opportunity involved projects that support properly utilizing natural resources from other celestial bodies. Beginning with the tele-robotic workstation, mechanical upgrades were necessary to consider for specific portions of the workstation consoles and successfully designed in concept. This would provide more means for innovation and creativity concerning advanced robotic operations. Project RASSOR is a regolith excavator robot whose primary objective is to mine, store, and dump regolith efficiently on other planetary surfaces. Mechanical adjustments were made to improve this robot's functionality, although there were some minor system changes left to perform before the opportunity ended. On the topic of excavator robots, the notes taken by the GMRO staff during the 2013 and 2014 Robotic Mining Competitions were effectively organized and analyzed for logistical purposes. Lessons learned from these annual competitions at Kennedy Space Center are greatly influential to the GMRO engineers and roboticists. Another project that GMRO staff support is Project Morpheus. Support for this project included successfully producing mathematical models of the eroded landing pad surface for the vertical testbed vehicle to predict a timeline for pad reparation. And finally, the last project this opportunity made contribution to was Project Neo, a project exterior to GMRO Lab projects, which focuses on rocket propulsion systems. Additions were successfully installed to the support structure of an original vertical testbed rocket engine, thus making progress towards futuristic test firings in which data will be analyzed by students affiliated with Rocket University. Each project will be explained in

  5. Man-Computer Interactive Data Access System (McIDAS). Continued development of McIDAS and operation in the GARP Atlantic tropical experiment

    Science.gov (United States)

    Suomi, V. E.

    1975-01-01

    The complete output of the Synchronous Meteorological Satellite was recorded on one-inch magnetic tape. A quality control subsystem tests cloud track vectors against four sets of criteria: (1) rejection if the best match occurs on the correlation boundary; (2) rejection if the major correlation peak is not distinct and significantly greater than the secondary peak; (3) rejection if the correlation is not persistent; and (4) rejection if the acceleration is too great. A cloud height program determines cloud optical thickness from visible data and computes infrared emissivity. From the infrared data and a temperature profile, cloud height is determined. A functional description and electronic schematics of the equipment are given.
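The four rejection tests form a simple accept/reject filter. The sketch below is only a schematic of that logic: the field names and the peak-distinctness and acceleration thresholds are invented for illustration, since the abstract does not give the actual McIDAS criteria values.

```python
# Hypothetical thresholds -- the real McIDAS values are not stated
# in the abstract.
PEAK_RATIO = 1.5   # major peak must exceed 1.5x the secondary peak
MAX_ACCEL = 10.0   # maximum plausible vector acceleration

def accept_vector(v):
    """Apply the four rejection criteria to one cloud track vector,
    given as a dict of (made-up) quality fields."""
    if v["peak_on_boundary"]:                     # (1) match on boundary
        return False
    if v["peak"] < PEAK_RATIO * v["secondary_peak"]:  # (2) peak not distinct
        return False
    if not v["persistent"]:                       # (3) correlation not persistent
        return False
    if v["acceleration"] > MAX_ACCEL:             # (4) acceleration too great
        return False
    return True
```

A vector must pass all four tests to survive; failing any single criterion rejects it.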

  6. Factory automation management computer system and its applications. FA kanri computer system no tekiyo jirei

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, M. (Meidensha Corp., Tokyo (Japan))

    1993-06-11

    A plurality of NC composite lathes used in a breaker manufacturing and processing line were integrated under a system mainly comprising the industrial computer μPORT, a dedicated LAN, and material handling robots. This paper describes this flexible manufacturing system (FMS), which operates on an unmanned basis from process control to material distribution and processing. The system has achieved the following results: improved efficiency in lines producing a great variety of products in small quantities and in mixed-flow production lines; enhanced facility operating rates by means of group management of NC machine tools; a path toward integrated production systems; expanded processing capacity; a reduced number of processes; and reduced management and indirect manpower. The system allocates the production control plans transmitted from the production control system operated by a host computer to the processes on a daily basis and by machine, using the μPORT. The FMS exploits the multi-task processing function of the μPORT and its ultra-high-speed real-time BASIC, simultaneously handling process management (such as machining programs and processing results), processing data management, and operation control of a plurality of machines, thereby systematizing the machining processes. 6 figs., 2 tabs.

  7. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big picture" view.

  8. 基于云计算和WebGIS的铁路运营监控系统的设计与实现%Design and Implementation of Railway Operation Monitoring System Based on Cloud Computing and WebGIS

    Institute of Scientific and Technical Information of China (English)

    闫璐; 胥昊; 郭奇园

    2012-01-01

    Cloud computing is used to construct a railway operation monitoring system based on WebGIS. The system includes three service modes: railway operation monitoring infrastructure as a service, railway operation monitoring platform as a service, and railway operation monitoring software as a service. Railway operation monitoring infrastructure as a service provides users with hardware resources and computing resources through a cloud platform infrastructure, constituted by system devices and based on virtualization technology. Railway operation monitoring platform as a service provides users with an application service platform, such as business applications and GIS application services. Railway operation monitoring software as a service provides users with application software resources by deploying application software on application servers. The keys to system construction are the development and realization of application services, the publishing and calling of GIS application services, client development, and the realization and calling of software interfaces. After system development, functional verification and test results show that putting such complicated applications as business processing, information conversion and sharing, and security management on the cloud platform in the form of services can achieve centralized management of, and customized demand for, the mass information of the railway system, and can enhance information security and maintainability. % The railway operation monitoring system is constructed with cloud computing technology on the basis of WebGIS. The system comprises three service modes: railway operation monitoring infrastructure as a service, railway operation monitoring platform as a service, and railway operation monitoring software as a service. The infrastructure-as-a-service mode uses virtualization technology to organize system devices into a cloud platform infrastructure, providing users with hardware and computing resources. The platform-as-a-service mode offers users an application service platform in the form of business applications and GIS application services. The software-as-a-service mode delivers application software uniformly

  9. Thermoelectric property measurements with computer controlled systems

    Science.gov (United States)

    Chmielewski, A. B.; Wood, C.

    1984-01-01

    A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.

  10. Checkpoint triggering in a computer system

    Science.gov (United States)

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
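
    The claimed method above can be paraphrased as a small control loop: periodically read a monitor for a task metric and create a checkpoint when the metric crosses a threshold. A minimal sketch under assumed names (the class, the fixed threshold, and the dict-based task state are all illustrative, not from the patent):

```python
# Hypothetical sketch of monitor-driven checkpoint triggering.
class CheckpointTrigger:
    def __init__(self, read_monitor, threshold, period):
        self.read_monitor = read_monitor  # callable returning the task's metric value
        self.threshold = threshold        # trigger level derived from the metric
        self.period = period              # how often the monitor is due to be read
        self.next_read = 0
        self.checkpoints = []             # saved task states, enabling restart

    def step(self, now, task_state):
        """Run one iteration; return True if a checkpoint was created."""
        if now < self.next_read:          # not yet time to read the monitor
            return False
        self.next_read = now + self.period
        if self.read_monitor() >= self.threshold:
            self.checkpoints.append(dict(task_state))  # snapshot the state
            return True
        return False
```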

  11. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipments in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
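
    The native keyboard commands mentioned above travel as USB HID boot-keyboard reports: 8 bytes of [modifiers, reserved, up to six key usage IDs]. A minimal sketch of building such a report (the function names are assumptions; the usage IDs and modifier bits follow the public USB HID Usage Tables, where 'a' = 0x04 and left shift is bit 1 of the modifier byte):

```python
# Sketch of an 8-byte USB HID boot-keyboard report for one letter key.
def key_report(char, shift=False):
    """Return the HID report that presses a single lowercase letter."""
    usage = 0x04 + (ord(char) - ord('a'))  # letter usage IDs are contiguous from 0x04
    modifiers = 0x02 if shift else 0x00    # bit 1 of the modifier byte = left shift
    return bytes([modifiers, 0x00, usage, 0, 0, 0, 0, 0])

RELEASE = bytes(8)  # an all-zero report releases every key
```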

  12. A Multi-parameter Data Acquisition and Analysis System Based on Open VMS Operation System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We retired the JUHU system, based on a PDP-11 computer running the RSX11M operating system, and updated our multi-parameter data acquisition and analysis system's hardware and software to a VAX-11 computer running the OpenVMS operating system, adapting it to recent physics experiments. In this paper, we describe the updated multi-parameter data acquisition and analysis system's hardware, software configuration, and system functions.

  13. Prognostic Analysis System and Methods of Operation

    Science.gov (United States)

    MacKey, Ryan M. E. (Inventor); Sneddon, Robert (Inventor)

    2014-01-01

    A prognostic analysis system and methods of operating the system are provided. In particular, a prognostic analysis system for the analysis of physical system health applicable to mechanical, electrical, chemical and optical systems and methods of operating the system are described herein.

  14. The Linux operating system: An introduction

    Science.gov (United States)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  15. SELF LEARNING COMPUTER TROUBLESHOOTING EXPERT SYSTEM

    OpenAIRE

    Amanuel Ayde Ergado

    2016-01-01

    In the computer domain, professionals are limited in number, but the number of institutions looking for computer professionals is high. The aim of this study is to develop a self-learning expert system that provides troubleshooting information about problems occurring in a computer system, so that information and communication technology technicians and computer users can solve problems effectively and efficiently and utilize computer and computer-related resources. Domain know...

  16. Computer-aided operational management system for process control in Eems central combined cycle power station/The Netherlands; Rechnergestuetztes Betriebsmanagementsystem fuer das Prozesscontrolling im Kombi-Kraftwerk Eemscentrale/Niederlande

    Energy Technology Data Exchange (ETDEWEB)

    Helmich, P. [Elsag Bailey Hartman und Braun, Minden (Germany); Barends, H.W.M.D. [EPON, Zwolle (Netherlands)

    1997-06-01

    The operational management system supports the operating managers on site and in the headquarters of the undertaking in the following tasks: In the evaluation of process data, important process factors and plausibility are tested; the criterion for plausibility is the attainment of mass and energy balances which are lodged in a computer model. Actual assessment parameters, such as efficiency, are compared with the prescribed reference parameters which are the maximum attainable in the actual operating situation. Finally, the automatic balancing of production and consumption data, income and energy costs takes place within the framework of a profit and loss balance. (orig.) [Deutsch] Das Betriebsmanagementsystem unterstuetzt die Betriebsfuehrung vor Ort und in der Unternehmenszentrale bei folgenden Aufgaben. Bei der Prozessdatenbewertung werden wichtige Prozessgroessen auf Plausibilitaet geprueft; Kriterium fuer die Plausibilitaet ist die Erfuellung von Massen- und Energiebilanzen, die in einem Rechenmodell hinterlegt werden. Ist-Bewertungsparameter, z.B. Wirkungsgrade, werden den in der aktuellen Betriebssituation maximal erreichbaren Soll-Parametern gegenuebergestellt. Im Rahmen einer Gewinn- und Verlustbilanzierung findet schliesslich die automatische Bilanzierung von Produktions- und Verbrauchsdaten, Erloesen und Energiekosten statt. (orig.)

  17. Computing three-point functions for short operators

    Energy Technology Data Exchange (ETDEWEB)

    Bargheer, Till [School of Natural Sciences, The Institute for Advanced Study,Einstein Drive, Princeton, NJ 08540 (United States); DESY Theory Group, DESY Hamburg,Notkestraße 85, D-22603 Hamburg (Germany); Minahan, Joseph A.; Pereira, Raul [Department of Physics and Astronomy, Uppsala University,Box 520, SE-751 20 Uppsala (Sweden)

    2014-03-21

    We compute the three-point structure constants for short primary operators of N=4 super Yang-Mills theory to leading order in 1/√λ by mapping the problem to a flat-space string theory calculation. We check the validity of our procedure by comparing to known results for three chiral primaries. We then compute the three-point functions for any combination of chiral and non-chiral primaries, with the non-chiral primaries all dual to string states at the first massive level. Along the way we find many cancellations that leave us with simple expressions, suggesting that integrability is playing an important role.

  18. Computing three-point functions for short operators

    Energy Technology Data Exchange (ETDEWEB)

    Bargheer, Till [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Institute for Advanced Study, Princeton, NJ (United States). School of Natural Sciences; Minahan, Joseph A.; Pereira, Raul [Uppsala Univ. (Sweden). Dept. of Physics and Astronomy

    2013-11-15

    We compute the three-point structure constants for short primary operators of N=4 super Yang-Mills theory to leading order in 1/√(λ) by mapping the problem to a flat-space string theory calculation. We check the validity of our procedure by comparing to known results for three chiral primaries. We then compute the three-point functions for any combination of chiral and non-chiral primaries, with the non-chiral primaries all dual to string states at the first massive level. Along the way we find many cancellations that leave us with simple expressions, suggesting that integrability is playing an important role.

  19. Reducing power consumption while performing collective operations on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
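
    The selection step described above can be sketched as a lookup: for a requested collective type, each node picks the implementation with the most favorable power-consumption characteristics. The table of implementations and the joules-per-call figures below are invented for illustration, and "lowest power" is an assumed selection rule:

```python
# Hypothetical per-implementation power profile (joules per call).
POWER_PROFILE = {
    "allreduce": {"tree": 4.0, "ring": 2.5, "recursive-doubling": 3.1},
    "broadcast": {"tree": 1.2, "ring": 2.0},
}

def select_collective(op_type):
    """Choose the lowest-power implementation of the requested collective type."""
    candidates = POWER_PROFILE[op_type]
    return min(candidates, key=candidates.get)
```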

  20. Drip and Mate Operations Acting in Test Tube Systems and Tissue-like P systems

    CERN Document Server

    Freund, Rudolf; 10.4204/EPTCS.11.8

    2009-01-01

    The operations drip and mate considered in (mem)brane computing resemble the operations cut and recombination well known from DNA computing. We here consider sets of vesicles with multisets of objects on their outside membrane interacting by drip and mate in two different setups: in test tube systems, the vesicles may pass from one tube to another one provided they fulfill specific constraints; in tissue-like P systems, the vesicles are immediately passed to specified cells after having undergone a drip or mate operation. In both variants, computational completeness can be obtained, yet with different constraints for the drip and mate operations.

  1. Measurement of SIFT operating system overhead

    Science.gov (United States)

    Palumbo, D. L.; Butler, R. W.

    1985-01-01

    The overhead of the software implemented fault tolerance (SIFT) operating system was measured. Several versions of the operating system evolved. Each version represents different strategies employed to improve the measured performance. Three of these versions are analyzed. The internal data structures of the operating systems are discussed. The overhead of the SIFT operating system was found to be of two types: vote overhead and executive task overhead. Both types of overhead were found to be significant in all versions of the system. Improvements substantially reduced this overhead; even with these improvements, the operating system consumed well over 50% of the available processing time.
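
    The vote overhead measured above comes from majority-voting the outputs of replicated tasks. A minimal sketch of such a voter (the function is illustrative, not SIFT's actual implementation):

```python
from collections import Counter

def majority_vote(replica_values):
    """Return the strict-majority value among replica outputs, or None if there is none."""
    value, count = Counter(replica_values).most_common(1)[0]
    return value if count > len(replica_values) // 2 else None
```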

  2. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  3. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  4. European Flood Awareness System - now operational

    Science.gov (United States)

    Alionte Eklund, Cristina.; Hazlinger, Michal; Sprokkereef, Eric; Garcia Padilla, Mercedes; Garcia, Rafael J.; Thielen, Jutta; Salamon, Peter; Pappenberger, Florian

    2013-04-01

    The EFAS Computational centre (European Centre for Medium-Range Weather Forecasts) runs the forecasts, performs post-processing, and operates the EFAS Information System platform. The EFAS Dissemination centres (Swedish Meteorological and Hydrological Institute, Slovak Hydrometeorological Institute, and Rijkswaterstaat Waterdienst, the Netherlands) analyse the results on a daily basis, assess the situation, and disseminate information to the EFAS partners. The European Commission is responsible for contract management. The Joint Research Centre further provides support for EFAS through research and development. Aims of operational EFAS: added-value early flood forecasting products for hydrological services; unique overview products of ongoing and forecast floods in Europe more than 3 days in advance; and the creation of a European network of operational hydrological services.

  5. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation of Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  6. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  7. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. Also we...

  8. When does a physical system compute?

    Science.gov (United States)

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  9. Which computational mechanisms operate in the hippocampus during novelty detection?

    Science.gov (United States)

    Kumaran, Dharshan; Maguire, Eleanor A

    2007-01-01

    A fundamental property of adaptive behavior is the ability to rapidly distinguish what is novel from what is familiar in our environment. Empirical evidence and computational work have provided biologically plausible models of the neural substrate and mechanisms underlying the coding of stimulus novelty in the perirhinal cortex. In this article, we highlight the importance of a different category of novelty, namely associative novelty, which has received relatively little attention, despite its clear ecological importance. While previous studies in both animals and humans have documented hippocampal responses in relation to associative novelty, a key issue concerning the computations underlying these novelty signals has not been previously addressed. We argue that this question has importance not only for our understanding of novelty processing, but also for advancing our knowledge of the fundamental computational operations performed by the hippocampus. We suggest a different approach to this problem, and discuss recent evidence supporting the hypothesis that the hippocampus operates as a comparator during the processing of associative novelty, generating mismatch/novelty signals when prior predictions are violated by sensory reality. We also draw on conceptual similarities between associative novelty and contextual novelty to suggest that empirical findings from these two seemingly distant research fields accord with the operation of a comparator mechanism during novelty detection more generally. We therefore conclude that a comparator mechanism may underlie the role of the hippocampus not only in detecting occurrences that are unexpected given specific associatively retrieved predictions, but also events that violate more abstract properties of the experimental context.

  10. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, giving each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, giving each computer the ability to establish a communications link with another of the computers, bypassing the remainder. Each computer is controlled by a resident copy of a common operating system. Communication between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of those functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
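
    The split-token idea in the abstract can be sketched as two pieces: a moving first portion that travels between computers and carries the location of a resident second portion held in some computer's memory. The data structures and names below are illustrative assumptions, not the patent's actual formats:

```python
from dataclasses import dataclass

@dataclass
class MovingPortion:
    function: str        # function the receiving computer should execute
    home_node: int       # computer whose memory holds the resident portion
    resident_key: str    # where the second portion lives on that node

def resolve(token, memories):
    """Fetch the resident second portion that the moving portion points at."""
    return memories[token.home_node][token.resident_key]

# One node's memory holding a resident portion, and a token that references it.
memories = {2: {"blob-17": [1.0, 2.0, 3.0]}}
tok = MovingPortion(function="integrate", home_node=2, resident_key="blob-17")
```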

  11. Advanced Transport Operating System (ATOPS) utility library software description

    Science.gov (United States)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  12. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley. This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  13. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper-resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. The book enables readers to address a variety of security threats to embedded hardware and software, and describes the design of secure wireless sensor networks, to address secure authen...

  14. Ergonomic intervention for improving work postures during notebook computer operation.

    Science.gov (United States)

    Jamjumrus, Nuchrawee; Nanthavanij, Suebsak

    2008-06-01

    This paper discusses the application of analytical algorithms to determine necessary adjustments for operating notebook computers (NBCs) and workstations so that NBC users can assume correct work postures during NBC operation. Twenty-two NBC users (eleven males and eleven females) were asked to operate their NBCs according to their normal work practice. Photographs of their work postures were taken and analyzed using the Rapid Upper Limb Assessment (RULA) technique. The algorithms were then employed to determine recommended adjustments for their NBCs and workstations. After implementing the necessary adjustments, the NBC users were then re-seated at their workstations, and photographs of their work postures were re-taken, to perform the posture analysis. The results show that the NBC users' work postures are improved when their NBCs and workstations are adjusted according to the recommendations. The effectiveness of ergonomic intervention is verified both visually and objectively.

  15. Parallelizing Sylvester-like operations on a distributed memory computer

    Energy Technology Data Exchange (ETDEWEB)

    Hu, D.Y.; Sorensen, D.C. [Rice Univ., Houston, TX (United States)

    1994-12-31

    Discretization of linear operators arising in applied mathematics often leads to matrices with the following structure: M(x) = (D ⊗ A + B ⊗ I_n + V)x, where x ∈ R^{mn}, B, D ∈ R^{n×n}, A ∈ R^{m×m} and V ∈ R^{mn×mn}; both D and V are diagonal. For notational convenience, the authors assume that both A and B are symmetric; all the results in this paper extend easily to the case of general A and B. The linear operator on R^{mn} defined above can be viewed as a generalization of the Sylvester operator S(x) = (I_m ⊗ A + B ⊗ I_n)x, and the authors therefore refer to it as a Sylvester-like operator. The schemes discussed in this paper thus also apply to the Sylvester operator. The authors present an SIMD scheme for parallelizing the Sylvester-like operator on a distributed-memory computer. This scheme is designed to approach the best possible efficiency by avoiding unnecessary communication among processors.
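
    The key to applying such an operator without forming the mn×mn matrix is the column-major identity (P ⊗ Q) vec(X) = vec(Q X Pᵀ). Below is a minimal sketch (not the paper's parallel implementation) of the matrix-free application, with identity subscripts adjusted for dimensional consistency; the function name and the dense Kronecker cross-check are my own.

```python
import numpy as np

# Apply M(x) = (D (x) A + B (x) I_m + V) x with D = diag(d), V = diag(v),
# using M(vec(X)) = vec(A X D^T + X B^T) + v * vec(X)  (column-major vec).
def sylvester_like_apply(A, B, d, v, x):
    m, n = A.shape[0], B.shape[0]
    X = x.reshape(n, m).T            # unvec: X is m x n, column-major
    Y = A @ X @ np.diag(d) + X @ B.T
    return Y.T.reshape(-1) + v * x   # revec + diagonal term

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, m)); A = A + A.T   # symmetric, as assumed
B = rng.standard_normal((n, n)); B = B + B.T
d = rng.standard_normal(n)
v = rng.standard_normal(m * n)
x = rng.standard_normal(m * n)

# Cross-check against the explicit mn x mn Kronecker form.
M = np.kron(np.diag(d), A) + np.kron(B, np.eye(m)) + np.diag(v)
assert np.allclose(sylvester_like_apply(A, B, d, v, x), M @ x)
```

    The matrix-free form costs O(mn(m+n)) flops instead of O(m²n²), which is what makes distributing the operator across processors worthwhile.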

  16. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    Science.gov (United States)

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  18. Dynamic Operations Wayfinding System (DOWS) for Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Ulrich, Thomas Anthony [Idaho National Laboratory; Lew, Roger Thomas [Idaho National Laboratory

    2015-08-01

    A novel software tool is proposed to aid reactor operators in responding to upset plant conditions. The purpose of the Dynamic Operations Wayfinding System (DOWS) is to diagnose faults, prioritize those faults, identify paths to resolve those faults, and deconflict the optimal path for the operator to follow. The objective of DOWS is to take the guesswork out of the best way to combine procedures to resolve compound faults, mitigate low threshold events, or respond to severe accidents. DOWS represents a uniquely flexible and dynamic computer-based procedure system for operators.

  19. TMX-U computer system in evolution

    Science.gov (United States)

    Casper, T. A.; Bell, H.; Brown, M.; Gorvad, M.; Jenkins, S.; Meyer, W.; Moller, J.; Perkins, D.

    1986-08-01

    Over the past three years, the total TMX-U diagnostic data base has grown to exceed 10 Mbytes from over 1300 channels; roughly triple the originally designed size. This acquisition and processing load has resulted in an experiment repetition rate exceeding 10 min per shot using the five original Hewlett-Packard HP-1000 computers with their shared disks. Our new diagnostics tend to be multichannel instruments, which, in our environment, can be more easily managed using local computers. For this purpose, we are using HP series 9000 computers for instrument control, data acquisition, and analysis. Fourteen such systems are operational with processed format output exchanged via a shared resource manager. We are presently implementing the necessary hardware and software changes to create a local area network allowing us to combine the data from these systems with our main data archive. The expansion of our diagnostic system using the parallel acquisition and processing concept allows us to increase our data base with a minimum of impact on the experimental repetition rate.

  20. Distributed computing operations in the German ATLAS cloud

    Energy Technology Data Exchange (ETDEWEB)

    Boehler, Michael; Gamel, Anton; Sundermann, Jan Erik [Universitaet Freiburg, Freiburg im Breisgau (Germany); Petzold, Andreas [KIT, Karlsruhe (Germany); Kawamura, Gen [Universitaet Mainz (Germany); Leffhalm, Kai [DESY (Germany); Sandhoff, Marisa; Harenberg, Torsten [Bergische Universitaet Wuppertal (Germany); Walker, Rod; Duckeck, Guenter [LMU Muenchen (Germany)

    2013-07-01

    Before the discovery of a Higgs-like boson could be announced on the 4th of July 2012, a huge amount of data had to be distributed around the world and analysed. Moreover, to have well-optimised analyses with solid background estimates, Monte Carlo simulated event samples needed to be generated. All of this (data distribution, Monte Carlo production, and also data reprocessing) is performed by the Worldwide LHC Computing Grid. The ATLAS grid computing resources in Austria, the Czech Republic, Germany, Poland, and Switzerland are organized in the GridKa cloud, which is one of 10 ATLAS computing clouds. It consists of the Tier-1 centre at KIT in Karlsruhe, which serves as a hub for data management and stores raw ATLAS data, and the Tier-2 centres that provide the resources for user analysis and Monte Carlo sample production. This talk gives an overview of ATLAS grid computing operations in 2012, focusing on the performance and experiences at both the Tier-1 and Tier-2 centres, and summarises the prospects and requirements for grid computing during and after the long shutdown of the LHC in 2013/2014.

  1. CARMENES instrument control system and operational scheduler

    Science.gov (United States)

    Garcia-Piquer, Alvaro; Guàrdia, Josep; Colomé, Josep; Ribas, Ignasi; Gesa, Lluis; Morales, Juan Carlos; Pérez-Calpena, Ana; Seifert, Walter; Quirrenbach, Andreas; Amado, Pedro J.; Caballero, José A.; Reiners, Ansgar

    2014-07-01

    The main goal of the CARMENES instrument is to perform high-accuracy measurements of stellar radial velocities (1m/s) with long-term stability. CARMENES will be installed in 2015 at the 3.5 m telescope in the Calar Alto Observatory (Spain) and it will be equipped with two spectrographs covering from the visible to the near-infrared. It will make use of its near-IR capabilities to observe late-type stars, whose peak of the spectral energy distribution falls in the relevant wavelength interval. The technology needed to develop this instrument represents a challenge at all levels. We present two software packages that play a key role in the control layer for an efficient operation of the instrument: the Instrument Control System (ICS) and the Operational Scheduler. The coordination and management of CARMENES is handled by the ICS, which is responsible for carrying out the operations of the different subsystems providing a tool to operate the instrument in an integrated manner from low to high user interaction level. The ICS interacts with the following subsystems: the near-IR and visible channels, composed by the detectors and exposure meters; the calibration units; the environment sensors; the front-end electronics; the acquisition and guiding module; the interfaces with telescope and dome; and, finally, the software subsystems for operational scheduling of tasks, data processing, and data archiving. We describe the ICS software design, which implements the CARMENES operational design and is planned to be integrated in the instrument by the end of 2014. The CARMENES operational scheduler is the second key element in the control layer described in this contribution. It is the main actor in the translation of the survey strategy into a detailed schedule for the achievement of the optimization goals. The scheduler is based on Artificial Intelligence techniques and computes the survey planning by combining the static constraints that are known a priori (i.e., target

  2. The Computer-Aided Analytic Process Model. Operations Handbook for the APM (Analytic Process Model) Demonstration Package. Appendix

    Science.gov (United States)

    1986-01-01

    The Analytic Process Model for System Design and Measurement: A Computer-Aided Tool for Analyzing Training Systems and Other Human-Machine Systems. A...separate companion volume--The Computer-Aided Analytic Process Model : Operations Handbook for the APM Demonstration Package is also available under

  3. Possibilities of making use of integrated emission monitoring system for inspection of electroprecipitators operation

    Energy Technology Data Exchange (ETDEWEB)

    Knitter, J.; Matys, T. [Gdansk Thermal-Electric Power Station Complex S.A., Gdansk (Poland)

    1997-12-31

    Microprocessor and computer control devices for ESP operation and integrated emission monitoring systems are discussed. An operating ESP inspection system for the Gdansk power station complex is determined. 8 refs., 1 fig.

  4. Using Expert Systems For Computational Tasks

    Science.gov (United States)

    Duke, Eugene L.; Regenie, Victoria A.; Brazee, Marylouise; Brumbaugh, Randal W.

    1990-01-01

    Transformation technique enables inefficient expert systems to run in real time. Paper suggests use of knowledge compiler to transform knowledge base and inference mechanism of expert-system computer program into conventional computer program. Main benefit, faster execution and reduced processing demands. In avionic systems, transformation reduces need for special-purpose computers.

  5. Software For Monitoring VAX Computer Systems

    Science.gov (United States)

    Farkas, Les; Don, Ken; Lavery, David; Baron, Amy

    1994-01-01

    VAX Continuous Monitoring System (VAXCMS) computer program developed at NASA Headquarters to aid system managers in monitoring performances of VAX computer systems through generation of graphic images summarizing trends in performance metrics over time. VAXCMS written in DCL and VAX FORTRAN for use with DEC VAX-series computers running VMS 5.1 or later.

  6. Computer Aided Control System Design (CACSD)

    Science.gov (United States)

    Stoner, Frank T.

    1993-01-01

    The design of modern aerospace systems relies on the efficient utilization of computational resources and the availability of computational tools to provide accurate system modeling. This research focuses on the development of a computer aided control system design application which provides a full range of stability analysis and control design capabilities for aerospace vehicles.

  7. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    Science.gov (United States)

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, it is a burdensome process to reinstall and is fraught with "gotchas" that can derail the process: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have running legacy instrumentation, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up, and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer, with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  8. New Human-Computer Interface Concepts for Mission Operations

    Science.gov (United States)

    Fox, Jeffrey A.; Hoxie, Mary Sue; Gillen, Dave; Parkinson, Christopher; Breed, Julie; Nickens, Stephanie; Baitinger, Mick

    2000-01-01

    The current climate of budget cuts has forced the space mission operations community to reconsider how it does business. Gone are the days of building one-of-a-kind control centers with teams of controllers working in shifts 24 hours per day, 7 days per week. Increasingly, automation is used to significantly reduce staffing needs. In some cases, missions are moving towards lights-out operations where the ground system is run semi-autonomously. On-call operators are brought in only to resolve anomalies. Some operations concepts also call for smaller operations teams to manage an entire family of spacecraft. In the not too distant future, a skeleton crew of full-time general knowledge operators will oversee the operations of large constellations of small spacecraft, while geographically distributed specialists will be assigned to emergency response teams based on their expertise. As the operations paradigms change, so too must the tools to support the mission operations team's tasks. Tools need to be built not only to automate routine tasks, but also to communicate varying types of information to the part-time, generalist, or on-call operators and specialists more effectively. Thus, the proper design of a system's user-system interface (USI) becomes even more important than before. Also, because the users will be accessing these systems from various locations (e.g., control center, home, on the road) via different devices with varying display capabilities (e.g., workstations, home PCs, PDAs, pagers) over connections with various bandwidths (e.g., dial-up 56k, wireless 9.6k), the same software must have different USIs to support the different types of users, their equipment, and their environments. In other words, the software must now adapt to the needs of the users! This paper will focus on the needs and the challenges of designing USIs for mission operations. After providing a general discussion of these challenges, the paper will focus on the current efforts of

  9. Automated fermentation equipment. 2. Computer-fermentor system

    Energy Technology Data Exchange (ETDEWEB)

    Nyeste, L.; Szigeti, L.; Veres, A.; Pungor, E. Jr.; Kurucz, I.; Hollo, J.

    1981-02-01

    An inexpensive computer-operated system suitable for data collection and steady-state optimum control of fermentation processes is presented. With this system, minimum generation time has been determined as a function of temperature and pH in the turbidostat cultivation of a yeast strain. The applicability of the computer-fermentor system is also presented by the determination of the dynamic Kla value.

  10. Operational mesoscale atmospheric dispersion prediction using a parallel computing cluster

    Indian Academy of Sciences (India)

    C V Srinivas; R Venkatesan; N V Muralidharan; Someshwar Das; Hari Dass; P Eswara Kumar

    2006-06-01

    An operational atmospheric dispersion prediction system is implemented on a cluster supercomputer for Online Emergency Response at the Kalpakkam nuclear site. This numerical system constitutes a parallel version of a nested-grid mesoscale meteorological model MM5 coupled to a random walk particle dispersion model FLEXPART. The system provides a 48-hour forecast of the local weather and radioactive plume dispersion due to hypothetical airborne releases in a range of 100 km around the site. The parallel code was implemented on different cluster configurations like distributed and shared memory systems. A 16-node dual Xeon distributed memory gigabit ethernet cluster has been found sufficient for operational applications. The runtime of a triple nested domain MM5 is about 4 h for a 24 h forecast. The system had been operated continuously for a few months and results were ported on the IMSc home page. Initial and periodic boundary condition data for MM5 are provided by NCMRWF, New Delhi. An alternative source is found to be NCEP, USA. These two sources provide the input data to the operational models at different spatial and temporal resolutions, using different assimilation methods. A comparative study on the results of the forecasts using these two data sources is presented for present operational use. Improvement is noticed in rainfall forecasts that used NCEP data, probably because of its high spatial and temporal resolution.

  11. Joint Operational Medicine Information Systems (JOMIS)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report Joint Operational Medicine Information Systems (JOMIS) Defense Acquisition Management...August 24, 2015 Program Information Program Name Joint Operational Medicine Information Systems (JOMIS) DoD Component DoD The acquiring DoD...Defense’s (DoD’s) operational medicine information systems by fielding the DoD Modernized Electronic Health Record (EHR) solution while developing and

  12. STUDY ON HUMAN-COMPUTER SYSTEM FOR STABLE VIRTUAL DISASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    Guan Qiang; Zhang Shensheng; Liu Jihong; Cao Pengbing; Zhong Yifang

    2003-01-01

    Cooperative work between humans and computers based on virtual reality (VR) is investigated to plan disassembly sequences more efficiently. A three-layer model of human-computer cooperative virtual disassembly is built, and the corresponding human-computer system for stable virtual disassembly is developed. In this system, an immersive and interactive virtual disassembly environment has been created to provide planners with a more visual working scene. For cooperative disassembly, an intelligent module for stability analysis of disassembly operations is embedded into the human-computer system to assist planners in implementing disassembly tasks better. The supporting matrix for stability analysis of disassembly operations is defined and the method of stability analysis is detailed. Based on this approach, the stability of any disassembly operation can be analyzed to guide the manual virtual disassembly. Finally, a disassembly case in the virtual environment is given to prove the validity of the above ideas.

  13. No Address Space Operating System Prototype and Its Performance Test

    Institute of Scientific and Technical Information of China (English)

    LIU Fuyan; YOU Jinyuan

    2001-01-01

    In this paper, we first analyze data storage models in typical operating systems, the relation between distributed shared memory and the data storage model, as well as the relation between memory hierarchy and the data storage model. Then we propose the concept of a No Address Space Operating System, discuss an implementation prototype, and analyze its performance and advantages. We believe that the concept of a process virtual address space should be abandoned in operating systems, that instructions should access files directly, and that processes should run on files. Compared with other operating systems, the No Address Space Operating System has many advantages and should be adopted in computer systems.

  14. On the computational implementation of forward and back-projection operations for cone-beam computed tomography.

    Science.gov (United States)

    Karimi, Davood; Ward, Rabab

    2016-08-01

    Forward- and back-projection operations are the main computational burden in iterative image reconstruction in computed tomography. In addition, their implementation has to be accurate to ensure stable convergence to a high-quality image. This paper reviews and compares some of the variations in the implementation of these operations in cone-beam computed tomography. We compare four algorithms for computing the system matrix, including a distance-driven algorithm, an algorithm based on cubic basis functions, another based on spherically symmetric basis functions, and a voxel-driven algorithm. The focus of our study is on understanding how the choice of the implementation of the system matrix will influence the performance of iterative image reconstruction algorithms, including such factors as the noise strength and spatial resolution in the reconstructed image. Our experiments with simulated and real cone-beam data reveal the significance of the speed-accuracy trade-off in the implementation of the system matrix. Our results suggest that fast convergence of iterative image reconstruction methods requires accurate implementation of forward- and back-projection operations, involving a direct estimation of the convolution of the footprint of the voxel basis function with the surface of the detectors. The required accuracy decreases by increasing the resolution of the projection measurements beyond the resolution of the reconstructed image. Moreover, reconstruction of low-contrast objects needs more accurate implementation of these operations. Our results also show that, compared with regularized reconstruction methods, the behavior of iterative reconstruction algorithms that do not use a proper regularization is influenced more significantly by the implementation of the forward- and back-projection operations.
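
    The stability requirement mentioned above hinges on the back-projector being the exact adjoint (transpose) of the forward projector. The sketch below is my own minimal 2D parallel-beam illustration, not one of the four algorithms compared in the paper: a voxel-driven projector that spreads each pixel over the two nearest detector bins by linear interpolation, and a matched back-projector that gathers with the same weights, verified by the adjoint identity <Ax, y> = <x, Aᵀy>.

```python
import numpy as np

def project(img, angles, n_bins):
    """Voxel-driven parallel-beam forward projection (scatter)."""
    n = img.shape[0]
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    sino = np.zeros((len(angles), n_bins))
    for a, th in enumerate(angles):
        t = xs * np.cos(th) + ys * np.sin(th) + (n_bins - 1) / 2.0
        i0 = np.floor(t).astype(int)
        w = t - i0                     # linear interpolation weight
        for di, wt in ((0, 1 - w), (1, w)):
            idx = np.clip(i0 + di, 0, n_bins - 1)
            np.add.at(sino[a], idx.ravel(), (wt * img).ravel())
    return sino

def backproject(sino, angles, n):
    """Exact adjoint of project(): gather with identical weights."""
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    img = np.zeros((n, n))
    n_bins = sino.shape[1]
    for a, th in enumerate(angles):
        t = xs * np.cos(th) + ys * np.sin(th) + (n_bins - 1) / 2.0
        i0 = np.floor(t).astype(int)
        w = t - i0
        for di, wt in ((0, 1 - w), (1, w)):
            idx = np.clip(i0 + di, 0, n_bins - 1)
            img += wt * sino[a][idx]
    return img

# Adjointness check: <A x, y> == <x, A^T y> up to rounding.
rng = np.random.default_rng(1)
angles = np.linspace(0, np.pi, 8, endpoint=False)
x = rng.standard_normal((16, 16))
y = rng.standard_normal((8, 24))
assert np.isclose(np.sum(project(x, angles, 24) * y),
                  np.sum(x * backproject(y, angles, 16)))
```

    An unmatched projector/back-projector pair breaks this identity, which is one route by which iterative reconstruction can fail to converge stably.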

  15. Local operation on identical particles systems

    CERN Document Server

    Rendón, O

    2011-01-01

    We describe identical particles through their extrinsic physical properties and operationally with an operator of selective measurement Mm. The operator Mm is formed as a non-symmetrized tensor product of one-particle measurement operators, so that it does not commute with the permutation operator P. This selective measurement operator Mm is a local operation (LO) if it acts on physical systems of distinguishable particles, but when Mm acts on the Hilbert subspace of a system with a constant number of indistinguishable particles it can generate entanglement in the system. We call this way of producing entanglement "entanglement by measurement" (EbM). In this framework, we show examples of entanglement production for systems of two fermions (or bosons) when the operator Mm has two-particle events that are not mutually exclusive.
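
    A toy two-qubit check of the abstract's central claim can be written in a few lines. The construction below is my own illustration, not the paper's: Mm is the non-symmetrized product of the one-particle projectors |0><0| and |1><1|, P is the SWAP operator, and numerics confirm [Mm, P] ≠ 0.

```python
import numpy as np

P0 = np.array([[1., 0.], [0., 0.]])   # |0><0| on particle 1
P1 = np.array([[0., 0.], [0., 1.]])   # |1><1| on particle 2
Mm = np.kron(P0, P1)                  # non-symmetrized product measurement

SWAP = np.array([[1., 0., 0., 0.],    # permutation operator P on C^2 (x) C^2
                 [0., 0., 1., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.]])

# The selective measurement does not commute with particle exchange.
assert not np.allclose(Mm @ SWAP, SWAP @ Mm)

# Acting with Mm on the antisymmetric two-fermion singlet and renormalizing
# leaves the product state |0>|1>: the measurement breaks exchange symmetry.
singlet = np.array([0., 1., -1., 0.]) / np.sqrt(2)
post = Mm @ singlet
post = post / np.linalg.norm(post)
assert np.allclose(post, [0., 1., 0., 0.])
```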

  16. Impact of new computing systems on finite element computations

    Science.gov (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  17. Transient Faults in Computer Systems

    Science.gov (United States)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.

  18. Multiaxis, Lightweight, Computer-Controlled Exercise System

    Science.gov (United States)

    Haynes, Leonard; Bachrach, Benjamin; Harvey, William

    2006-01-01

    The multipurpose, multiaxial, isokinetic dynamometer (MMID) is a computer-controlled system of exercise machinery that can serve as a means for quantitatively assessing a subject's muscle coordination, range of motion, strength, and overall physical condition with respect to a wide variety of forces, motions, and exercise regimens. The MMID is easily reconfigurable and compactly stowable and, in comparison with prior computer-controlled exercise systems, it weighs less, costs less, and offers more capabilities. Whereas a typical prior isokinetic exercise machine is limited to operation in only one plane, the MMID can operate along any path. In addition, the MMID is not limited to the isokinetic (constant-speed) mode of operation. The MMID provides for control and/or measurement of position, force, and/or speed of exertion in as many as six degrees of freedom simultaneously; hence, it can accommodate more complex, more nearly natural combinations of motions and, in so doing, offers greater capabilities for physical conditioning and evaluation. The MMID (see figure) includes as many as eight active modules, each of which can be anchored to a floor, wall, ceiling, or other fixed object. A cable is paid out from a reel in each module to a bar or other suitable object that is gripped and manipulated by the subject. The reel is driven by a DC brushless motor or other suitable electric motor via a gear reduction unit. The motor can be made to function as either a driver or an electromagnetic brake, depending on the required nature of the interaction with the subject. The module includes a force and a displacement sensor for real-time monitoring of the tension in and displacement of the cable, respectively. In response to commands from a control computer, the motor can be operated to generate a required tension in the cable, to displace the cable a required distance, or to reel the cable in or out at a required speed.
The computer can be programmed, either locally or via

  19. Computing Architecture of the ALICE Detector Control System

    CERN Document Server

    Augustinus, A; Moreno, A; Kurepin, A N; De Cataldo, G; Pinazza, O; Rosinský, P; Lechman, M; Jirdén, L S

    2011-01-01

    The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network-attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, mechanisms for handling the large data amounts, and information exchange with external systems. One of the key operational requirements is an intuitive, error-proof and robust user interface allowing for simple operation of the experiment. At the same time, typical operator tasks, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.

  20. Analysis and Synthesis of Delta Operator Systems

    CERN Document Server

    Yang, Hongjiu; Shi, Peng; Zhao, Ling

    2012-01-01

    This book is devoted to analysis and design of delta operator systems. When sampling is fast, a dynamical system becomes difficult to control, as can be seen in a wide range of real-world applications. The delta operator approach is very effective for dealing with fast-sampling systems. Moreover, it is easy to observe and analyze the control effect with different sampling periods in delta operator systems. The framework of this book has been carefully constructed for delta operator systems to handle sliding mode control, time delays, filter design, finite frequency and networked control. These problems are especially important and significant in automation and control systems design. Through the clear framework of the book, readers can easily go through the learning process on delta operator systems via a precise and comfortable learning sequence. Following this enjoyable trail, readers will come out knowing how to use the delta operator approach to deal with control problems under the fast sampling case. This book should...

  1. Operating systems and network protocols for wireless sensor networks.

    Science.gov (United States)

    Dutta, Prabal; Dunkels, Adam

    2012-01-13

    Sensor network protocols exist to satisfy the communication needs of diverse applications, including data collection, event detection, target tracking and control. Network protocols to enable these services are constrained by the extreme resource scarcity of sensor nodes (including energy, computing, communications and storage), which must be carefully managed and multiplexed by the operating system. These challenges have led to new protocols and operating systems that are efficient in their energy consumption, careful in their computational needs and miserly in their memory footprints, all while discovering neighbours, forming networks, delivering data and correcting failures.

  2. DOC-a file system cache to support mobile computers

    Science.gov (United States)

    Huizinga, D. M.; Heflinger, K.

    1995-09-01

    This paper identifies design requirements of system-level support for mobile computing in small form-factor battery-powered portable computers and describes their implementation in DOC (Disconnected Operation Cache). DOC is a three-level client caching system designed and implemented to allow mobile clients to transition between connected, partially disconnected and fully disconnected modes of operation with minimal user involvement. Implemented for notebook computers, DOC addresses not only typical issues of mobile elements such as resource scarcity and fluctuations in service quality but also deals with the pitfalls of MS-DOS, the operating system which prevails in the commercial notebook market. Our experiments performed in the software engineering environment of AST Research indicate not only considerable performance gains for connected and partially disconnected modes of DOC, but also the successful operation of the disconnected mode.
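
    The core mechanism described (transparent fallback between connected, partially disconnected and fully disconnected modes, with writes logged for later reintegration) can be sketched abstractly. The class and names below are my own minimal analogue, not the DOC implementation, and assume a dict-like remote store for illustration.

```python
class DisconnectedOpCache:
    """Toy client cache: serve from the server while connected, fall back
    to cached copies when disconnected, replay queued writes on reconnect."""

    def __init__(self, server):
        self.server = server      # dict-like remote store (assumption)
        self.cache = {}           # locally cached file copies
        self.pending = []         # writes logged while disconnected
        self.connected = True

    def read(self, path):
        if self.connected:
            self.cache[path] = self.server[path]   # refresh cache on read
        if path not in self.cache:
            raise FileNotFoundError(f"{path} not cached while disconnected")
        return self.cache[path]

    def write(self, path, data):
        self.cache[path] = data
        if self.connected:
            self.server[path] = data
        else:
            self.pending.append((path, data))      # log for reintegration

    def reconnect(self):
        self.connected = True
        for path, data in self.pending:            # replay logged writes
            self.server[path] = data
        self.pending.clear()

server = {"/doc/a.txt": "v1"}
fs = DisconnectedOpCache(server)
assert fs.read("/doc/a.txt") == "v1"   # connected read warms the cache
fs.connected = False                    # simulate disconnection
fs.write("/doc/a.txt", "v2")            # disconnected write is queued
assert fs.read("/doc/a.txt") == "v2"    # served from the local cache
assert server["/doc/a.txt"] == "v1"     # server not yet updated
fs.reconnect()
assert server["/doc/a.txt"] == "v2"     # queued write replayed
```

    A real system additionally has to detect conflicts when the same file changed on the server during disconnection; this sketch simply lets the replayed write win.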

  3. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    Science.gov (United States)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
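
    The iterative mutate/compete/select loop described above can be made concrete with a bare-bones differential evolution sketch (rand/1/bin variant). This is my own illustration of the general method, not the authors' code; the toy objective stands in for a computational model being optimized.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal rand/1/bin differential evolution minimizing f."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))   # initial population
    fit = np.array([f(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >=1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

# Toy objective: sphere function with optimum at (1, 1, 1).
sphere = lambda x: float(np.sum((x - 1.0) ** 2))
best, best_val = differential_evolution(
    sphere, (np.full(3, -5.0), np.full(3, 5.0)))
assert best_val < 1e-4 and np.allclose(best, 1.0, atol=1e-2)
```

    Because each trial evaluation is independent within a generation, the inner loop parallelizes naturally across a cluster, which is the property the abstract exploits.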

  4. OPTIMIZATION OF PARAMETERS OF ELEMENTS COMPUTER SYSTEM

    Directory of Open Access Journals (Sweden)

    Nesterov G. D.

    2016-03-01

Full Text Available The work addresses the topical issue of increasing computer performance and is experimental in character, so a description of the tests carried out and an analysis of their results are offered. The article first gives the basic characteristics of the computer's modules in the regular operating mode, then describes the technique for adjusting their parameters during the experiment. Particular attention is paid to maintaining the required thermal regime in order to avoid overheating the central processor, and the system's operability under increased power consumption is also verified. The most critical step is tuning the central processor; the tests yield its optimum voltage, frequency, and memory read latencies. The stability of the RAM characteristics, in particular the state of its buses, is analyzed over the course of the experiment. Since these tests stayed within the standard range of the modules' characteristics, and therefore did not exhaust the safety margin and capacity built into the computer, further experiments were performed at extreme overclocking under air cooling. The results obtained are also given in the article.

  5. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  6. Computerized Operator Support System – Phase II Development

    Energy Technology Data Exchange (ETDEWEB)

    Ulrich, Thomas A.; Boring, Ronald L.; Lew, Roger T.; Thomas, Kenneth D.

    2015-02-01

    A computerized operator support system (COSS) prototype for nuclear control room process control is proposed and discussed. The COSS aids operators in addressing rapid plant upsets that would otherwise result in the shutdown of the power plant and interrupt electrical power generation, representing significant costs to the owning utility. In its current stage of development the prototype demonstrates four advanced functions operators can use to more efficiently monitor and control the plant. These advanced functions consist of: (1) a synthesized and intuitive high level overview display of system components and interrelations, (2) an enthalpy-based mathematical chemical and volume control system (CVCS) model to detect and diagnose component failures, (3) recommended strategies to mitigate component failure effects and return the plant back to pre-fault status, and (4) computer-based procedures to walk the operator through the recommended mitigation actions. The COSS was demonstrated to a group of operators and their feedback was collected. The operators responded positively to the COSS capabilities and features and indicated the system would be an effective operator aid. The operators also suggested several additional features and capabilities for the next iteration of development. Future versions of the COSS prototype will include additional plant systems, flexible computer-based procedure presentation formats, and support for simultaneous component fault diagnosis and dual fault synergistic mitigation action strategies to more efficiently arrest any plant upsets.

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

Introduction The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  8. Linear systems and operators in Hilbert space

    CERN Document Server

    Fuhrmann, Paul A

    2014-01-01

    A treatment of system theory within the context of finite dimensional spaces, this text is appropriate for students with no previous experience of operator theory. The three-part approach, with notes and references for each section, covers linear algebra and finite dimensional systems, operators in Hilbert space, and linear systems in Hilbert space. 1981 edition.

  9. 47 CFR 32.2220 - Operator systems.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Operator systems. 32.2220 Section 32.2220 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2220 Operator systems...

  10. Modeling human operator involvement in robotic systems

    NARCIS (Netherlands)

    Wewerinke, P.H.

    1991-01-01

    A modeling approach is presented to describe complex manned robotic systems. The robotic system is modeled as a (highly) nonlinear, possibly time-varying dynamic system including any time delays in terms of optimal estimation, control and decision theory. The role of the human operator(s) is modeled

  11. Computer vision for dual spacecraft proximity operations -- A feasibility study

    Science.gov (United States)

    Stich, Melanie Katherine

A computer vision-based navigation feasibility study consisting of two navigation algorithms is presented to determine whether computer vision can be used to safely navigate a small semi-autonomous inspection satellite in proximity to the International Space Station. Using stereoscopic image sensors and computer vision, the relative attitude determination and relative distance determination algorithms estimate the inspection satellite's position relative to its host spacecraft. An algorithm needed to calibrate the stereo camera system is presented, and this calibration method is discussed. These relative navigation algorithms are tested in NASA Johnson Space Center's simulation software, Engineering Dynamic On-board Ubiquitous Graphics (DOUG) Graphics for Exploration (EDGE), using a rendered model of the International Space Station to serve as the host spacecraft. Both vision-based algorithms attained successful results, and recommended future work is discussed.
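The relative distance determination step can be illustrated with the standard rectified-stereo range formula Z = f·B/d. This is a generic sketch of the underlying geometry (function name and numbers are illustrative), not the thesis's actual algorithm:

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Range from a calibrated, rectified stereo pair: Z = f * B / d,
    where d is the horizontal pixel disparity between the two images."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("non-positive disparity: point not in front of the rig")
    return focal_px * baseline_m / disparity

# A feature at column 420 (left image) and 400 (right image), with an
# 800-pixel focal length and a 0.1 m baseline, lies 800 * 0.1 / 20 = 4 m away.
z = depth_from_disparity(800, 0.1, 420, 400)
```

The calibration step mentioned in the abstract is what supplies the focal length, baseline, and rectification that make this simple formula valid.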

  12. Implementation of Computational Electromagnetic on Distributed Systems

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

The new generation of technology could raise the bar for distributed computing, and solving computational electromagnetics problems on distributed systems with parallel computing techniques has become a trend. In this paper, we analyze the parallel characteristics of the distributed system and the possibility of setting up a tightly coupled distributed system using the LAN in our lab. An analysis of the performance of different computational methods, such as FEM, MOM, FDTD and the finite difference method, is given. Our work on setting up a distributed system and the performance of the test bed are also included. Finally, we mention the implementation of one of our computational electromagnetic codes.
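Of the methods listed, FDTD has the most compact kernel. The following minimal 1-D Yee-scheme loop (normalized units, Courant number 1, hypothetical function name) shows the kind of update that such a code would partition across cluster nodes; it is a generic illustration, not the authors' code:

```python
import math

def fdtd_1d(steps=100, size=200):
    """Minimal 1-D FDTD sketch: Yee leapfrog update in normalized units with
    the 'magic' time step c*dt = dx, driving a Gaussian soft source mid-grid."""
    ez = [0.0] * size                      # electric field nodes
    hy = [0.0] * size                      # magnetic field nodes
    for t in range(steps):
        for k in range(size - 1):          # H update from the curl of E
            hy[k] += ez[k + 1] - ez[k]
        for k in range(1, size):           # E update from the curl of H
            ez[k] += hy[k] - hy[k - 1]
        ez[size // 2] += math.exp(-((t - 30.0) ** 2) / 100.0)  # soft source
    return ez

ez = fdtd_1d()   # the pulse has split and propagated outward from the centre
```

Because each node's update touches only nearest neighbours, a distributed version needs to exchange only one boundary cell per subdomain per step, which is why FDTD parallelizes well on LAN-coupled systems.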

  13. The engineering design integration (EDIN) system. [digital computer program complex

    Science.gov (United States)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  14. REAL TIME SYSTEM OPERATIONS 2006-2007

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Joseph H.; Parashar, Manu; Lewis, Nancy Jo

    2008-08-15

The Real Time System Operations (RTSO) 2006-2007 project focused on two parallel technical tasks: (1) Real-Time Applications of Phasors for Monitoring, Alarming and Control; and (2) Real-Time Voltage Security Assessment (RTVSA) Prototype Tool. The overall goal of the phasor applications project was to accelerate adoption and foster greater use of new, more accurate, time-synchronized phasor measurements by conducting research and prototyping applications on California ISO's phasor platform - Real-Time Dynamics Monitoring System (RTDMS) -- that provide previously unavailable information on the dynamic stability of the grid. Feasibility assessment studies were conducted on potential application of this technology for small-signal stability monitoring, validating/improving existing stability nomograms, conducting frequency response analysis, and obtaining real-time sensitivity information on key metrics to assess grid stress. Based on study findings, prototype applications for real-time visualization and alarming, small-signal stability monitoring, measurement based sensitivity analysis and frequency response assessment were developed, factory- and field-tested at the California ISO and at BPA. The goal of the RTVSA project was to provide California ISO with a prototype voltage security assessment tool that runs in real time within California ISO's new reliability and congestion management system. CERTS conducted a technical assessment of appropriate algorithms, developed a prototype incorporating state-of-the-art algorithms (such as the continuation power flow, direct method, boundary orbiting method, and hyperplanes) into a framework most suitable for an operations environment. Based on study findings, a functional specification was prepared, which the California ISO has since used to procure a production-quality tool that is now a part of a suite of advanced computational tools that is used by California ISO for reliability and congestion management.

  15. Integrated ADIOS-IGENPRO operator advisory support system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Young; Park, J. H.; Kim, J. T.; Kim, C. H.; Park, W. M.; Hwang, I. K.; Cheon, S. W.; Song, S. J

    2001-05-01

The I and C systems and control rooms of nuclear power plants were built around automatic control concepts and have nowadays migrated to computer-based systems. With increasing automation and CRT-based displays, the operators' role has shifted toward monitoring the condition of the plant. The information offered to operators therefore has to be integrated so that they can grasp the overall condition of the plant. In commercial nuclear plants, raw sensor and component data are displayed in the control room, so operators cannot correctly diagnose the plant's condition. To develop an integrated operator aid system containing an alarm processing system and a fault diagnosis system, we integrated IGENPRO of ANL (Argonne National Laboratory) with ADIOS of KAERI (Korea Atomic Energy Research Institute). IGENPRO is a fault diagnosis system comprising three modules, including PRODIAG and PROTREN; ADIOS is an alarm processing system that informs operators of important alarms. The integrated operator advisory support system developed in this research is composed of an alarm processing module and a fault diagnosis module. The alarm processing module presents important alarms to the operator using dynamic alarm filtering methods, and the fault diagnosis module indicates the causes of sensor and hardware faults.

  16. Experience Building and Operating the CMS Tier-1 Computing Centres

    CERN Document Server

    Albert, M; Bonacorsi, D; Brew, C; Charlot, C; Huang, Chih-Hao; Colling, D; Dumitrescu, C; Fagan, D; Fassi, F; Fisk, I; Flix, J; Giacchetti, L; Gomez-Ceballos, G; Gowdy, S; Grandi, C; Gutsche, O; Hahn, K; Holzman, B; Jackson, J; Kreuzer, P; Kuo, C M; Mason, D; Pukhaeva, N; Qin, G; Quast, G; Rossman, P; Sartirana, A; Scheurer, A; Schott, G; Shih, J; Tader, P; Thompson, R; Tiradani, A; Trunov, A

    2010-01-01

The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s including the stable operations of CMS services, the ability ...

  17. Evaluation of computer-based ultrasonic inservice inspection systems

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T. [Pacific Northwest Lab., Richland, WA (United States)

    1994-03-01

This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  18. Operations Monitoring Assistant System Design

    Science.gov (United States)

    1986-07-01

subsections address these system design issues in turn. 3.1 OMA SYSTEM OVERVIEW Figure 3-1 presents the concept in Figure 2-1 in more detail, from an OMA...issues: a local agent cannot realistically tell the centralized planner everything about its current situation, and must instead decide what relevant

  19. Determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation

    Science.gov (United States)

    Blocksome, Michael A.

    2011-12-20

    Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
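The counter-based protocol described in the abstract can be sketched as a single-process simulation; class and method names are illustrative, and the DMA engine's packet delivery is modeled with plain Python lists:

```python
class BarrierNode:
    """Sketch of the counter-based barrier in the abstract: each node arms a
    counter with the participant count on entering the barrier, broadcasts a
    barrier control packet to every node (itself included), decrements the
    counter per packet received, and exits once the counter reaches the exit
    value (zero here). Names and packet format are illustrative."""

    def __init__(self, rank, num_nodes):
        self.rank = rank
        self.counter = num_nodes   # value configured on entering the barrier
        self.inbox = []            # stands in for DMA-delivered packets

    def enter_barrier(self, all_nodes):
        for node in all_nodes:     # DMA engine broadcasts a control packet
            node.inbox.append(self.rank)

    def drain_packets(self):
        for _ in self.inbox:       # each received packet modifies the counter
            self.counter -= 1
        self.inbox.clear()

    def ready_to_exit(self):
        return self.counter == 0   # every participant has arrived
```

Arming the counter with the set size and decrementing once per broadcast packet means a node reaches the exit value only after all participating nodes have entered the barrier, which is the exit condition the claims describe.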

  20. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  1. Unmanned Surface Vehicle Human-Computer Interface for Amphibious Operations

    Science.gov (United States)

    2013-08-01

FIGURES: Figure 1. MOCU Baseline HCI Using Both Aerial Photo and Digital Nautical Chart (DNC) Maps to Control and Monitor Land, Sea, and Air Vehicles. ACRONYMS: DNC Digital Nautical Chart; FNC Future Naval Capability; HCI Human-Computer Interface; HRI Human-Robot Interface; HSI Human-Systems Integration. 3.2 BASELINE MOCU HCI The Baseline MOCU interface is a tiled

  2. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    DEFF Research Database (Denmark)

    Passas, Stavros; Karlsson, Sven

    2011-01-01

Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating...... of the operating system.

  3. Deepen the Teaching Reform of Operating System, Cultivate the Comprehensive Quality of Students

    Science.gov (United States)

    Liu, Jianjun

    2010-01-01

    Operating system is the core course of the specialty of computer science and technology. To understand and master the operating system will directly affect students' further study on other courses. The course of operating system focuses more on theories. Its contents are more abstract and the knowledge system is more complicated. Therefore,…

  4. System of Systems Operational Analysis Within a Common Operational Context

    Science.gov (United States)

    2005-06-22

UNCLASSIFIED. The use of the Advanced Refractive Effects Prediction System (AREPS) produced by SPAWAR provides the effects of specific radar parameters to build the characteristic AEW or interceptor radar; it provides Probability of Detection vs. Range plots, which are imported into MATLAB to curve-fit for later use. [Slide residue: "Sensor Information from AREPS"; example AREPS Pd vs. Range raytrace output, elevated sensor.]

  5. Applied computation and security systems

    CERN Document Server

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu

    2015-01-01

    This book contains the extended version of the works that have been presented and discussed in the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014) held during April 18-20, 2014 in Kolkata, India. The symposium has been jointly organized by the AGH University of Science & Technology, Cracow, Poland and University of Calcutta, India. The Volume I of this double-volume book contains fourteen high quality book chapters in three different parts. Part 1 is on Pattern Recognition and it presents four chapters. Part 2 is on Imaging and Healthcare Applications contains four more book chapters. The Part 3 of this volume is on Wireless Sensor Networking and it includes as many as six chapters. Volume II of the book has three Parts presenting a total of eleven chapters in it. Part 4 consists of five excellent chapters on Software Engineering ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  6. Computer system organization the B5700/B6700 series

    CERN Document Server

    Organick, Elliott I

    1973-01-01

    Computer System Organization: The B5700/B6700 Series focuses on the organization of the B5700/B6700 Series developed by Burroughs Corp. More specifically, it examines how computer systems can (or should) be organized to support, and hence make more efficient, the running of computer programs that evolve with characteristically similar information structures.Comprised of nine chapters, this book begins with a background on the development of the B5700/B6700 operating systems, paying particular attention to their hardware/software architecture. The discussion then turns to the block-structured p

  7. Protected quantum computing: interleaving gate operations with dynamical decoupling sequences.

    Science.gov (United States)

    Zhang, Jingfu; Souza, Alexandre M; Brandao, Frederico Dias; Suter, Dieter

    2014-02-07

    Implementing precise operations on quantum systems is one of the biggest challenges for building quantum devices in a noisy environment. Dynamical decoupling attenuates the destructive effect of the environmental noise, but so far, it has been used primarily in the context of quantum memories. Here, we experimentally demonstrate a general scheme for combining dynamical decoupling with quantum logical gate operations using the example of an electron-spin qubit of a single nitrogen-vacancy center in diamond. We achieve process fidelities >98% for gate times that are 2 orders of magnitude longer than the unprotected dephasing time T2.
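The refocusing idea behind dynamical decoupling can be illustrated with its simplest building block, a Hahn echo on a two-level system: a static detuning phase accumulated before an ideal pi pulse is unwound after it. This is a generic sketch of the principle, not the NV-center experiment itself:

```python
import cmath

def dephase(state, phi):
    """Free evolution under a static detuning: advance the |1> amplitude's
    phase by phi relative to |0>. State is an (a0, a1) amplitude pair."""
    a0, a1 = state
    return (a0, a1 * cmath.exp(1j * phi))

def x_pulse(state):
    """Ideal pi pulse about X: swap the |0> and |1> amplitudes."""
    a0, a1 = state
    return (a1, a0)

phi = 0.73                              # arbitrary unknown static noise phase
r = 1 / 2 ** 0.5
state = (r, r)                          # (|0> + |1>)/sqrt(2)
state = dephase(state, phi)             # free evolution for tau
state = x_pulse(state)                  # refocusing pulse
state = dephase(state, phi)             # free evolution for another tau
state = x_pulse(state)                  # return to the original basis
# The two dephasing periods now cancel up to a global phase: the relative
# phase between the amplitudes is back to zero regardless of phi.
```

The scheme in the paper goes further by interleaving logical gate operations with such decoupling pulses so that the gates themselves run inside the protected intervals.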

  8. Computer controlled vent and pressurization system

    Science.gov (United States)

    Cieslewicz, E. J.

    1975-01-01

    The Centaur space launch vehicle airborne computer, which was primarily used to perform guidance, navigation, and sequencing tasks, was further used to monitor and control inflight pressurization and venting of the cryogenic propellant tanks. Computer software flexibility also provided a failure detection and correction capability necessary to adopt and operate redundant hardware techniques and enhance the overall vehicle reliability.

  9. Operational expert system applications in Europe

    CERN Document Server

    Zarri, Gian Piero

    1992-01-01

    Operational Expert System Applications in Europe describes the representative case studies of the operational expert systems (ESs) that are used in Europe.This compilation provides examples of operational ES that are realized in 10 different European countries, including countries not usually examined in the standard reviews of the field.This book discusses the decision support system using several artificial intelligence tools; expert systems for fault diagnosis on computerized numerical control (CNC) machines; and expert consultation system for personal portfolio management. The failure prob

  10. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

As progress on building quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in the cloud, similar to IBM's Quantum Experience today. Clients can remotely access the quantum servers using some simple devices. In such a situation, it is of prime importance to protect the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step to construct a framework of blind quantum computation for the hybrid system, which provides a more feasible way toward scalable blind quantum computation.

  11. LHCb Conditions database operation assistance systems

    Science.gov (United States)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter one has been fully designed and is passing currently to the implementation stage.

  12. Highlights of the GURI hydroelectric plant computer control system

    Energy Technology Data Exchange (ETDEWEB)

    Dal Monte, R.; Banakar, H.; Hoffman, R.; Lebeau, M.; Schroeder, R.

    1988-07-01

The GURI power plant on the Caroni river in Venezuela has 20 generating units with a total capacity of 10,000 MW, the largest currently operating in the world. The GURI Computer Control System (GCS) provides for comprehensive operation management of the entire power plant and the adjacent switchyards. This article describes some highlights of the functions of the state-of-the-art system. The topics considered include the operating modes of the remote terminal units (RTUs), automatic start/stop of generating units, RTU closed-loop control, automatic generation and voltage control, unit commitment, the operator training simulator, and maintenance management.

  13. Systems Management of Air Force Standard Communications-Computer systems: There is a Better Way

    Science.gov (United States)

    1988-04-01

Statements of Operational Needs (SON), Justification for Major New Start (JMSNS), and Joint Service Operational Requirements (JSOR). Communications...computer systems requirements which must be processed using a SON, JMSNS, and JSOR are processed in accordance with AFR 57-1, Operational Needs, and

  14. Computational Metabolomics Operations at BioCyc.org

    Directory of Open Access Journals (Sweden)

    Peter D. Karp

    2015-05-01

    Full Text Available BioCyc.org is a genome and metabolic pathway web portal covering 5500 organisms, including Homo sapiens, Arabidopsis thaliana, Saccharomyces cerevisiae and Escherichia coli. These organism-specific databases have undergone variable degrees of curation. The EcoCyc (Escherichia coli Encyclopedia database is the most highly curated; its contents have been derived from 27,000 publications. The MetaCyc (Metabolic Encyclopedia database within BioCyc is a “universal” metabolic database that describes pathways, reactions, enzymes and metabolites from all domains of life. Metabolic pathways provide an organizing framework for analyzing metabolomics data, and the BioCyc website provides computational operations for metabolomics data that include metabolite search and translation of metabolite identifiers across multiple metabolite databases. The site allows researchers to store and manipulate metabolite lists using a facility called SmartTables, which supports metabolite enrichment analysis. That analysis operation identifies metabolite sets that are statistically over-represented for the substrates of specific metabolic pathways. BioCyc also enables visualization of metabolomics data on individual pathway diagrams and on the organism-specific metabolic map diagrams that are available for every BioCyc organism. Most of these operations are available both interactively and as programmatic web services.
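The enrichment analysis mentioned above is conventionally an over-representation test; the following sketch uses the standard hypergeometric tail statistic (BioCyc's exact statistic is not specified here, so treat this as a generic illustration with hypothetical numbers):

```python
from math import comb

def enrichment_p(pathway_size, hits, universe, query_size):
    """Hypergeometric over-representation p-value: probability of observing
    at least `hits` pathway substrates when `query_size` metabolites are
    drawn at random from a `universe` containing `pathway_size` substrates.
    Generic sketch of set-enrichment analysis, not BioCyc's exact statistic."""
    total = comb(universe, query_size)
    return sum(
        comb(pathway_size, k) * comb(universe - pathway_size, query_size - k)
        for k in range(hits, min(pathway_size, query_size) + 1)
    ) / total

# 5 of a 10-metabolite query hitting a 20-substrate pathway out of 1000
# known metabolites is far more overlap than chance predicts, so the
# pathway would be flagged as statistically over-represented.
p = enrichment_p(20, 5, 1000, 10)
```

A small p-value for a pathway means its substrates appear in the metabolite list far more often than random sampling would explain, which is the criterion such an enrichment operation reports.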

  15. System of Operator Quasi Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Suhel Ahmad Khan

    2014-01-01

    Full Text Available We consider a system of operator quasi equilibrium problems and system of generalized quasi operator equilibrium problems in topological vector spaces. Using a maximal element theorem for a family of set-valued mappings as basic tool, we derive some existence theorems for solutions to these problems with and without involving Φ-condensing mappings.

  16. Overreaction to External Attacks on Computer Systems Could Be More Harmful than the Viruses Themselves.

    Science.gov (United States)

    King, Kenneth M.

    1988-01-01

    Discussion of the recent computer virus attacks on computers with vulnerable operating systems focuses on the values of educational computer networks. The need for computer security procedures is emphasized, and the ethical use of computer hardware and software is discussed. (LRW)

  17. A Computer System for a Faculty of Education.

    Science.gov (United States)

    Hallworth, Herbert J.

    A computer system, introduced for use in statistics courses within a college of education, features the performance of a variety of functions, a relatively economic operation, and the facilitation of placing remote terminals in schools. The system provides an interactive statistics laboratory in which the student learns to write programs for the…

  18. Computer Simulation and Computability of Biological Systems

    CERN Document Server

    Baianu, I C

    2004-01-01

    The ability to simulate a biological organism by employing a computer is related to the ability of the computer to calculate the behavior of such a dynamical system, or the "computability" of the system. However, the two questions of computability and simulation are not equivalent. Since the question of computability can be given a precise answer in terms of recursive functions, automata theory and dynamical systems, it will be appropriate to consider it first. The more elusive question of adequate simulation of biological systems by a computer will then be addressed, and a possible connection between the two answers will be considered as follows. A symbolic, algebraic-topological "quantum computer" (as introduced in Baianu, 1971b) is here suggested to provide one such potential means for adequate biological simulations based on QMV Quantum Logic and meta-Categorical Modeling, as for example in a QMV-based Quantum-Topos (Baianu and Glazebrook, 2004).

  19. The Computational Complexity of Evolving Systems

    NARCIS (Netherlands)

    Verbaan, P.R.A.

    2006-01-01

    Evolving systems are systems that change over time. Examples of evolving systems are computers with soft-and hardware upgrades and dynamic networks of computers that communicate with each other, but also colonies of cooperating organisms or cells within a single organism. In this research, several m

  20. SD-CAS: Spin Dynamics by Computer Algebra System.

    Science.gov (United States)

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that no matrix representation for spin operators is used in SD-CAS, which gives the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system and are easily mapped into analytical expressions in terms of spin operator products. For the SD-CAS spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. They provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for symbolic computation of spin dynamics in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples.
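
    The non-matrix idea can be imitated in miniature: products of spin-1/2 (Pauli) operators follow the structure constants σaσb = δab·I + i·εabc·σc, so a purely symbolic product table suffices, with no matrices anywhere. A toy sketch of that approach (not the SD-CAS/YACAS implementation; operators are stored as {basis label: coefficient} maps):

    ```python
    from itertools import product

    # Cyclic structure constants: sigma_x sigma_y = i sigma_z, etc.
    EPS = {('x', 'y'): 'z', ('y', 'z'): 'x', ('z', 'x'): 'y'}

    def basis_product(a: str, b: str):
        """Product of two Pauli basis labels -> (coefficient, label)."""
        if a == 'I': return 1, b
        if b == 'I': return 1, a
        if a == b:   return 1, 'I'          # sigma_a^2 = I
        if (a, b) in EPS: return 1j, EPS[(a, b)]
        return -1j, EPS[(b, a)]             # anticommuting pair, reversed

    def multiply(p: dict, q: dict) -> dict:
        """Symbolic product of two operators in the {label: coeff} form."""
        out = {}
        for (a, ca), (b, cb) in product(p.items(), q.items()):
            coeff, lab = basis_product(a, b)
            out[lab] = out.get(lab, 0) + ca * cb * coeff
        return {k: v for k, v in out.items() if v != 0}

    def commutator(p: dict, q: dict) -> dict:
        pq, qp = multiply(p, q), multiply(q, p)
        out = {k: pq.get(k, 0) - qp.get(k, 0) for k in set(pq) | set(qp)}
        return {k: v for k, v in out.items() if v != 0}

    sx, sy, sz = {'x': 1}, {'y': 1}, {'z': 1}
    # commutator(sx, sy) yields {'z': 2j}, i.e. [sigma_x, sigma_y] = 2i sigma_z
    ```

    The same bookkeeping generalizes to products over several spins, which is roughly the role the nested-list spin correlations play in SD-CAS.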

  1. A Framework for Adaptable Operating and Runtime Systems

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, Thomas

    2014-03-04

    The emergence of new classes of HPC systems where performance improvement is enabled by Moore’s Law for technology is manifest through multi-core-based architectures including specialized GPU structures. Operating systems were originally designed for control of uniprocessor systems. By the 1980s multiprogramming, virtual memory, and network interconnection were integral services incorporated as part of most modern computers. HPC operating systems were primarily derivatives of the Unix model with Linux dominating the Top-500 list. The use of Linux for commodity clusters was first pioneered by the NASA Beowulf Project. However, the rapid increase in number of cores to achieve performance gain through technology advances has exposed the limitations of POSIX general-purpose operating systems in scaling and efficiency. This project was undertaken through the leadership of Sandia National Laboratories and in partnership of the University of New Mexico to investigate the alternative of composable lightweight kernels on scalable HPC architectures to achieve superior performance for a wide range of applications. The use of composable operating systems is intended to provide a minimalist set of services specifically required by a given application to preclude overheads and operational uncertainties (“OS noise”) that have been demonstrated to degrade efficiency and operational consistency. This project was undertaken as an exploration to investigate possible strategies and methods for composable lightweight kernel operating systems towards support for extreme scale systems.

  2. Dynamic System Using Conjunctive Operator

    Directory of Open Access Journals (Sweden)

    József Dombi

    2006-01-01

    Full Text Available We present a tool to describe and simulate dynamic systems. We use positive and negative influences. Our starting point is aggregation. We build positive and negative effects with proper transformations of the sigmoid function and using the conjunctive operator. From the input we calculate the output effect with the help of the aggregation operator. This algorithm is comparable with the concept of fuzzy cognitive maps.
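
    A minimal sketch of the ingredients named above, assuming the standard Dombi t-norm as the conjunctive operator and a shifted sigmoid as the input transformation (the paper's exact operators and parameters may differ):

    ```python
    from math import exp

    def sigmoid(x: float, slope: float = 1.0, shift: float = 0.0) -> float:
        """Map a raw influence value onto the open unit interval (0, 1)."""
        return 1.0 / (1.0 + exp(-slope * (x - shift)))

    def dombi_conjunction(x: float, y: float, lam: float = 1.0) -> float:
        """Standard Dombi conjunctive (t-norm) operator on [0, 1]."""
        if x == 0.0 or y == 0.0:
            return 0.0
        s = ((1 - x) / x) ** lam + ((1 - y) / y) ** lam
        return 1.0 / (1.0 + s ** (1.0 / lam))

    # Two transformed inputs combined into one output effect.
    combined = dombi_conjunction(sigmoid(2.0), sigmoid(1.0))
    ```

    Iterating such an aggregation step over a graph of positive/negative influences is what makes the scheme comparable to a fuzzy cognitive map.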

  3. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  4. Design and Implementation of Air Conditioning System in Operating Room

    Directory of Open Access Journals (Sweden)

    Htet Htet Aung

    2014-10-01

    Full Text Available The system is an air conditioning system for an operating room. Its main objective is to provide the air balance and temperature conditions required in an operating room and to control the airflow of the ventilation units. The operating room is controlled with a fuzzy expert system that describes the desired outputs. Temperature, humidity, oxygen and particle level are used as input parameters, and the air conditioning motor speed and exhaust motor speed are chosen as output parameters. Among the inputs, oxygen is taken at a medium optimal condition and the other parameters at the minimum condition for an operating room. The airflow control system comprises two components, the airflow block and the thermal block, for the ventilation units in the operating room. The mathematical model of each subsystem is based on a computational procedure, and the models are combined in an efficient manner. Whether the prototype provides the most suitable control for the system was determined by simulating its operation while varying the number of personnel and the duration of time. Finally, with temperature and airflow regulated by a PI controller, simulation results for the entire ventilation unit control system are obtained.
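
    The temperature loop mentioned above closes with a PI controller. A toy sketch against a hypothetical first-order room model (all gains and coefficients are illustrative, not taken from the paper):

    ```python
    class PIController:
        """Minimal discrete PI controller: u = Kp*e + Ki * integral(e)."""
        def __init__(self, kp: float, ki: float, dt: float):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.integral = 0.0

        def step(self, setpoint: float, measured: float) -> float:
            error = setpoint - measured
            self.integral += error * self.dt
            return self.kp * error + self.ki * self.integral

    # Hypothetical room model: ambient 28 degC, setpoint 22 degC; positive
    # controller output pushes the temperature up, so cooling demand is
    # produced as a negative output.
    pi = PIController(kp=2.0, ki=0.5, dt=1.0)
    temp = 28.0
    for _ in range(200):
        u = pi.step(22.0, temp)
        temp += (-0.1 * (temp - 28.0) + 0.05 * u) * pi.dt
    # temp settles near the 22 degC setpoint
    ```

    The integral term is what removes the steady-state offset against the constant heat leak from the warmer ambient, which is why a pure P controller would settle above the setpoint here.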

  5. ACSES, An Automated Computer Science Education System.

    Science.gov (United States)

    Nievergelt, Jurg; And Others

    A project to accommodate the large and increasing enrollment in introductory computer science courses by automating them with a subsystem for computer science instruction on the PLATO IV Computer-Based Education system at the University of Illinois was started. The subsystem was intended to be used for supplementary instruction at the University…

  6. Modernisation of the CERN operational dosimetry system

    CERN Multimedia

    Caroline Duc

    2013-01-01

     As part of continual efforts to ensure the highest standards in the field of Radiation Protection, CERN is modernising its operational dosimetry system.   Pierre Carbonez, head of the Dosimetry and Instrument Calibration Service, with one of the new automatic operational dosimetry reader terminals. No more sheets of paper to record radiation doses at the entrance to the accelerators: the operational dosimetry system is being modernised! Since March, 50 automatic operational dosimeter reader terminals have been in operation around the accelerator complex. Operational dosimeters (DMC) complement the "passive" dosimeters (DIS) and must be used every time you enter Controlled Radiation Areas. They measure the dose of radiation received by the exposed worker in real time and give a warning if the acceptable threshold is exceeded. The new dosimeter reader system allows the dose recording procedure to be automated. “Every pers...

  7. Technology development for remote, computer-assisted operation of a continuous mining machine

    Energy Technology Data Exchange (ETDEWEB)

    Schnakenberg, G.H. [Pittsburgh Research Center, PA (United States)

    1993-12-31

    The U.S. Bureau of Mines was created to conduct research to improve the health, safety, and efficiency of the coal and metal mining industries. In 1986, the Bureau embarked on a new, major research effort to develop the technology that would enable the relocation of workers from hazardous areas to areas of relative safety. This effort is in contrast to historical efforts by the Bureau of controlling or reducing the hazardous agent or providing protection to the worker. The technologies associated with automation, robotics, and computer software and hardware systems had progressed to the point that their use to develop computer-assisted operation of mobile mining equipment appeared to be a cost-effective and accomplishable task. At the first International Symposium of Mine Mechanization and Automation, an overview of the Bureau's computer-assisted mining program for underground coal mining was presented. The elements included providing computer-assisted tele-remote operation of continuous mining machines, haulage systems and roof bolting machines. Areas of research included sensors for machine guidance and for coal interface detection. Additionally, the research included computer hardware and software architectures which are extremely important in developing technology that is transferable to industry and is flexible enough to accommodate the variety of machines used in coal mining today. This paper provides an update of the research under the computer-assisted mining program.

  8. Cluster based parallel database management system for data intensive computing

    Institute of Scientific and Technical Information of China (English)

    Jianzhong LI; Wei ZHANG

    2009-01-01

    This paper describes a computer-cluster based parallel database management system (DBMS), InfiniteDB, developed by the authors. InfiniteDB aims at efficiently supporting data intensive computing in response to the rapid growth in database size and the need for high performance analysis of massive databases. It can be efficiently executed in computing systems composed of thousands of computers, such as cloud computing systems. It supports the parallelisms of intra-query, inter-query, intra-operation, inter-operation and pipelining. It provides effective strategies for managing massive databases including multiple data declustering methods, declustering-aware algorithms for relational and other database operations, and an adaptive query optimization method. It also provides the functions of parallel data warehousing and data mining, a coordinator-wrapper mechanism to support the integration of heterogeneous information resources on the Internet, and fault tolerant and resilient infrastructures. It has been used in many applications and has proved quite effective for data intensive computing.
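
    Declustering, the first strategy listed above, can be illustrated with a simple hash partitioner: rows are spread across nodes so that a relational operator can run on every partition in parallel. A minimal sketch (a generic technique, not InfiniteDB's actual method):

    ```python
    def decluster(rows, nodes: int):
        """Hash-decluster (key, value) rows across `nodes` partitions,
        so each node owns a disjoint slice of the table."""
        partitions = [[] for _ in range(nodes)]
        for key, value in rows:
            partitions[hash(key) % nodes].append((key, value))
        return partitions

    def parallel_select(partitions, predicate):
        """A selection runs independently on every partition; the partial
        results are concatenated (simulating inter-node parallelism)."""
        return [row for part in partitions for row in part if predicate(row)]

    rows = [(i, i * i) for i in range(100)]
    partitions = decluster(rows, 4)
    evens = parallel_select(partitions, lambda r: r[1] % 2 == 0)
    ```

    A declustering-aware join would go one step further and route both relations by the same key hash, so matching rows always land on the same node.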

  9. Linearly programmed DNA-based molecular computer operated on magnetic particle surface in test-tube

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jian; ZHANG Zhizhou; SHI Yongyong; Li Xiuxia; HE Lin

    2004-01-01

    The postgenomic era has seen an emergence of new applications of DNA manipulation technologies, including DNA-based molecular computing. Surface DNA computing has already been reported in a number of studies that, however, all employ different mechanisms other than automaton functions. Here we describe a programmable DNA surface-computing device as a Turing machine-like finite automaton. The laboratory automaton is primarily composed of DNA (inputs, output-detectors, transition molecules as software), DNA manipulating enzymes and buffer system that solve artificial computational problems autonomously. When fluoresceins were labeled in the 5′ end of (-) strand of the input molecule, direct observation of all reaction intermediates along the time scale was made so that the dynamic process of DNA computing could be conveniently visualized. The features of this study are: (i) achievement of finite automaton functions by linearly programmed DNA computer operated on magnetic particle surface and (ii) direct detection of all DNA computing intermediates by capillary electrophoresis. Since DNA computing has the massive parallelism and feasibility for automation, this achievement sets a basis for large-scale implications of DNA computing for functional genomics in the near future.
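
    The automaton behaviour of such a device can be mirrored in software: transition molecules become transition rules, and the final state plays the role of the output detector. A hypothetical two-state example over the alphabet {a, b} (a parity machine for illustration, not the paper's actual automaton):

    ```python
    # Transition "molecules": (state, input symbol) -> next state.
    RULES = {
        ('S0', 'a'): 'S0',
        ('S0', 'b'): 'S1',
        ('S1', 'a'): 'S1',
        ('S1', 'b'): 'S0',
    }

    def run_automaton(tape: str, state: str = 'S0') -> str:
        """Consume the input 'strand' symbol by symbol; the sequence of
        intermediate states corresponds to the reaction intermediates the
        study observes by capillary electrophoresis."""
        for symbol in tape:
            state = RULES[(state, symbol)]
        return state

    # This rule set ends in S1 exactly when the input holds an odd
    # number of 'b' symbols.
    ```

    The massive parallelism the abstract mentions comes from running the molecular analogue of this loop on many input strands at once in the same tube.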

  10. The structure of the clouds distributed operating system

    Science.gov (United States)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system; that is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. 
Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data

  12. Intelligent decision support systems for sustainable computing paradigms and applications

    CERN Document Server

    Abraham, Ajith; Siarry, Patrick; Sheng, Michael

    2017-01-01

    This unique book discusses the latest research, innovative ideas, challenges and computational intelligence (CI) solutions in sustainable computing. It presents novel, in-depth fundamental research on achieving a sustainable lifestyle for society, either from a methodological or from an application perspective. Sustainable computing has expanded to become a significant research area covering the fields of computer science and engineering, electrical engineering and other engineering disciplines, and there has been an increase in the amount of literature on aspects of sustainable computing such as energy efficiency and natural resources conservation that emphasizes the role of ICT (information and communications technology) in achieving system design and operation objectives. The energy impact/design of more efficient IT infrastructures is a key challenge in realizing new computing paradigms. The book explores the uses of computational intelligence (CI) techniques for intelligent decision support that can be explo...

  13. Resource requirements for digital computations on electrooptical systems.

    Science.gov (United States)

    Eshaghian, M M; Panda, D K; Kumar, V K

    1991-03-10

    In this paper we study the resource requirements of electrooptical organizations in performing digital computing tasks. We define a generic model of parallel computation using optical interconnects, called the optical model of computation (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Using this model we derive relationships between information transfer and computational resources in solving a given problem. To illustrate our results, we concentrate on a computationally intensive operation, 2-D digital image convolution. Irrespective of the input/output scheme and the order of computation, we show a lower bound of Ω(nw) on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.
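
    The operation whose optical volume is bounded here is ordinary 2-D convolution. A naive electronic reference implementation makes the information flow concrete: every output pixel touches all w*w kernel weights, which is the traffic the Omega(nw) bound counts (kernel assumed pre-flipped, 'valid' output region only):

    ```python
    def convolve2d(image, kernel):
        """Naive 'valid' 2-D convolution of an n x n image with a w x w
        kernel, given as nested lists. The kernel is assumed pre-flipped,
        so this is written as a plain sliding-window dot product."""
        n, w = len(image), len(kernel)
        m = n - w + 1                       # side of the valid output
        out = [[0] * m for _ in range(m)]
        for i in range(m):
            for j in range(m):
                out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                                for a in range(w) for b in range(w))
        return out
    ```

    The electronic version performs Theta(n^2 w^2) multiply-adds; the paper's question is how much free-space optical volume any layout needs just to move the corresponding bits.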

  14. Resource requirements for digital computations on electrooptical systems

    Science.gov (United States)

    Eshaghian, Mary M.; Panda, Dhabaleswar K.; Kumar, V. K. Prasanna

    1991-03-01

    The resource requirements of electrooptical organizations in performing digital computing tasks are studied via a generic model of parallel computation using optical interconnects, called the 'optical model of computation' (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Relationships between information transfer and computational resources in solving a given problem are derived. A computationally intensive operation, two-dimensional digital image convolution is undertaken. Irrespective of the input/output scheme and the order of computation, a lower bound of Omega(nw) is obtained on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  15. Self-pacing direct memory access data transfer operations for compute nodes in a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Blocksome, Michael A

    2015-02-17

    Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer that include: transferring, by an origin DMA on an origin node, an RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor for transmitting a message portion to the target node, the target RGET descriptor specifying an origin RGET descriptor on the origin node that specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor.

  16. B190 computer controlled radiation monitoring and safety interlock system

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa, D L; Fields, W F; Gittins, D E; Roberts, M L

    1998-08-01

    The Center for Accelerator Mass Spectrometry (CAMS) in the Earth and Environmental Sciences Directorate at Lawrence Livermore National Laboratory (LLNL) operates two accelerators and is in the process of installing two additional accelerators in support of a variety of basic and applied measurement programs. To monitor the radiation environment in the facility in which these accelerators are located and to terminate accelerator operations if predetermined radiation levels are exceeded, an updated computer controlled radiation monitoring system has been installed. This new system also monitors various machine safety interlocks and likewise terminates accelerator operations if machine interlocks are broken. This new system replaces an older system that was originally installed in 1988. This paper describes the updated B190 computer controlled radiation monitoring and safety interlock system.

  17. Task allocation in a distributed computing system

    Science.gov (United States)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
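
    The load-equalization goal described above is often approximated with the greedy longest-processing-time heuristic: take the tasks largest first and always assign the next one to the currently least-loaded processor. A sketch of that policy (one of many static allocation strategies, not necessarily the paper's):

    ```python
    import heapq

    def balance_tasks(task_loads, processors: int):
        """Greedy LPT allocation: returns a list of
        (total load, processor id, assigned task loads) tuples."""
        # Min-heap keyed on current processor load; ids break ties.
        heap = [(0.0, p, []) for p in range(processors)]
        heapq.heapify(heap)
        for load in sorted(task_loads, reverse=True):
            total, p, tasks = heapq.heappop(heap)   # least-loaded processor
            heapq.heappush(heap, (total + load, p, tasks + [load]))
        return sorted(heap, key=lambda entry: entry[1])
    ```

    LPT is a static policy; a dynamic allocator would re-run the same "push to the least-loaded processor" decision as tasks arrive and complete at run time.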

  18. INJECT AN ELASTIC GRID COMPUTING TECHNIQUES TO OPTIMAL RESOURCE MANAGEMENT TECHNIQUE OPERATIONS

    Directory of Open Access Journals (Sweden)

    R. Surendran

    2013-01-01

    Full Text Available Resource sharing on the Internet has evolved into the dynamic technique of grid computing: resource sharing across large-scale, high-performance computing networks worldwide. Existing systems offer only limited innovation in the resource management process. In the proposed work, grid computing is treated as Internet-based computing for Optimal Resource Management Technique Operations (ORMTO). ORMTO covers elastic scheduling algorithms, prediction of the best grid node for a task, fault-tolerant resource selection, perfect resource co-allocation, grid-balanced resource matchmaking, agent-based grid services, and wireless-mobility resource access. We survey the various resource management techniques using performance measurement factors such as time complexity, space complexity and energy complexity to identify the ORMTO for grid computing. The objectives of ORMTO are to provide efficient, automatic resource co-allocation for a user who submits a job without grid knowledge; to design a grid service (portal) that selects the best fault-tolerant resource for a given task in a fast, secure and efficient manner; and to provide an enhanced grid-balancing system for multi-tasking via hybrid-topology-based grid ranking. Quality of service (QoS) parameters play an important role in all resource management techniques, and the proposed ORMTO uses a greater number of QoS parameters than existing techniques in order to enhance them.

  19. Information and computer-aided system for structural materials

    Energy Technology Data Exchange (ETDEWEB)

    Nekrashevitch, Yu.G.; Nizametdinov, Sh.U.; Polkovnikov, A.V.; Rumjantzev, V.P.; Surina, O.N. (Engineering Physics Inst., Moscow (Russia)); Kalinin, G.M.; Sidorenkov, A.V.; Strebkov, Yu.S. (Research and Development Inst. of Power Engineering, Moscow (Russia))

    1992-09-01

    An information and computer-aided system for structural materials data has been developed to provide data for the fusion and fission reactor system design. It is designed for designers, industrial engineers, and material science specialists and provides a friendly interface in an interactive mode. The database for structural materials contains the master files: chemical composition, physical, mechanical, corrosion, and technological properties, and regulatory and technical documentation. The system is implemented on a PC/AT running the PS/2 operating system. (orig.).

  20. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  1. Island operation - modelling of a small hydro power system

    Energy Technology Data Exchange (ETDEWEB)

    Skarp, Stefan

    2000-02-01

    Simulation is a useful tool for investigating system behaviour. It is a way to examine operating situations without having to perform them in reality: if, for example, an operating situation could damage the system, a computer simulation can be both cheaper and safer than a real test. This master thesis performs and analyses a simulation modelling an electric power system. The system consists of a minor hydro power station, a wood refining industry, and interconnecting power system components. In the simulated situation the system works in so-called island operation. The thesis aims at making a capacity analysis of the current system; above all, the goal is to find the restrictions on the consumer's load power profile under given circumstances. The computer software used in the simulations is Matlab and its additional package PSB (Power System Blockset). The work has been carried out in co-operation with the power supplier Skellefteaa Kraft, where the problem formulation of this master thesis originated.
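
    In island operation there is no stiff grid to absorb imbalances, so the frequency drifts whenever generation and load diverge, which is exactly what such a capacity analysis probes. A toy per-unit swing-equation sketch (illustrative constants, not the thesis's Matlab/PSB model):

    ```python
    def island_frequency(p_gen, p_load_profile, inertia=4.0, dt=0.1, f0=50.0):
        """Euler integration of a toy swing equation: the islanded
        system's frequency drifts in proportion to the per-unit
        generation/load imbalance, damped only by inertia."""
        f = f0
        trace = []
        for p_load in p_load_profile:
            f += (p_gen - p_load) / (2.0 * inertia) * dt
            trace.append(f)
        return trace

    # Balanced load holds 50 Hz; a 20 % overload drags the frequency down.
    balanced = island_frequency(1.0, [1.0] * 10)
    overload = island_frequency(1.0, [1.2] * 10)
    ```

    Finding the load profiles for which the frequency excursion stays inside acceptable limits is the kind of restriction the thesis seeks on the consumer side.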

  2. Some Experiences on BEPCII SRF System Operation

    CERN Document Server

    Tong-ming, Huang; Peng, Sha; Yi, Sun; Wei-min, Pan; Guang-wei, Wang; Jian-ping, Dai; Zhong-quan, Li; Qiang, Ma; Qun-yao, Wang; Guang-yuan, Zhao; Zheng-hui, Mi

    2014-01-01

    The Superconducting Radio Frequency (SRF) system of the upgrade project of the Beijing Electron Positron Collider (BEPCII) has been in operation for almost 8 years. The SRF system has successfully accelerated both electron and positron beams at the design beam current of 910 mA, and a high-intensity collision of 860 mA (electrons) * 910 mA (positrons) was achieved in April 2014. Many problems were encountered during operation, some of which were solved while others remain unsolved. This paper describes some experiences of BEPCII SRF system operation, including the symptoms, causes and solutions.

  3. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of simulators of computer systems are software-based running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  4. Formal Protection Architecture for Cloud Computing System

    Institute of Scientific and Technical Information of China (English)

    Yasha Chen; Jianpeng Zhao; Junmao Zhu; Fei Yan

    2014-01-01

    Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, the paper avoids heavy formalism and instead adopts the process algebra Communicating Sequential Processes (CSP).
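
    The CSP flavour of such a model can be hinted at with two processes that interact only through an explicit channel, with the protection process observing every event the computing process emits. This is a loose analogy coded with threads and a queue, not the paper's algebra:

    ```python
    import queue
    import threading

    def monitor(channel: "queue.Queue", verdicts: list):
        """The 'protection' process: consumes events from the channel and
        records a verdict for each. Event names here are hypothetical."""
        while True:
            event = channel.get()
            if event is None:          # sentinel ends the process
                break
            verdicts.append((event, event.startswith('allowed')))

    channel: "queue.Queue" = queue.Queue()
    verdicts: list = []
    t = threading.Thread(target=monitor, args=(channel, verdicts))
    t.start()
    # The 'computing' process emits its events into the shared channel.
    for event in ['allowed:read', 'denied:write', 'allowed:exec']:
        channel.put(event)
    channel.put(None)
    t.join()
    ```

    In CSP terms, the two sides synchronize only on channel events, which is what makes the protection behaviour amenable to algebraic reasoning separately from the computation it guards.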

  5. Computer Literacy in a Distance Education System

    Science.gov (United States)

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with seven skills of computer (ICDL) usage. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  6. Computer-Controlled, Motorized Positioning System

    Science.gov (United States)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1994-01-01

    A computer-controlled, motorized positioning system was developed for robotic manipulation of samples in a custom-built secondary-ion mass spectrometry (SIMS) system. It positions samples repeatably and accurately, even during analysis, in three linear orthogonal coordinates and one angular coordinate, under manual local control, microprocessor-based local control, or remote control by computer via a general-purpose interface bus (GPIB).

  7. Advanced Hybrid Computer Systems. Software Technology.

    Science.gov (United States)

    This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what ... automatic patching software is available as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer ... compiler software. The problem of how software would interface with the hybrid system is also presented.

  8. VLT Data Flow System Begins Operation

    Science.gov (United States)

    1999-06-01

    their proposed observations and provide accurate estimates of the amount of telescope time they will need to complete their particular scientific programme. Once the proposals have been reviewed by the OPC and telescope time is awarded by the ESO management according to the recommendation by this Committee, the successful astronomers begin to assemble detailed descriptions of their intended observations (e.g. position in the sky, time and duration of the observation, the instrument mode, etc.) in the form of computer files called Observation Blocks (OBs). The software to make OBs is distributed by ESO and used by the astronomers at their home institutions to design their observing programmes well before the observations are scheduled at the telescope. The OBs can then be directly executed by the VLT, resulting in increased efficiency in the collection of raw data (images, spectra) from the science instruments on the VLT. The activation (execution) of OBs can be done by the astronomer at the telescope on a particular set of dates (visitor mode operation) or it can be done by ESO science operations astronomers at times which are optimally suited for the particular scientific programme (service mode operation). An enormous VLT Data Archive ESO PR Photo 25b/99 [Preview - JPEG: 400 x 465 pix - 160k] [Normal - JPEG: 800 x 929 pix - 568k] [High-Res - JPEG: 3000 x 3483 pix - 5.5M] Caption to ESO PR Photo 25b/99: The first of several DVD storage robots at the VLT Data Archive at the ESO headquarters includes 1100 DVDs (with a total capacity of about 16 Terabytes) that may be rapidly accessed by the archive software system, ensuring fast availability of the requested data. The raw data generated at the telescope are stored by an archive system that sends these data regularly back to ESO headquarters in Garching (Germany) in the form of CD and DVD ROM disks. 
While the well-known Compact Disks (CD ROMs) store about 600 Megabytes (600,000,000 bytes) each, the

  9. System security in the space flight operations center

    Science.gov (United States)

    Wagner, David A.

    1988-01-01

    The Space Flight Operations Center is a networked system of workstation-class computers that will provide ground support for NASA's next generation of deep-space missions. The author recounts the development of the SFOC system security policy and discusses the various management and technology issues involved. Particular attention is given to risk assessment, security plan development, security implications of design requirements, automatic safeguards, and procedural safeguards.

  10. Biomolecular computing systems: principles, progress and potential.

    Science.gov (United States)

    Benenson, Yaakov

    2012-06-12

    The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.

  11. Public Address Systems. Specifications - Installation - Operation.

    Science.gov (United States)

    Palmer, Fred M.

    Provisions for public address in new construction of campus buildings (specifications, installations, and operation of public address systems), are discussed in non-technical terms. Consideration is given to microphones, amplifiers, loudspeakers and the placement and operation of various different combinations. (FS)

  12. SYSTEM OF STANDARTIZATION OF CONSTRUCTION OPERATIONS ARRANGEMENT

    Directory of Open Access Journals (Sweden)

    Oleynik Pavel Pavlovich

    2012-10-01

    Full Text Available In the proposed article, management of construction operations is represented as a multi-level system; it is considered within the framework of projects including new construction, restructuring and overhaul of buildings and structures. The system of management of construction operations is composed of the following three constituent parts: a construction and assembling entity, project and operations, and a procurement base. Such matters as the quality of construction products and the purchase (or lease) of building machinery and vehicles are incorporated into the level of the construction and assembling entity. The project level comprises such components of construction operations management as pre-construction preparation of a project, methods and forms of construction management, preparatory works, management of construction activities, real-time operations control, construction quality control, etc. The level of operations and the procurement base covers the needs for materials and equipment, their purchase and procurement, as well as warehouse management. The main elements of the standardization system are identified. Standards of construction operations management are explained, including: 1. General Provisions; 2. Preparation and performance of construction and assembling works; 3. New construction. Building site organization; 4. Demolition (dismantling) of buildings and structures; 5. Rules of preparation for acceptance and commissioning of completed residential buildings. The prospects for the further development of the system of standardization of construction operations management are outlined.

  13. Automated Diversity in Computer Systems

    Science.gov (United States)

    2005-09-01

    P(EBMI) = Me2a; P(ELMP) = ps and P(EBMP) = ps. We are interested in the probability of a successful branch (escape) out of a sequence of n... reference is still legal. Both can generate false positives, although CRED is less computationally expensive. The common theme in all these

  14. Development of control system in abdominal operating ROV

    Directory of Open Access Journals (Sweden)

    ZHANG Weikang

    2017-03-01

    Full Text Available In order to satisfy the requirements of Unmanned Underwater Vehicle (UUV) recovery tasks, a new type of abdominal operating Remotely Operated Vehicle (ROV) was developed. The abdominal operating ROV differs from a general ROV, which works with a manipulator, in that it completes the docking and recovery of UUVs with its abdominal operating mechanism. In this paper, the system composition and principles of the abdominal operating ROV are presented. We then propose a framework for a control system in which an integrated industrial reinforced computer acts as the surface monitoring unit, a PC104 embedded industrial computer acts as the underwater master control unit, and other drive boards act as the driver unit. In addition, a dynamics model and a robust H-infinity controller for automatic orientation in the horizontal plane were designed and built. Single tests, system tests and underwater tests show that this control system has good real-time performance and reliability, and that it can complete the recovery task of a UUV. The presented structure and algorithm could serve as a reference for the control system development of mobile robots, drones, and biomimetic robots.

  15. Method for concurrent execution of primitive operations by dynamically assigning operations based upon computational marked graph and availability of data

    Science.gov (United States)

    Stoughton, John W. (Inventor); Mielke, Roland V. (Inventor)

    1990-01-01

    Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computational marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
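
    The dispatch rule this record describes — fire a primitive operation once every one of its input data edges carries a value, while the assignment manager merely tracks availability — can be sketched in Python. This is an illustrative model only; the names and the sequential loop are hypothetical, not the patented design, in which ready operations run concurrently on separate functional units.

    ```python
    # Hypothetical sketch of an assignment manager over a marked graph:
    # an operation becomes "ready" when all of its input edges hold data.
    def run_marked_graph(ops, deps, initial):
        """ops: {name: callable}, deps: {name: [input names]},
        initial: {name: value} for source data already available."""
        data = dict(initial)
        pending = set(ops)
        order = []
        while pending:
            # operations whose inputs are all available (their marking is full)
            ready = [op for op in pending if all(d in data for d in deps[op])]
            if not ready:
                raise RuntimeError("deadlock: no operation is ready")
            for op in ready:  # on real hardware these would run concurrently
                data[op] = ops[op](*(data[d] for d in deps[op]))
                pending.discard(op)
                order.append(op)
        return data, order

    data, order = run_marked_graph(
        ops={"sum": lambda a, b: a + b, "sq": lambda s: s * s},
        deps={"sum": ["x", "y"], "sq": ["sum"]},
        initial={"x": 3, "y": 4},
    )
    # "sum" is dispatched first; "sq" only once "sum" has produced data
    ```

    Note the manager never touches the data values themselves, mirroring the abstract's point that all data accessing is performed by the functional units.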

  16. Nuclear Materials Identification System Operational Manual

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, L.G.

    2001-04-10

    This report describes the operation and setup of the Nuclear Materials Identification System (NMIS) with a {sup 252}Cf neutron source at the Oak Ridge Y-12 Plant. The components of the system are described with a description of the setup of the system along with an overview of the NMIS measurements for scanning, calibration, and confirmation of inventory items.

  17. Architectural requirements for the Red Storm computing system.

    Energy Technology Data Exchange (ETDEWEB)

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight-kernel compute-node operating system, and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  18. Operational reliability of standby safety systems

    Energy Technology Data Exchange (ETDEWEB)

    Grant, G.M.; Atwood, C.L.; Gentillon, C.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others]

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) is evaluating the operational reliability of several risk-significant standby safety systems based on the operating experience at US commercial nuclear power plants from 1987 through 1993. The reliability assessed is the probability that the system will perform its Probabilistic Risk Assessment (PRA) defined safety function. The quantitative estimates of system reliability are expected to be useful in risk-based regulation. This paper is an overview of the analysis methods and the results of the high pressure coolant injection (HPCI) system reliability study. Key characteristics include (1) descriptions of the data collection and analysis methods, (2) the statistical methods employed to estimate operational unreliability, (3) a description of how the operational unreliability estimates were compared with typical PRA results, both overall and for each dominant failure mode, and (4) a summary of results of the study.

  19. PCOS - An operating system for modular applications

    Science.gov (United States)

    Tharp, V. P.

    1986-01-01

    This paper is an introduction to the PCOS operating system for the MC68000 family of processors. Topics covered are: development history; development support; the rationale for the development of PCOS and its salient characteristics; architecture; and a brief comparison of PCOS to UNIX.

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  1. Towards an operating system for intercloud

    NARCIS (Netherlands)

    Strijkers, R.J.; Cushing, R.; Makkes, M.X.; Meulenhoff, P.J.; Belloum, A.; Laat, C.D.; Meijer, R.J.

    2013-01-01

    Cyber physical systems, such as intelligent dikes and smart energy systems, require scalable and flexible computing infrastructures to process data from instruments and sensor networks. Infrastructure as a Service clouds provide a flexible way to allocate remote distributed resources, but lack

  3. Modeling Control Situations in Power System Operations

    DEFF Research Database (Denmark)

    Saleem, Arshad; Lind, Morten; Singh, Sri Niwas

    2010-01-01

    Increased interconnection and loading of the power system along with deregulation has brought new challenges for electric power system operation, control and automation. Traditional power system models used in intelligent operation and control are highly dependent on the task purpose. Thus, a model...... for intelligent operation and control must represent system features, so that information from measurements can be related to possible system states and to control actions. These general modeling requirements are well understood, but it is, in general, difficult to translate them into a model because of the lack...... of explicit principles for model construction. This paper presents a work on using explicit means-ends model based reasoning about complex control situations which results in maintaining consistent perspectives and selecting appropriate control action for goal driven agents. An example of power system...

  4. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi......-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel...... and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel....

  6. A System for Monitoring and Management of Computational Grids

    Science.gov (United States)

    Smith, Warren; Biegel, Bryan (Technical Monitor)

    2002-01-01

    As organizations begin to deploy large computational grids, it has become apparent that systems for observation and control of the resources, services, and applications that make up such grids are needed. Administrators must observe the operation of resources and services to ensure that they are operating correctly and they must control the resources and services to ensure that their operation meets the needs of users. Users are also interested in the operation of resources and services so that they can choose the most appropriate ones to use. In this paper we describe a prototype system to monitor and manage computational grids and describe the general software framework for control and observation in distributed environments that it is based on.

  7. Three computer codes to read, plot and tabulate operational test-site recorded solar data

    Science.gov (United States)

    Stewart, S. D.; Sampson, R. S., Jr.; Stonemetz, R. E.; Rouse, S. L.

    1980-01-01

    Computer programs used to process data that will be used in the evaluation of collector efficiency and solar system performance are described. The program, TAPFIL, reads data from an IBM 360 tape containing information (insolation, flowrates, temperatures, etc.) from 48 operational solar heating and cooling test sites. Two other programs, CHPLOT and WRTCNL, plot and tabulate the data from the direct access, unformatted TAPFIL file. The methodology of the programs, their inputs, and their outputs are described.

  8. Study of operational parameters impacting helicopter fuel consumption. [using computer techniques (computer programs)

    Science.gov (United States)

    Cross, J. L.; Stevens, D. D.

    1976-01-01

    A computerized study of operational parameters affecting helicopter fuel consumption was conducted as an integral part of the NASA Civil Helicopter Technology Program. The study utilized the Helicopter Sizing and Performance Computer Program (HESCOMP) developed by the Boeing-Vertol Company and NASA Ames Research Center. An introduction to HESCOMP is incorporated in this report. The results presented were calculated using the NASA CH-53 civil helicopter research aircraft specifications. Plots are presented from which optimum flight conditions for minimum fuel use can be obtained for this aircraft. The results of the study are considered to be generally indicative of trends for all helicopters.

  9. Laser Imaging Systems For Computer Vision

    Science.gov (United States)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessor/converter, extending the access of computer "intelligence" to inspection, analysis and decision in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile to humans, etc. Considering that the heart of the problem is the matching of optical methods and computer software, some of the most promising interferometric, projection and diffraction systems are reviewed, with discussions of our present results and of their potential in precise 3D computer vision.

  10. Establishing performance requirements of computer based systems subject to uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer based systems. Computer based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on the computer processing of external and internal information, the design process quickly borders on chaos. This situation is exacerbated by the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer based decision processing systems subject to parameter and environmental variation. The approach will be demonstrated with application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer based systems.
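
    A probabilistically based performance requirement of the kind this abstract describes can be illustrated with a small Monte Carlo sketch in Python. This is a generic illustration, not the paper's method; the threshold diagnostic, parameter ranges and noise level are all hypothetical.

    ```python
    import random

    # Hypothetical example: estimate the probability that a threshold-based
    # diagnostic detects a degraded reading, when both the underlying
    # parameter and the sensor noise vary from trial to trial.
    def detection_probability(threshold, trials=100_000, seed=1):
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            true_level = rng.uniform(1.2, 1.8)         # parameter variation
            measured = true_level + rng.gauss(0, 0.2)  # sensor noise
            if measured > threshold:                   # diagnostic fires
                hits += 1
        return hits / trials

    p = detection_probability(threshold=1.0)
    # A probabilistic requirement might then read: "detection probability
    # of at least 0.95 over the stated parameter and noise ranges".
    ```

    The point of stating the requirement this way is that it remains checkable even though no single trial, taken alone, says anything about compliance.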

  11. Computer Bits: The Ideal Computer System for Your Center.

    Science.gov (United States)

    Brown, Dennis; Neugebauer, Roger

    1986-01-01

    Reviews five computer systems that can address the needs of a child care center: (1) Sperry PC IT with Bernoulli Box, (2) Compaq DeskPro 286, (3) Macintosh Plus, (4) Epson Equity II, and (5) Leading Edge Model "D." (HOD)

  12. Effective operator formalism for open quantum systems

    DEFF Research Database (Denmark)

    Reiter, Florentin; Sørensen, Anders Søndberg

    2012-01-01

    We present an effective operator formalism for open quantum systems. Employing perturbation theory and adiabatic elimination of excited states for a weakly driven system, we derive an effective master equation which reduces the evolution to the ground-state dynamics. The effective evolution...... involves a single effective Hamiltonian and one effective Lindblad operator for each naturally occurring decay process. Simple expressions are derived for the effective operators which can be directly applied to reach effective equations of motion for the ground states. We compare our method...
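
    The general shape of the reduction described here can be written compactly; the following is a sketch of the standard second-order perturbative form (with $V_\pm$ the excitation and de-excitation couplings, $H_g$ and $H_e$ the ground- and excited-subspace Hamiltonians, and $L_k$ the decay operators), which readers should check against the paper itself:

    ```latex
    H_{\mathrm{eff}} = -\tfrac{1}{2} V_{-}\!\left[ H_{\mathrm{NH}}^{-1}
        + \bigl( H_{\mathrm{NH}}^{-1} \bigr)^{\dagger} \right] V_{+} + H_{g},
    \qquad
    L_{\mathrm{eff}}^{k} = L_{k}\, H_{\mathrm{NH}}^{-1} V_{+},
    \qquad
    H_{\mathrm{NH}} = H_{e} - \tfrac{i}{2} \sum_{k} L_{k}^{\dagger} L_{k}.
    ```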

  13. Method and Apparatus Providing Deception and/or Altered Operation in an Information System Operating System

    Science.gov (United States)

    Cohen, Fred; Rogers, Deanna T.; Neagoe, Vicentiu

    2008-10-14

    A method and/or system and/or apparatus providing deception and/or execution alteration in an information system. In specific embodiments, deceptions and/or protections are provided by intercepting and/or modifying operation of one or more system calls of an operating system.

  14. Viability of Hybrid Systems A Controllability Operator Approach

    CERN Document Server

    Labinaz, G

    2012-01-01

    The problem of viability of hybrid systems is considered in this work. A model for a hybrid system is developed, including a means of incorporating three forms of uncertainty: transition dynamics, structural uncertainty, and parametric uncertainty. A computational basis for viability of hybrid systems is developed and applied to three control law classes. An approach is developed for robust viability based on two extensions of the controllability operator. The three-tank example is examined for both the viability problem and the robust viability problem. The theory is applied through simulation to an active magnetic bearing system and to a batch polymerization process, showing that viability can be satisfied in practice. The problem of viable attainability is examined based on the controllability operator approach introduced by Nerode and colleagues. Lastly, properties of the controllability operator are presented.

  15. An Optical Tri-valued Computing System

    Directory of Open Access Journals (Sweden)

    Junjie Peng

    2014-03-01

    Full Text Available A new optical computing experimental system is presented. Designed on the basis of tri-valued logic, the system is built as a photoelectric hybrid computer system with significant advantages over its electronic counterparts. The tri-valued logic makes the system more powerful in information processing than systems with binary logic, and its optical character makes it far more capable of processing huge volumes of data than electronic computers. The optical computing system includes two parts, an electronic part and an optical part. The electronic part consists of a PC and two embedded systems which are used for data input/output, monitoring, synchronous control, user data combination and separation, and so on. The optical part includes three components: an optical encoder, a logic calculator and a decoder. It is mainly responsible for encoding users' requests into tri-valued optical information, computing and processing the requests, and decoding the tri-valued optical information into binary electronic information. Experimental results show that the system performs optical information processing correctly, which demonstrates the feasibility and correctness of the optical computing system.
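
    Tri-valued logic of the general kind this system realizes optically can be sketched in software. The following is a minimal Kleene-style three-valued logic in Python; it illustrates three-valued logic in general, not the paper's particular optical encoding, and the value naming is hypothetical.

    ```python
    # Three-valued (Kleene) logic over {0, 1, 2}, read as
    # false / unknown / true: AND is min, OR is max, NOT reverses the scale.
    FALSE, UNKNOWN, TRUE = 0, 1, 2

    def t_and(a, b): return min(a, b)
    def t_or(a, b):  return max(a, b)
    def t_not(a):    return 2 - a

    # A single ternary digit (trit) carries log2(3) ~ 1.58 bits, which is
    # the usual information-density argument for tri-valued systems.
    assert t_and(TRUE, UNKNOWN) == UNKNOWN
    assert t_or(FALSE, UNKNOWN) == UNKNOWN
    assert t_not(UNKNOWN) == UNKNOWN
    ```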

  16. Computing an operating parameter of a unified power flow controller

    Science.gov (United States)

    Wilson, David G; Robinett, III, Rush D

    2015-01-06

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.

  17. Hybrid Systems: Computation and Control.

    Science.gov (United States)

    2007-11-02

    elbow) and a pinned first joint (shoulder) (see Figure 2); it is termed an underactuated system since it is a mechanical system with fewer... Montreal, PQ, Canada, 1998. [10] M. W. Spong. Partial feedback linearization of underactuated mechanical systems. In Proceedings, IROS, pages 314-321... control mechanism and search for optimal combinations of control variables. Besides the nonlinear and hybrid nature of powertrain systems, hardware

  18. A Wearable Computing System for Dynamic Locating of Parking Spaces

    OpenAIRE

    Damian Mrugala; Alexander Dannies; Walter Lang

    2010-01-01

    This paper describes a dynamic locating system implemented in an autonomous wearable computing system for an automobile warehouse management application. While the first prototype was developed as a jacket [1], this prototype is miniaturized and realized as a holster consisting of several modules for identification, communication and localization. It is worn by employees during the warehousing of automobiles. The modules collect data, which are used by the operating system to calculate ...

  19. A Wearable Computing System for Dynamic Locating of Parking Spaces

    Directory of Open Access Journals (Sweden)

    Damian Mrugala

    2010-07-01

    Full Text Available This paper describes a dynamic locating system implemented in an autonomous wearable computing system for an automobile warehouse management application. While the first prototype was developed as a jacket [1], this prototype is miniaturized and realized as a holster consisting of several modules for identification, communication and localization. It is worn by employees during the warehousing of automobiles. The modules collect data, which are used by the operating system to calculate the location of parking spaces dynamically.

  20. MTA Computer Based Evaluation System.

    Science.gov (United States)

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  2. SNS Target Systems initial operating experience

    Science.gov (United States)

    McManamy, T.; Forester, J.

    2009-02-01

    The SNS mercury target started operation with low beam power when commissioned on April 28, 2006. The beam power has followed a planned ramp-up since then and reached 340 kW as of February 2008. The target systems supporting neutron production include the target and mercury loop, the cryogenic and ambient moderator systems, reflector and vessel systems, bulk shielding and shutter systems, utility systems, remote handling systems and the associated instrumentation and controls. Availability for these systems has improved with time and reached 100% for the first 2000-hour neutron production run in fiscal year 2008. This paper gives an overview of the operating experience with these systems and of the planning to support continued power increases to 1.4 MW.

  3. Utilization of Computer Technology in the Third World: An Evaluation of Computer Operations at the University of Honduras.

    Science.gov (United States)

    Shermis, Mark D.

    This report of the results of an evaluation of computer operations at the University of Honduras (Universidad Nacional Autonoma de Honduras) begins by discussing the problem--i.e., poor utilization of the campus mainframe computer--and listing the hardware and software available in the computer center. Data collection methods are summarized,…

  4. Operation experience with the LHC RF system

    CERN Document Server

    Arnaudon, L; Brunner, O; Butterworth, A

    2010-01-01

    The LHC ACS RF system is composed of 16 superconducting cavities, eight per ring, housed in a total of four cryomodules each containing four cavities. Each cavity is powered by a 300 kW klystron. The ACS RF power control system is based on industrial Programmable Logic Controllers (PLCs), with additional fast RF interlock protection systems. The Low Level RF (LLRF) is implemented in VME crates. Operational performance and reliability are described. A full set of user interfaces, both for experts and operators, has been developed, with user feedback and maintenance issues as key points. Operational experience with the full RF chain, including the low-level system, the beam control, the synchronization system and the optical-fibre distribution, is presented. Last but not least, overall performance and reliability based on experience with first beam are reviewed and perspectives for future improvement are outlined.

  5. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    Science.gov (United States)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  6. A computational system for a Mars rover

    Science.gov (United States)

    Lambert, Kenneth E.

    1989-01-01

    This paper presents an overview of an onboard computing system that can be used for meeting the computational needs of a Mars rover. The paper begins by presenting an overview of some of the requirements which are key factors affecting the architecture. The rest of the paper describes the architecture. Particular emphasis is placed on the criteria used in defining the system and how the system qualitatively meets the criteria.

  7. Chandrasekhar equations and computational algorithms for distributed parameter systems

    Science.gov (United States)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.
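The abstract does not reproduce the equations themselves. As a hedged, finite-dimensional illustration of what "computing a feedback gain operator" means (the paper's distributed-parameter problem is approximated by such problems), a scalar LQR gain can be obtained by integrating the Riccati equation backward in time; all plant and cost values below are hypothetical:

```python
# Scalar LQR sketch (illustrative only; the paper treats the
# distributed-parameter case via approximation theory).
a, b = 1.0, 1.0      # hypothetical (unstable) plant:  x' = a x + b u
q, r = 1.0, 1.0      # quadratic cost weights

def riccati_rhs(p):
    # Scalar Riccati equation, integrated backward from the terminal time
    # (written forward here in a time-reversed variable):
    #   -p' = 2 a p - (b**2 / r) * p**2 + q
    return 2 * a * p - (b * b / r) * p * p + q

p, dt = 0.0, 1e-4
for _ in range(200_000):          # horizon long enough to reach steady state
    p += dt * riccati_rhs(p)

k = b * p / r                     # feedback gain: u = -k x
print(round(k, 3))                # approx 1 + sqrt(2) = 2.414
```

The Chandrasekhar approach tracks the gain (and a factor of the Riccati derivative) directly instead of the full Riccati solution, which pays off when the state space is large or infinite-dimensional.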

  8. Large Scale Development of Computer-Based Instructional Systems.

    Science.gov (United States)

    Olivier, William P.; Scott, G.F.

    The Individualization Project at the Ontario Institute for Studies in Education (OISE) was organized on a cooperative basis with a federal agency and several community colleges to move smoothly from R&D to a production mode of operation, and finally to emphasize dissemination of computer courseware and systems. The key to the successful…

  9. Investing in Computer Technology: Criteria and Procedures for System Selection.

    Science.gov (United States)

    Hofstetter, Fred T.

    The criteria used by the University of Delaware in selecting the PLATO computer-based educational system are discussed in this document. Consideration was given to support for instructional strategies, requirements of the student learning station, features for instructors and authors of instructional materials, general operational characteristics,…

  10. Intelligent computational systems for space applications

    Science.gov (United States)

    Lum, Henry, Jr.; Lau, Sonie

    1989-01-01

    The evolution of intelligent computation systems is discussed starting with the Spaceborne VHSIC Multiprocessor System (SVMS). The SVMS is a six-processor system designed to provide at least a 100-fold increase in both numeric and symbolic processing over the i386 uniprocessor. The significant system performance parameters necessary to achieve the performance increase are discussed.

  11. Multiprocessing system for performing floating point arithmetic operations

    Energy Technology Data Exchange (ETDEWEB)

    Nguyenphu, M.; Thatcher, L.E.

    1990-10-02

    This patent describes a data processing system. It comprises: a fixed point arithmetic processor means for performing fixed point arithmetic operations and including control means for decoding a floating point arithmetic instruction specifying a floating point arithmetic operation, and an addressing means for computing addresses for floating point data for the floating point operation from a memory means. The memory means for storing data and including means for receiving the addresses from the fixed point arithmetic processor means and providing the floating point data to a floating point arithmetic processor means; and the floating point arithmetic processor means for performing floating point arithmetic operations and including control means for decoding the floating point instruction and performing the specified floating point arithmetic operation upon the floating point data from the memory means.
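As a toy illustration of the division of labor the claim describes (a fixed-point unit decodes the instruction and computes operand addresses, the memory supplies the floating-point data, and a floating-point unit executes the decoded operation), with all names and encodings hypothetical:

```python
# Toy sketch of the claimed architecture (names and encodings hypothetical):
# fixed-point unit -> addresses, memory -> data, floating-point unit -> result.
memory = {0x10: 1.5, 0x14: 2.25, 0x18: None}   # address -> float

def fixed_point_unit(instruction):
    """Decode an FP instruction and compute its operand addresses."""
    op, base, offsets = instruction
    return op, [base + off for off in offsets]

def floating_point_unit(op, operands):
    """Perform the decoded floating-point operation."""
    ops = {"fadd": lambda x, y: x + y, "fmul": lambda x, y: x * y}
    return ops[op](*operands)

op, (src1, src2, dst) = fixed_point_unit(("fadd", 0x10, [0, 4, 8]))
result = floating_point_unit(op, [memory[src1], memory[src2]])
memory[dst] = result
print(memory[0x18])   # 3.75
```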

  12. Computation of Weapons Systems Effectiveness

    Science.gov (United States)

    2013-09-01

    Nomenclature (extraction fragments): aircraft dive angle; VOx: initial weapon release velocity along the x-axis; VOz: initial weapon release velocity along the z-axis; h: release altitude. Impact velocity (x-axis): Vix = VOx (3.4). Impact velocity (z-axis): Viz = VOz + g·TOF (3.5). Impact velocity: Vi = √(Vix² + Viz²) (3.6). ... compute the ballistic partials to examine the effects that varying h, VOx and VOz have on the ballistic range RB using equations of the form ∂RB/∂h = (new RB − old RB) / ...
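The impact-velocity relations quoted above can be checked numerically; the sample release conditions below are hypothetical:

```python
import math

# Numerical check of the impact-velocity relations (sample inputs hypothetical).
g = 9.81                  # gravitational acceleration, m/s^2
VOx, VOz = 150.0, 40.0    # weapon release velocity components, m/s
TOF = 6.0                 # time of fall, s

Vix = VOx                 # impact velocity, x-axis  (Eq. 3.4)
Viz = VOz + g * TOF       # impact velocity, z-axis  (Eq. 3.5)
Vi = math.hypot(Vix, Viz) # total impact velocity    (Eq. 3.6)
print(round(Vi, 1))
```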

  13. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  14. The university computer network security system

    Institute of Scientific and Technical Information of China (English)

    张丁欣

    2012-01-01

    With the development of the times and advances in technology, computer networking has reached into every aspect of people's lives; it plays an increasingly important role and is an essential tool for information exchange. Colleges and universities are the cradle in which new technologies are cultivated and nurtured, so institutions of higher learning should pay close attention to the construction of computer network security systems.

  15. QUBIT DATA STRUCTURES FOR ANALYZING COMPUTING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Vladimir Hahanov

    2014-11-01

    Full Text Available Qubit models and methods for improving the performance of software and hardware for analyzing digital devices by increasing the dimension of the data structures and memory are proposed. The basic concepts, terminology and definitions necessary for implementing quantum computing in the analysis of virtual computers are introduced. Results concerning the design and modeling of computer systems in cyberspace on the basis of a two-component structure are presented.

  16. Computational Intelligence in Information Systems Conference

    CERN Document Server

    Au, Thien-Wan; Omar, Saiful

    2017-01-01

    This book constitutes the Proceedings of the Computational Intelligence in Information Systems conference (CIIS 2016), held in Brunei, November 18–20, 2016. The CIIS conference provides a platform for researchers to exchange the latest ideas and to present new research advances in general areas related to computational intelligence and its applications. The 26 revised full papers presented in this book have been carefully selected from 62 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.

  17. Co-operative Scheduled Energy Aware Load-Balancing technique for an Efficient Computational Cloud

    Directory of Open Access Journals (Sweden)

    T R V Anandharajan

    2011-03-01

    Full Text Available Cloud computing has in recent years been evolving from scientific toward non-scientific and commercial applications. Power consumption and load balancing are important and complex problems in a computational cloud, which differs from traditional high-performance computing systems in the heterogeneity of both its computing nodes and the communication links that connect them. Load balancing is a key component of commodity-service-based cloud computing, and there is a need for algorithms that capture this complexity yet can be easily implemented and applied to a wide range of load-balancing scenarios in data- and compute-intensive applications. In this paper, we propose to find the most efficient cloud resource through a co-operative, power-aware scheduled load-balancing solution to the cloud load-balancing problem. The algorithm combines the inherent efficiency of the centralized approach with the energy efficiency and fault tolerance of a distributed environment such as the cloud.
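The abstract does not give the algorithm itself. As a hedged illustration of the general idea of energy-aware node selection (not the paper's method; all node fields and weights are hypothetical), a scheduler might rank heterogeneous nodes by a combined load/energy score:

```python
# Hypothetical sketch of energy-aware load balancing: pick the node with the
# best combined score of current load and energy cost per unit of work.
nodes = [
    {"name": "n1", "load": 0.80, "watts_per_gflop": 0.50},
    {"name": "n2", "load": 0.30, "watts_per_gflop": 0.70},
    {"name": "n3", "load": 0.40, "watts_per_gflop": 0.40},
]

def score(node, w_load=0.5, w_energy=0.5):
    """Lower is better: weighted sum of current load and energy cost."""
    return w_load * node["load"] + w_energy * node["watts_per_gflop"]

def pick_node(nodes):
    return min(nodes, key=score)

best = pick_node(nodes)
print(best["name"])   # n3: lightly loaded and most energy-efficient
```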

  18. Adaptation and optimization of basic operations for an unstructured mesh CFD algorithm for computation on massively parallel accelerators

    Science.gov (United States)

    Bogdanov, P. B.; Gorobets, A. V.; Sukov, S. A.

    2013-08-01

    The design of efficient algorithms for large-scale gas dynamics computations with hybrid (heterogeneous) computing systems whose high performance relies on massively parallel accelerators is addressed. A high-order accurate finite volume algorithm with polynomial reconstruction on unstructured hybrid meshes is used to compute compressible gas flows in domains of complex geometry. The basic operations of the algorithm are implemented in detail for massively parallel accelerators, including AMD and NVIDIA graphics processing units (GPUs). Major optimization approaches and a computation transfer technique are covered. The underlying programming tool is the Open Computing Language (OpenCL) standard, which performs on accelerators of various architectures, both existing and emerging.

  19. Computer Sciences and Data Systems, volume 1

    Science.gov (United States)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  20. Integrated logistics management system for operation of machinery and equipment

    Directory of Open Access Journals (Sweden)

    Józef Frąś

    2014-09-01

    Full Text Available Background: The main issue in the operation of machinery and equipment, which is the subject of theoretical and empirical research, is to ensure high reliability and durability together with high-quality after-sales service. Quality of service can be achieved through planned maintenance activities supported by computer technology. The article presents the concept of an integrated logistics management system for the operation of machinery and equipment, in particular stationary transport equipment. It first emphasizes the importance of technological transport and storage systems in the modern manufacturing enterprise, and then sets out the objective and method of the research. The core of the article is the concept of an integrated logistics management system for the operation of stationary transport equipment; the authors present the results of implementing and operating the system in descriptive and graphic form. Methods: The purpose of this article is to present the concept of implementing an integrated logistics management system for the operation of stationary transport equipment, combining planning, service event logging, spare-parts warehouse management, and the accounting and recording of service costs. The paper presents a brainstorming-based analysis and evaluation of a new approach to the logistics management of stationary transport equipment. The authors take into account the specific conditions of equipment use and servicing, which have a significant impact on the time, place and cost of service. It should be noted that the developed system has been implemented, and its functionality and efficiency as a new IT tool for logistics management have been assessed. Results and conclusions: The paper presents a new

  1. Information Hiding based Trusted Computing System Design

    Science.gov (United States)

    2014-07-18

    and the environment where the system operates (electrical network frequency signals), and how to improve the trust in a wireless sensor network with...the system (silicon PUF) and the environment where the system operates (ENF signals). We also study how to improve the trust in a wireless sensor...Harbin Institute of Technology, Shenzhen, China, May 26, 2013. (Host: Prof. Aijiao Cui) 13) “Designing Trusted Energy-Efficient Circuits and Systems

  2. The Initial Development of a Computerized Operator Support System

    Energy Technology Data Exchange (ETDEWEB)

    Roger Lew; Ronald L Boring; Thomas A Ulrich; Ken Thomas

    2014-08-01

    A computerized operator support system (COSS) is a collection of resilient software technologies to assist operators in monitoring overall nuclear power plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. A prototype COSS for a chemical volume control system at a nuclear power plant has been developed in order to demonstrate the concept and provide a test bed for further research. The development process identified four underlying elements necessary for the prototype, which consist of a digital alarm system, computer-based procedures, piping and instrumentation diagram system representations, and a recommender module for mitigation actions. An operational prototype resides at the Idaho National Laboratory (INL) using the U.S. Department of Energy’s (DOE) Light Water Reactor Sustainability (LWRS) Human Systems Simulation Laboratory (HSSL). Several human-machine interface (HMI) considerations are identified and incorporated in the prototype during this initial round of development.

  3. Effective operator formalism for open quantum systems

    DEFF Research Database (Denmark)

    Reiter, Florentin; Sørensen, Anders Søndberg

    2012-01-01

    We present an effective operator formalism for open quantum systems. Employing perturbation theory and adiabatic elimination of excited states for a weakly driven system, we derive an effective master equation which reduces the evolution to the ground-state dynamics. The effective evolution...

  4. Hankel Operators and Gramians for Nonlinear Systems

    NARCIS (Netherlands)

    Gray, W. Steven; Scherpen, Jacquelien M.A.

    1998-01-01

    In the theory for continuous-time linear systems, the system Hankel operator plays an important role in a number of realization problems ranging from providing an abstract notion of state to yielding tests for state space minimality and algorithms for model reduction. But in the case of continuous-t

  5. Current and Future Flight Operating Systems

    Science.gov (United States)

    Cudmore, Alan

    2007-01-01

    This viewgraph presentation reviews the current real time operating system (RTOS) type in use with current flight systems. A new RTOS model is described, i.e. the process model. Included is a review of the challenges of migrating from the classic RTOS to the Process Model type.

  6. The Application of the Distribution Supervisory system of Computer in Substation of 500 KV

    Institute of Scientific and Technical Information of China (English)

    Qi,Xinbo; Chang,Wenping

    2005-01-01

    This paper puts forward a new kind of computer supervisory system: the distributed computer supervisory system. The system is organized as a dispersed, hierarchically layered control structure that is open and offers strong interoperability. Three years of practical experience with the system at Huajia (Henan) indicate that it is reliable, safe, real-time and economical.

  7. An operator's views on Fermilab's control system

    Science.gov (United States)

    Baddorf, Debra S.

    1986-06-01

    A Fermilab accelerator operator presents views and personal opinions on the control system there. The paper covers features contributing to ease of use and comprehension, as well as a few things that could be improved. Included are such hardware as the trackball and interrupt button, the touch sensitive TV screen, the color Lexidata display, and black and white and color hardcopy capabilities. It also covers the software such as the generic parameter page, the generic plot package, and prepared displays. The alarm system is discussed from an operations standpoint, and also the datalogging system.

  8. UNIVERSAL INTERFACE TO MULTIPLE OPERATIONS SYSTEMS

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1986-01-01

    Alternative ways to provide access to operations systems that maintain, test, and configure complex telephone networks are being explored. It is suggested that a universal interface that provides simultaneous access to multiple operations systems that execute in different hardware and software...... environments, can be provided by an architecture that is based on the separation of presentation issues from application issues and on a modular interface management system that consists of a virtual user interface, physical user interface, and interface agent. The interface functionality that is needed...

  9. Advanced smartgrids for distribution system operators

    CERN Document Server

    Boillot, Marc

    2014-01-01

    The dynamic of the Energy Transition is engaged in many region of the World. This is a real challenge for electric systems and a paradigm shift for existing distribution networks. With the help of "advanced" smart technologies, the Distribution System Operators will have a central role to integrate massively renewable generation, electric vehicle and demand response programs. Many projects are on-going to develop and assess advanced smart grids solutions, with already some lessons learnt. In the end, the Smart Grid is a mean for Distribution System Operators to ensure the quality and the secu

  11. Operator approach to linear control systems

    CERN Document Server

    Cheremensky, A

    1996-01-01

    Within the framework of the optimization problem for linear control systems with quadratic performance index (LQP), the operator approach allows the construction of a systems theory including a number of particular infinite-dimensional optimization problems with hardly visible concreteness. This approach yields interesting interpretations of these problems and more effective feedback design methods. This book is unique in its emphasis on developing methods for solving a sufficiently general LQP. Although this is complex material, the theory developed here is built on transparent and relatively simple principles, and readers with less experience in the field of operator theory will find enough material to give them a good overview of the current state of LQP theory and its applications. Audience: Graduate students and researchers in the fields of mathematical systems theory, operator theory, cybernetics, and control systems.

  12. Autonomous System Technologies for Resilient Airspace Operations

    Science.gov (United States)

    Houston, Vincent E.; Le Vie, Lisa R.

    2017-01-01

    Increasing autonomous systems within the aircraft cockpit begins with an effort to understand what autonomy is and to develop the technology that encompasses it. Autonomy allows an agent, human or machine, to act independently within a circumscribed set of goals, delegating responsibility to the agent(s) to achieve overall system objective(s). Increasingly Autonomous Systems (IAS) are the highly sophisticated progression of current automated systems toward full autonomy. Working in concert with humans, these types of technologies are expected to improve the safety, reliability, costs, and operational efficiency of aviation. IAS implementation is imminent, which makes it vital that such technologies are developed and perform properly with respect to cockpit operation efficiency and the management of air traffic and data-communication information. A prototype IAS agent has been developed that attempts to optimize the identification and distribution of "relevant" air traffic data for use by human crews during complex airspace operations.

  13. Rendezvous Facilities in a Distributed Computer System

    Institute of Scientific and Technical Information of China (English)

    Liao Xianzhi; Jin Lan

    1995-01-01

    The distributed computer system described in this paper is a set of computer nodes interconnected in an interconnection network via packet-switching interfaces. The nodes communicate with each other by means of message-passing protocols. This paper presents the implementation of rendezvous facilities as high-level primitives provided by a parallel programming language to support interprocess communication and synchronization.
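Rendezvous-style communication blocks both the sender and the receiver until the exchange completes. A minimal thread-based sketch of such a primitive (the class and method names are hypothetical, not the paper's actual interface):

```python
# Minimal rendezvous sketch: sender and receiver both block until the
# message hand-off completes (names hypothetical).
import queue
import threading

class Rendezvous:
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)   # holds the offered message
        self._ack = queue.Queue(maxsize=1)    # signals the hand-off back

    def send(self, msg):
        self._slot.put(msg)       # offer the message...
        self._ack.get()           # ...and block until the receiver took it

    def receive(self):
        msg = self._slot.get()    # block until a message is offered
        self._ack.put(None)       # release the sender
        return msg

rv = Rendezvous()
received = []

t = threading.Thread(target=lambda: received.append(rv.receive()))
t.start()
rv.send("hello")                  # returns only after the receiver has it
t.join()
print(received)                   # ['hello']
```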

  14. Computer simulation of tritium systems for fusion technology

    Energy Technology Data Exchange (ETDEWEB)

    Gabowitsch, E.; Spannagel, G. (Kernforschungszentrum Karlsruhe GmbH Institut fur Datenverarbeitung in der Technik Postfach 3640, D-7500 Karlsruhe 1 (DE))

    1989-09-01

    The KATRIM computer code is presented. It calculates key values of tritium systems, especially those related to complete fuel cycles. First, a deterministic model is discussed. Then, a stochastic model is presented based on dynamic systems with different dynamic states, each with its own system of equations. Such an approach allows the modeling of reactors with different degrees of availability and/or different operational strategies. Results of simulations for different availabilities, variable frequencies of interruptions in reactor operation, and changing tritium burnup in the plasma are presented.
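As a hedged illustration of the kind of balance equation such a fuel-cycle code integrates (this is a toy model, not KATRIM's actual equations; all rates are hypothetical), a single tritium inventory with breeding, burnup and radioactive decay can be stepped explicitly:

```python
import math

# Toy tritium-inventory balance (NOT the KATRIM model; rates hypothetical):
#   dI/dt = breeding_rate - burn_rate - lam * I
LAMBDA = math.log(2) / (12.32 * 365.25 * 24 * 3600)  # tritium decay const, 1/s

def simulate(breeding, burn, days, inventory=0.0, dt=3600.0):
    """Euler-integrate the inventory (grams) over `days` with hourly steps."""
    for _ in range(int(days * 24)):
        inventory += dt * (breeding - burn - LAMBDA * inventory)
    return inventory

# With breeding slightly above burnup the inventory grows nearly linearly,
# since radioactive decay is slow on a one-year timescale.
grams_per_s = 1.0 / 86400.0                  # 1 g/day, expressed per second
final = simulate(breeding=1.05 * grams_per_s, burn=grams_per_s, days=365)
print(round(final, 2))                       # grams after one year
```

A stochastic variant, as the abstract describes, would switch the rate parameters between dynamic states (e.g. reactor on/off) according to an availability model.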

  15. Data systems and computer science programs: Overview

    Science.gov (United States)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  16. Study of the asynchronous traction drive's operating modes by computer simulation. Part 1: Problem formulation and computer model

    Directory of Open Access Journals (Sweden)

    Pavel KOLPAHCHYAN

    2015-06-01

    Full Text Available In this paper, the problems arising in the design of electric locomotives with asynchronous traction drives (three-phase AC induction motors) are considered, including the debugging of control algorithms. The electrical circuit provides individual (per-axle) control of the traction motors, which makes it possible to disconnect or reconnect one or more axles automatically, taking the actual load into account. A further objective is to evaluate the locomotive's energy efficiency under various control algorithms. Another objective is to study the dynamic processes in various modes of electric locomotive operation (starting and acceleration, traction, coasting, wheel-slide protection, etc.). To solve these problems, Part 1 develops a complex computer model that represents the AC traction drive as a controlled electromechanical system. The methods used to model the traction-drive elements (traction motors, power converters, control systems), as well as the mechanical part and the wheel-rail contact, are described. The control system provides individual control of the traction motors. Part 2 of the paper focuses on the results of modeling the dynamic processes in various modes of electric locomotive operation.

  17. Embedded and real-time operating systems

    CERN Document Server

    Wang, K C

    2017-01-01

    This book covers the basic concepts and principles of operating systems, showing how to apply them to the design and implementation of complete operating systems for embedded and real-time systems. It includes all the foundational and background information on ARM architecture, ARM instructions and programming, toolchain for developing programs, virtual machines for software implementation and testing, program execution image, function call conventions, run-time stack usage and link C programs with assembly code. It describes the design and implementation of a complete OS for embedded systems in incremental steps, explaining the design principles and implementation techniques. For Symmetric Multiprocessing (SMP) embedded systems, the author examines the ARM MPcore processors, which include the SCU and GIC for interrupts routing and interprocessor communication and synchronization by Software Generated Interrupts (SGIs). Throughout the book, complete working sample systems demonstrate the design principles and...

  18. Naming in the Distributed Operating System ZGL

    Institute of Scientific and Technical Information of China (English)

    Xue Xing; Sun Zhongxiu

    1991-01-01

    In this paper, the naming scheme used in the heterogeneous distributed operating system ZGL is described and some of the representative techniques utilized in current distributed operating systems are examined. It is believed that partitioning the name space into many local name spaces and one global shared name space allows the ZGL system to satisfy each workstation's demand for local autonomy while still facilitating transparent resource sharing. By dividing the system into clusters and using a combined centralized-distributed naming mechanism, the system avoids both the bottleneck caused by a single centralized name server for the whole system and the performance degradation of a fully distributed scheme.
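A minimal sketch of the two-level resolution order the abstract describes, with hypothetical names and API: each workstation consults its own local name space first (local autonomy), then falls back to the single global shared space (transparent sharing):

```python
# Sketch of two-level name resolution in the spirit of ZGL's scheme
# (class, method and resource names are hypothetical).
class NameService:
    def __init__(self, global_space):
        self.global_space = global_space     # shared across all workstations
        self.local_space = {}                # autonomous, per-workstation

    def bind_local(self, name, resource):
        self.local_space[name] = resource

    def resolve(self, name):
        if name in self.local_space:         # local autonomy wins...
            return self.local_space[name]
        return self.global_space[name]       # ...else transparent sharing

shared = {"/printers/laser1": "node7:lp0"}
ws = NameService(shared)
ws.bind_local("/tmp/scratch", "local-disk")

print(ws.resolve("/tmp/scratch"))        # local-disk
print(ws.resolve("/printers/laser1"))    # node7:lp0
```

Clustering, as in ZGL, would add an intermediate level between the local and global spaces, with one name server per cluster.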

  19. Sandia Laboratories technical capabilities: computation systems

    Energy Technology Data Exchange (ETDEWEB)

    1977-12-01

    This report characterizes the computation systems capabilities at Sandia Laboratories. Selected applications of these capabilities are presented to illustrate the extent to which they can be applied in research and development programs. 9 figures.

  20. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty

    1996-03-01

    Full Text Available A model of a multiprocessor computing system based on transputers is described which makes it possible to evaluate its structural robustness (viability, survivability).
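One elementary ingredient of such an evaluation is checking whether the processor interconnect stays connected after node failures. A hedged sketch (the topology and function names are hypothetical, and the paper's transputer model is richer than a bare connectivity test):

```python
# Survivability check: do the surviving processors still form one
# connected component after the given nodes fail? (Topology hypothetical.)
def connected_after_failures(edges, nodes, failed):
    """BFS/DFS over surviving nodes; True if they form one component."""
    alive = set(nodes) - set(failed)
    if not alive:
        return False
    adj = {n: set() for n in alive}
    for a, b in edges:
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = set(), [next(iter(alive))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return seen == alive

# A 4-node ring tolerates any single node failure, but removing two
# non-adjacent nodes splits it into two components.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(connected_after_failures(ring, range(4), failed=[1]))      # True
print(connected_after_failures(ring, range(4), failed=[0, 2]))   # False
```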

  1. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  2. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  3. Cognitive context detection in UAS operators using eye-gaze patterns on computer screens

    Science.gov (United States)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment in which twenty participants performed pre-scripted UAS missions of three difficulty levels by interacting with two custom-designed graphical user interfaces (GUIs) displayed side by side. First, we compute several eye-gaze metrics, traditional eye movement metrics as well as newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification in our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
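
The cell-division step described above can be sketched in a few lines: divide the screen into a grid and count gaze samples per cell, which is the raw material for cell-based metrics. The screen size, grid shape, and gaze samples below are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of a cell-based gaze tally: split the screen into a
# rows x cols grid and count how many gaze samples land in each cell.

def cell_counts(gaze_points, screen_w, screen_h, rows, cols):
    counts = [[0] * cols for _ in range(rows)]
    for x, y in gaze_points:
        c = min(int(x / screen_w * cols), cols - 1)  # clamp edge samples
        r = min(int(y / screen_h * rows), rows - 1)
        counts[r][c] += 1
    return counts

# A 1920x1080 screen split into a 2x4 grid; a few synthetic gaze samples.
samples = [(100, 100), (150, 120), (1800, 1000), (960, 540)]
grid = cell_counts(samples, 1920, 1080, rows=2, cols=4)
for row in grid:
    print(row)
```

From such per-cell counts one can derive dwell proportions, transition counts between cells, and similar metrics of the kind the paper evaluates as workload classifiers.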

  4. Computer support for mechatronic control system design

    NARCIS (Netherlands)

    van Amerongen, J.; Coelingh, H.J.; de Vries, Theodorus J.A.

    2000-01-01

    This paper discusses the demands for proper tools for computer aided control system design of mechatronic systems and identifies a number of tasks in this design process. Real mechatronic design, involving input from specialists from varying disciplines, requires that the system can be represented

  5. Computer Systems for Distributed and Distance Learning.

    Science.gov (United States)

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  6. ADAPTING LINUX AS MOBILE OPERATING SYSTEM

    Directory of Open Access Journals (Sweden)

    Kaushik Velusamy

    2013-01-01

    Full Text Available In this fast-growing world, people are increasingly mobile; everything is fast, connected and highly secured. These demands on mobile devices have led to several features being added to mobile operating systems and their architectures. The development of the next-generation software platform for mobile phones, based on Linux, provides an enhanced user experience, power management, cloud support and openness in the design. In spite of many studies on Linux, investigations into the challenges and benefits of reusing and adapting the Linux kernel for mobile platforms are scarce. In this study, the architecture of Linux, its adaptations for a mobile operating system, requirements and analysis for Linux mobile phones, a comparison with Android, and solution technologies that satisfy the requirements of a Linux mobile operating system are analysed and discussed.

  7. Computer analyses for the design, operation and safety of new isotope production reactors: A technology status review

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, W.

    1990-01-01

    A review is presented of the currently available technologies for nuclear reactor analysis by computer. An important distinction is made between traditional computer calculation and advanced computer simulation. Simulation needs are defined to support the design, operation, maintenance and safety of isotope production reactors. Existing methods of computer analysis are categorized according to the type of computer involved in their execution: micro, mini, mainframe and supercomputers. Both general-purpose and special-purpose computers are discussed. Major computer codes are described, with regard to their use in analyzing isotope production reactors. It has been determined in this review that conventional systems codes (TRAC, RELAP5, RETRAN, etc.) cannot meet four essential conditions for viable reactor simulation: simulation fidelity, on-line interactive operation with convenient graphics, high simulation speed, and low cost. These conditions can be met by special-purpose computers (such as the AD100 of ADI), which are specifically designed for high-speed simulation of complex systems. The greatest shortcoming of existing systems codes (TRAC, RELAP5) is the mismatch between their very high computational effort and low simulation fidelity. The drift-flux formulation (HIPA) is a viable alternative to the complicated two-fluid model. No existing computer code can accommodate all the important processes in the core geometry of isotope production reactors. Experiments (heat transfer measurements) are needed to provide the necessary correlations. It is important for the nuclear community in government, industry and universities to begin to take advantage of modern simulation technologies and equipment. 41 refs.

  8. Nuclearity Related Properties in Operator Systems

    CERN Document Server

    Kavruk, Ali Samil

    2011-01-01

    Some recent research on tensor products of operator systems and the ensuing nuclearity properties in this setting has raised many stability problems. In this paper we examine the preservation of these nuclearity properties, including exactness, local liftability and the double commutant expectation property, under basic algebraic operations such as quotients, duality, coproducts and tensor products. We show that, in the finite-dimensional case, exactness and the lifting property are dual pairs; that is, an operator system $S$ is exact if and only if the dual operator system $S^d$ has the lifting property. Moreover, the lifting property is preserved under quotients by null subspaces. Again in the finite-dimensional case we prove that every operator system has the k-lifting property, in the sense that whenever $f: S \to A/I$ is a ucp map, where $A$ is a C*-algebra and $I$ is an ideal, then $f$ possesses a unital k-positive lift on $A$, for every $k$. This property provides a novel proof of a classical result of Smith and Ward on the preserva...
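
The finite-dimensional duality stated in the abstract can be written compactly (here $S^d$ denotes the operator-system dual of $S$):

```latex
% Finite-dimensional exactness/lifting duality, as stated in the abstract.
\[
\dim S < \infty \;\implies\;
\bigl( S \text{ is exact} \iff S^{d} \text{ has the lifting property} \bigr).
\]
```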

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  10. A development environment for operational concepts and systems engineering analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Raybourn, Elaine Marie; Senglaub, Michael E.

    2004-03-01

    The work reported in this document involves a development effort to provide combat commanders and systems engineers with a capability to explore and optimize system concepts that include operational concepts as part of the design effort. An infrastructure and analytic framework has been designed and partially developed that meets a gap in systems engineering design for combat related complex systems. The system consists of three major components: The first component consists of a design environment that permits the combat commander to perform 'what-if' types of analyses in which parts of a course of action (COA) can be automated by generic system constructs. The second component consists of suites of optimization tools designed to integrate into the analytical architecture to explore the massive design space of an integrated design and operational space. These optimization tools have been selected for their utility in requirements development and operational concept development. The third component involves the design of a modeling paradigm for the complex system that takes advantage of functional definitions and the coupled state space representations, generic measures of effectiveness and performance, and a number of modeling constructs to maximize the efficiency of computer simulations. The system architecture has been developed to allow for a future extension in which the operational concept development aspects can be performed in a co-evolutionary process to ensure the most robust designs may be gleaned from the design space(s).

  11. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    Energy Technology Data Exchange (ETDEWEB)

    Frankel, R.S.

    1995-12-31

    The Relativistic Heavy Ion Collider (RHIC), under construction at Brookhaven National Laboratory, requires an extensive access control system to protect personnel from radiation, oxygen deficiency and electrical hazards. In addition, the complicated nature of operating the Collider as part of a complex of other accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised that permit the use of modern computer and interconnection technology for safety-critical applications while preserving and enhancing tried-and-proven protection methods. In addition, a set of guidelines regarding required performance for accelerator safety systems and a handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  12. Operational development of small plant growth systems

    Science.gov (United States)

    Scheld, H. W.; Magnuson, J. W.; Sauer, R. L.

    1986-01-01

    The results of a study undertaken in the first phase of an empirical effort to develop small plant growth chambers for the production of salad-type vegetables on the space shuttle or space station are discussed. The overall effort is envisioned as providing the underpinning of practical experience in handling plant systems in space, which will provide major support for future efforts in the planning, design, and construction of plant-based (phytomechanical) systems for supporting human habitation in space. The assumptions underlying the effort hold that large-scale phytomechanical habitability support systems for future space stations must evolve from the simple to the complex. The highly complex final systems will be developed from the accumulated experience and data gathered from repetitive tests and trials of fragments or subsystems of the whole in an operational mode. Meanwhile, these developing system components will serve a useful operational function in providing psychological support and diversion for the crews.

  13. Operational experience with the CEBAF control system

    Energy Technology Data Exchange (ETDEWEB)

    Hovater, C.; Chowdhary, M.; Karn, J.; Tiefenback, M.; Zeijts, J. van; Watson, W.

    1996-10-01

    The CEBAF accelerator at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) successfully began its experimental nuclear physics program in November of 1995 and has since surpassed predicted machine availability. Part of this success can be attributed to the use of the EPICS (Experimental Physics and Industrial Control System) toolkit. The CEBAF control system is one of the largest accelerator control systems now operating. It controls approximately 338 SRF cavities, 2,300 magnets, 500 beam position monitors and other accelerator devices, such as gun hardware and other beam monitoring devices. All told, the system must be able to access over 125,000 database records. The system has been well received by both operators and hardware designers. The EPICS utilities have made the task of troubleshooting systems easier. The graphical and text-based creation tools have allowed operators to custom-build control screens. In addition, the ability to integrate EPICS with other software packages, such as Tcl/Tk, has allowed physicists to quickly prototype high-level application programs and to provide GUI front ends for command-line-driven tools. Specific examples of control system applications are presented in the areas of energy and orbit control, cavity tuning and accelerator tune-up diagnostics.

  14. Differential Characteristics and Methods of Operation Underlying CAI/CMI Drill and Practice Systems.

    Science.gov (United States)

    Hativa, Nira

    1988-01-01

    Describes computer systems that combine drill and practice instruction with computer-managed instruction (CMI) and identifies system characteristics in four categories: (1) hardware, (2) software, (3) management systems, and (4) methods of daily operation. Topics discussed include microcomputer networks, graphics, feedback, degree of learner…

  15. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    Science.gov (United States)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary-path estimate and the ANC filter are very long, which increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from several disadvantages, such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is much lower than that of the conventional FXLMS algorithm. It is reduced further by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination, yielding the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations of both proposed partitioned block ANC algorithms demonstrate their accuracy compared to the time-domain FXLMS algorithm.
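
To make the baseline concrete, the sketch below is a minimal time-domain FXLMS loop, the conventional algorithm whose per-sample cost grows with the filter length and thereby motivates the paper's frequency-domain partitioned variants. The secondary path, signals, and step size are toy assumptions, not values from the paper.

```python
# Minimal time-domain FXLMS: filter the reference through the secondary-path
# estimate ("filtered-x"), then do an LMS update with the filtered reference.
import math

L = 16                       # adaptive ANC filter length
s_hat = [1.0, 0.5]           # assumed secondary-path estimate (toy FIR)
w = [0.0] * L                # ANC filter weights
x_buf = [0.0] * L            # reference-signal history
fx_buf = [0.0] * L           # filtered-reference history
y_buf = [0.0] * len(s_hat)   # anti-noise history (for the secondary path)
mu = 0.01                    # LMS step size

def anc_output(x):
    """Push a reference sample, return the anti-noise output y(n)."""
    x_buf.insert(0, x); x_buf.pop()
    fx = sum(s_hat[k] * x_buf[k] for k in range(len(s_hat)))
    fx_buf.insert(0, fx); fx_buf.pop()
    return sum(w[k] * x_buf[k] for k in range(L))

def lms_update(e):
    """Steepest-descent update using the *filtered* reference."""
    for k in range(L):
        w[k] -= mu * e * fx_buf[k]

errs = []
for n in range(4000):
    x = math.sin(0.3 * n)                 # reference pickup (toy tone)
    d = 0.9 * math.sin(0.3 * n - 0.7)     # primary noise at the error mic
    y = anc_output(x)
    y_buf.insert(0, y); y_buf.pop()
    anti = sum(s_hat[k] * y_buf[k] for k in range(len(s_hat)))
    e = d + anti                          # residual the error mic measures
    lms_update(e)
    errs.append(abs(e))
```

Each sample costs on the order of L multiplications for the convolution plus L for the update; replacing those long convolutions with FFTs over filter partitions is precisely what the FPBFXLMS scheme in the paper does.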

  16. Information systems and computing technology

    CERN Document Server

    Zhang, Lei

    2013-01-01

    Invited papers: Incorporating the multi-cross-sectional temporal effect in Geographically Weighted Logit Regression (K. Wu, B. Liu, B. Huang & Z. Lei); One shot learning human actions recognition using key poses (W.H. Zou, S.G. Li, Z. Lei & N. Dai); Band grouping pansharpening for WorldView-2 satellite images (X. Li); Research on GIS based haze trajectory data analysis system (Y. Wang, J. Chen, J. Shu & X. Wang). Regular papers: A warning model of systemic financial risks (W. Xu & Q. Wang); Research on smart mobile phone user experience with grounded theory (J.P. Wan & Y.H. Zhu); The software reliability analysis based on...

  17. Computational approaches for systems metabolomics.

    Science.gov (United States)

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.
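
As a toy illustration of the network-based analyses mentioned above, the sketch below builds a metabolite association network by thresholding pairwise Pearson correlations. The metabolite profiles and the threshold are invented for illustration, not taken from the review.

```python
# Toy metabolite correlation network: connect two metabolites whenever the
# absolute Pearson correlation of their abundance profiles exceeds a cutoff.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Columns: samples; rows: metabolite abundance profiles (synthetic).
profiles = {
    "glucose":  [1.0, 2.0, 3.0, 4.0],
    "lactate":  [1.1, 2.1, 2.9, 4.2],   # tracks glucose
    "cysteine": [4.0, 1.0, 3.5, 0.5],   # unrelated
}

edges = []
names = list(profiles)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = pearson(profiles[names[i]], profiles[names[j]])
        if abs(r) > 0.8:                 # assumed edge threshold
            edges.append((names[i], names[j], round(r, 3)))
print(edges)
```

Real analyses typically use partial correlations (Gaussian graphical models) rather than raw correlations to distinguish direct from indirect associations, which is one of the pitfalls the review discusses.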

  18. Review on open source operating systems for internet of things

    Science.gov (United States)

    Wang, Zhengmin; Li, Wei; Dong, Huiliang

    2017-08-01

    Internet of Things (IoT) refers to an environment in which everyday devices become smart and connected. The Internet of Things is growing rapidly; it is an integrated system of uniquely identifiable communicating devices which exchange information over a connected network to provide extensive services. IoT devices have very limited memory, computational power, and power supply, so traditional operating systems (OSs) cannot meet the needs of IoT systems. In this paper, we therefore analyze the challenges of IoT OSs and survey applicable open source OSs.

  19. IBM PC/IX operating system evaluation plan

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Granier, Martin; Hall, Philip P.; Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation plan for the IBM PC/IX Operating System designed for IBM PC/XT computers is discussed. The evaluation plan covers the areas of performance measurement and evaluation, software facilities available, man-machine interface considerations, networking, and the suitability of PC/IX as a development environment within the University of Southwestern Louisiana NASA PC Research and Development project. In order to compare and evaluate the PC/IX system, comparisons with other available UNIX-based systems are also included.

  20. Fermionic quantum operations: a computational framework I. Basic invariance properties

    OpenAIRE

    Lakos, Gyula

    2015-01-01

    The objective of this series of papers is to recover information regarding the behaviour of FQ operations in the case $n=2$, and FQ conform-operations in the case $n=3$. In this first part we study how the basic invariance properties of FQ operations ($n=2$) are reflected in their formal power series expansions.