WorldWideScience

Sample records for mainframe computers

  1. Computing - moving away from the mainframe

    International Nuclear Information System (INIS)

    Williams, David

    1994-01-01

    After some 20 years of valiant service providing the mainstay computing services, machines based on a mainframe architecture are beginning to show signs of age. With the advent of personal computers, less expensive hardware and increased networking, mainframe systems providing a range of services for hundreds, even thousands, of users are being discarded in favour of distributed computing solutions. During these twenty years, the traditional mainframe provided the inherent integration or 'glue' for major computing environments. This was particularly the case with high energy physics laboratories, handling enormous quantities of data. At CERN, the VM system (CERNVM) was, and still is, an integral part of CERN's day-to-day working environment, with some 15,000 tape mounts per week and a thousand logged-on users at peak periods

  2. (Some) Computer Futures: Mainframes.

    Science.gov (United States)

    Joseph, Earl C.

    Possible futures for the world of mainframe computers can be forecast through studies identifying forces of change and their impact on current trends. Some new prospects for the future have been generated by advances in information technology; for example, recent United States successes in applied artificial intelligence (AI) have created new…

  3. Development of the JFT-2M data analysis software system on the mainframe computer

    International Nuclear Information System (INIS)

    Matsuda, Toshiaki; Amagai, Akira; Suda, Shuji; Maemura, Katsumi; Hata, Ken-ichiro.

    1990-11-01

    We developed a software system on the FACOM mainframe computer to analyze JFT-2M experimental data archived by the JFT-2M data acquisition system. This reduces and distributes the CPU load of the data acquisition system, and allows JFT-2M experimental data to be analyzed on the mainframe with complicated computational codes operating on raw data, such as equilibrium calculation and transport analysis, and with useful software packages like the SAS statistics package. (author)

  4. Integrating Mainframe Data Bases on a Microcomputer

    OpenAIRE

    Marciniak, Thomas A.

    1985-01-01

    Microcomputers support user-friendly software for interrogating their resident data bases. Many medical data bases currently consist of files on less accessible mainframe computers with more limited inquiry capabilities. We discuss the transferring and integrating of mainframe data into microcomputer data base systems in one medical environment.

  5. The role of the mainframe terminated: mainframe versus workstation

    CERN Document Server

    Williams, D O

    1991-01-01

    I. What mainframes? - The surgeon-general has determined that you shall treat all costs with care (continental effects, discounts assumed, next month's or last month's prices, optimism of the reporter). II. Typical mainframe hardware. III. Typical mainframe software. IV. What workstations? VI. Typical workstation hardware. VII. Typical workstation software. VIII. Titan vs PDP-7s. IX. Historic answer. X. Amdahl's Law....
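
    Since the outline closes with Amdahl's Law, it may help to restate the formula in its standard form (not quoted from the talk itself): if a fraction p of a workload can be sped up by a factor s, the overall speedup is bounded by

      S(s) = \frac{1}{(1 - p) + p/s}, \qquad \lim_{s \to \infty} S(s) = \frac{1}{1 - p},

    so the serial fraction (1 - p) limits what any number of workstations or processors can deliver.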

  6. Advancement of the state system of accounting from mainframe to personal computer (PC) technology

    International Nuclear Information System (INIS)

    Proco, G.; Nardi, J.

    1999-01-01

    The advancement of the U.S. government's state system of accounting from a mainframe computer to a personal computer (PC) has been successfully completed. The accounting system, from 1965 until 1995 a mainframe application, was replaced in September 1995 by an accounting system employing local area network (LAN) capabilities and other state-of-the-art characteristics. The system is called the Nuclear Materials Management and Safeguards System (NMMSS), tracking nuclear material activities and providing accounting reports for a variety of government and private users. The uses of the system include not only the tracking of nuclear materials for international and domestic safeguards purposes but also the facilitation of the government's resource management. The system was converted to PC hardware and fourth generation software to improve upon the mainframe system. The change was motivated by the desire to have a system amenable to frequent modifications, to improve upon services to users and to reduce increasing operating costs. Based on two years of operating the new system, it is clear that these objectives were met. Future changes to the system are inevitable and the national system of accounting for nuclear materials has the technology base to meet the challenges with proven capability. (author)

  7. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers with the Linux operating system, and application servers. The web application is intended for administrators of these systems, as an aid to better understanding the current state, load, and operation of the individual components of the server systems.

  8. What on earth is a mainframe? an introduction to IBM zSeries mainframes and z/OS operating systems for total beginners

    CERN Document Server

    Stephens, David

    2008-01-01

    Looking for a "Mainframes for Beginners" book? Need to learn about z/OS fast? Then this is the book you need. This is the perfect introduction to IBM System z Mainframes and z/OS. Avoiding technical jargon, it gives you the basic facts in clear, light-hearted, entertaining English. You'll quickly learn what Mainframes are, what they do, what runs on them, and terms and terminology you need to speak Mainframe-ese. But it's not all technical. There's also invaluable information about the people that work on Mainframes, Mainframe management issues, new Mainframe trends, and other facts that don't seem to be written down anywhere else. What On Earth is a Mainframe is the closest you'll get to a "Mainframes for Dummies" book. Programmers, managers, recruitment consultants, and industry commentators will all find this book their new best friend when trying to understand the Mainframe world.

  9. COMET: A System for Micro, Mini, and Mainframe Environments

    OpenAIRE

    O'Neill, Pat; Volkert, J. Jay; Koop, Gerald O.

    1983-01-01

    The exploding technology in micro and personal computers has stimulated knowledgeable occupational health professionals to examine their potential applications in their own work. Commercially available health surveillance systems are currently being offered in large minicomputers or mainframes. Does the revolution in hardware technology now mean that a comprehensive occupational health system can be supported by a small inexpensive computer? Such a machine-independent information system has b...

  10. From micro to mainframe. A practical approach to perinatal data processing.

    Science.gov (United States)

    Yeh, S Y; Lincoln, T

    1985-04-01

    A new, practical approach to perinatal data processing for a large obstetric population is described. This was done with a microcomputer for data entry and a mainframe computer for data reduction. The Screen Oriented Data Access (SODA) program was used to generate the data entry form and to input data into the Apple II Plus computer. Data were stored on diskettes and transmitted through a modem and telephone line to the IBM 370/168 computer. The Statistical Analysis System (SAS) program was used for statistical analyses and report generation. This approach was found to be most practical, flexible, and economical.

  11. An ORIGEN-2 update for PCs and mainframes

    International Nuclear Information System (INIS)

    Ludwig, S.B.

    1992-01-01

    This paper reports that an updated version of the ORIGEN2 code package has been prepared by Oak Ridge National Laboratory. ORIGEN2 is used extensively by the DOE Office of Civilian Radioactive Waste Management (OCRWM) and its contractors to determine the characteristics of spent fuel and high-level radioactive waste due to irradiation, decay, and processing. Included in this update are revised ORIGEN2 cross-section libraries for standard- and extended-burnup PWRs and BWRs. This update also includes improvements to the ORIGEN2 computer code (designated as ORIGEN2, Version 2.1 8-1-91 release). This version of ORIGEN2 provides a single source code that may be executed on both mainframes and 80386 or 80486 PCs, effectively smashing the 640 KB barrier that limited previous PC implementations

  12. ID-1 mass storage system for mainframe by using FDDI network

    International Nuclear Information System (INIS)

    Morita, Y.; Fujii, H.; Inoue, E.; Kodama, H.; Manabe, A.; Miyamoto, A.; Nomachi, M.; Watase, Y.; Yasu, Y.

    1994-01-01

    The authors have developed an ID-1 mass storage system as a distributed data server for Fujitsu mainframe computers. The system consists of a SONY ID-1 recorder DIR-1000, a tape robot system DMS-24 and a SCSI-II interface DFC-1500, which are connected to a Sparc Station 10 with an FDDI interface. The maximum speed of 7.5 Mbytes/sec is achieved for data transfer between Sparc Station 10 memory and the DIR-1000 with a buffer size of 1 Mbyte. The system has been used successfully since last October to migrate more than 1 Tbyte of data

  13. IBM mainframe security beyond the basics : a practical guide from a z/OS and RACF perspective

    CERN Document Server

    Dattani, Dinesh D

    2013-01-01

    Rather than rehashing basic information (such as command syntax) already available in other publications, this book focuses on important security and audit issues, business best practices, and compliance, discussing the important issues in IBM mainframe security. Mainframes are the backbone of most large IT organizations; security cannot be left to chance. With very little training available to the younger crowd, and older, more experienced personnel retiring or close to retiring, there is a need for mainframe security skills at the senior level. Based on real-life experiences, issues, and soluti

  14. '95 computer system operation project

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-12-01

    This report describes the overall project work related to the operation of mainframe computers, the management of nuclear computer codes and the project of nuclear computer code conversion. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  15. '95 computer system operation project

    International Nuclear Information System (INIS)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung

    1995-12-01

    This report describes the overall project work related to the operation of mainframe computers, the management of nuclear computer codes and the project of nuclear computer code conversion. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  16. An ORIGEN2 update for PCs and mainframes

    International Nuclear Information System (INIS)

    Ludwig, S.B.

    1991-01-01

    The ORIGEN2 computer code was developed by Oak Ridge National Laboratory (ORNL) in the late 1970s and made available to users worldwide in 1980 through the Radiation Shielding Information Center (RSIC). The purpose of ORIGEN2 is to calculate the buildup, decay, and processing of radioactive materials. Since 1980, more than 500 users have acquired the ORIGEN2 code from RSIC. ORIGEN2 is often the starting point for many analyses involving the shielding, criticality, safety, performance assessment, accident analysis, risk assessment, and design of reactors, transportation casks, reprocessing plants, and waste disposal facilities. An updated version of the ORIGEN2 code package has been prepared by ORNL. This new version of the ORIGEN2 package (called ORIGEN2, Version 2.1 (8-1-91)) includes revised ORIGEN2 cross-section libraries for standard- and extended-burnup pressurized-water and boiling-water reactors (PWRs and BWRs). These libraries were used in the preparation of Revision 1 of the Characteristic Data Base (CDB). The ORIGEN2 code has also been revised to include numerous maintenance fixes and to consolidate various hardware versions into a single source code capable of execution on both 80386/80486 PCs as well as most mainframe computers; these improvements are part of Version 2.1. 6 refs., 2 tabs
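
    As background (standard decay theory, not text from the report), the buildup-and-decay problems ORIGEN2 integrates are coupled first-order equations of the Bateman form; for a simplified two-member chain with decay constants \lambda_1 and \lambda_2,

      \frac{dN_1}{dt} = -\lambda_1 N_1, \qquad \frac{dN_2}{dt} = \lambda_1 N_1 - \lambda_2 N_2,

      N_2(t) = \frac{\lambda_1 N_1(0)}{\lambda_2 - \lambda_1}\left(e^{-\lambda_1 t} - e^{-\lambda_2 t}\right) + N_2(0)\,e^{-\lambda_2 t}.

    The full ORIGEN2 system also carries neutron-induced transmutation and fission terms and is solved numerically for hundreds of nuclides, which is why a single source code usable on both PCs and mainframes is of practical interest.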

  17. "Shift-Betel": A (very) distributed mainframe

    International Nuclear Information System (INIS)

    Segal, B.; Martin, O.; Hassine, F.; Hemmer, F.; Jouanigot, J.M.

    1994-01-01

    Over the last four years, CERN has progressively converted its central batch production facilities from classic mainframe platforms (Cray XMP, IBM ESA, Vax 9000) to distributed RISC-based facilities, which have now attained a very large size. Both a CPU-intensive system ("CSF", the Central Simulation Facility) and an I/O-intensive system ("SHIFT", the Scaleable Heterogeneous Integrated Facility) have been developed, plus a distributed data management subsystem allowing seamless access to CERN's central tape store and to large amounts of economical disk space. The full system is known as "CORE", the Centrally Operated Risc Environment; at the time of writing CORE comprises around 2000 CERN Units of Computing (about 8000 MIPs) and over a TeraByte of online disk space. This distributed system is connected using standard networking technologies (IP protocols over Ethernet, FDDI and UltraNet), which until quite recently were only implemented at sufficiently high speed in the Local Area

  18. Conversion of a mainframe simulation for maintenance performance to a PC environment

    International Nuclear Information System (INIS)

    Gertman, D.I.

    1991-01-01

    A computer-based simulation capable of generating human error probabilities (HEPs) for maintenance activities is presented. The HEPs are suitable for use in probabilistic risk assessments (PRAs) and are an important source of information for data management systems such as NUCLARR, the Nuclear Computerized Library for Assessing Reactor Reliability (1). The basic computer model MAPPS, the Maintenance Personnel Performance Simulation, has been developed and validated by the US NRC in order to improve maintenance practices and procedures at nuclear power plants. This model, validated previously, has now been implemented and improved in a PC environment and renamed MicroMAPPS. The model is stochastically based, able to simulate the performance of 2- to 15-person crews for a variety of maintenance conditions. These conditions include aspects of crew actions as potentially influenced by the task, the environment, or characteristics of the personnel involved. The nature of the software code makes it particularly appropriate for determining changes in HEP rates due to fluctuations in important task, environment, or personnel parameters. The presentation gives a brief review of the mainframe version of the code and a summary of the enhancements, which dramatically change the nature of the human-computer interaction

  19. Different perspectives on the use of personal computers for technical analyses

    International Nuclear Information System (INIS)

    Libby, R.A.; Doherty, A.L.

    1986-01-01

    Personal computers (PCs) have widespread availability and use in many technical environments. The machines may have initially been justified for use as word processors or for data base management, but many technical applications are being performed, and often the computer codes used in these technical analyses have been moved from large mainframe machines. The general feeling in the user community is that the free computer time on these machines justifies moving as many applications as possible from the large computer systems. Many of these PC applications cannot be justified if the total cost of using microcomputers is considered. A Hanford-wide local area network (LAN) is being established which allows individual PCs to be used as terminals to connect to mainframe computers at high data transfer rates (9600 baud). This system allows fast, easy connection to a variety of different computers with a few keystrokes. The LAN eliminates the problem of low-speed communication with mainframe computers and makes operation on the mainframes as simple as operation on the host PC itself

  20. COSYMA, a mainframe and PC program package for assessing the consequences of hypothetical accidents

    International Nuclear Information System (INIS)

    Jones, J.A.; Hasemann, I.; Steen, J. van der

    1996-01-01

    COSYMA (Code System from MARIA) is a program package for assessing the off-site consequences of accidental releases of radioactive material to atmosphere, developed as part of the European Commission's MARIA programme (Methods for Assessing the Radiological Impact of Accidents). COSYMA represents a fusion of ideas and modules from the Forschungszentrum Karlsruhe program system UFOMOD, the National Radiological Protection Board program MARC and new model developments together with data libraries from other MARIA contractors. Mainframe and PC versions of COSYMA are distributed to interested users by arrangement with the European Commission. The system was first released in 1990, and has subsequently been updated. The program system uses independent modules for the different parts of the analysis, and so permits a flexible problem-oriented application to different sites, source terms, emergency plans and the needs of users in the various parts of Europe. Users of the mainframe system can choose the most appropriate combination of modules for their particular application. The PC version includes a user interface which selects the required modules for the endpoints specified by the user. This paper describes the structure of the mainframe and PC versions of COSYMA, and summarises the models included in them. The mainframe or PC versions of COSYMA have been distributed to more than 100 organisations both inside and outside the European Union, and have been used in a wide variety of applications. These range from full PRA level 3 analyses of nuclear power and research reactors to investigations on advanced containment concepts and the preplanning of off-site emergency actions. Some of the experiences from these applications are described in the paper. An international COSYMA user group has been established to stimulate communication between the owners, developers and users of the code and to serve as a reference point for questions relating to the code. The group produces

  1. Classic Multi-Configuration-Dirac-Fock and Hartree-Fock-Relativistic methods integrated into a program package for the RAL-IBM mainframe with automatic comparative output

    International Nuclear Information System (INIS)

    Cowan, R.D.; Grant, I.P.; Fawcett, B.C.; Rose, S.J.

    1985-11-01

    A Multi-Configuration-Dirac-Fock (MCDF) computer program is adapted to interface with the Hartree-Fock-Relativistic (HFR) program for the RAL IBM mainframe computer. The two codes are integrated into a package which includes the Zeeman Laboratory Slater parameter optimisation routines as well as new RAL routines to further process the HFR and MCDF output. A description of the adaptations to MCDF and new output extensions is included in this report, and details are given regarding HFR FORTRAN subroutines, and lists of Job Control Language (JCL) files for the complete package. (author)

  2. Research on cloud computing solutions

    OpenAIRE

    Liudvikas Kaklauskas; Vaida Zdanytė

    2015-01-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang adapted six phases of computing paradigms, from dummy terminals/mainframes, to PCs, networking computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, ...

  3. Evolution of Cloud Computing and Enabling Technologies

    OpenAIRE

    Rabi Prasad Padhy; Manas Ranjan Patra

    2012-01-01

    We present an overview of the history of forecasting software over the past 25 years, concentrating especially on the interaction between computing and technologies from mainframe computing to cloud computing. Cloud computing is the latest one. For delivering the vision of various computing models, this paper briefly explains the architecture, characteristics, advantages, applications and issues of various computing models like PC computing, internet computing etc. and related technologie...

  4. Computer ray tracing speeds.

    Science.gov (United States)

    Robb, P; Pawlowski, B

    1990-05-01

    The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to supercomputers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark which allows the ray trace speed to be estimated using LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast as or faster than mainframe computers in compute-bound situations.

  5. Computer science a concise introduction

    CERN Document Server

    Sinclair, Ian

    2014-01-01

    Computer Science: A Concise Introduction covers the fundamentals of computer science. The book describes micro-, mini-, and mainframe computers and their uses; the ranges and types of computers and peripherals currently available; applications to numerical computation; and commercial data processing and industrial control processes. The functions of data preparation, data control, computer operations, applications programming, systems analysis and design, database administration, and network control are also encompassed. The book then discusses batch, on-line, and real-time systems; the basic

  6. AI tools in computer based problem solving

    Science.gov (United States)

    Beane, Arthur J.

    1988-01-01

    The use of computers to solve value oriented, deterministic, algorithmic problems has evolved a structured life cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low cost, high performance 32 bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples in both development and deployment of applications involving a blending of AI and traditional techniques are given.

  7. The reactor physics computer programs in PC's era

    International Nuclear Information System (INIS)

    Nainer, O.; Serghiuta, D.

    1995-01-01

    The main objective of reactor physics analysis is the evaluation of flux and power distribution over the reactor core. For CANDU reactors, sophisticated computer programs such as FMDP and RFSP were developed 20 years ago for mainframe computers. These programs were adapted to work on workstations with UNIX or DOS, but they lack a feature that could improve their use: user friendliness. To use these programs, the user needs to deal with a great amount of information contained in sophisticated files. Modifying a model is a great challenge: first of all, it is necessary to bear in mind all the geometrical dimensions and, accordingly, to modify the core model to match the new requirements, and all this must be done in a line input file. For a DOS platform, using an average-performance PC system, would it be possible to represent and modify all the geometrical and physical parameters in a meaningful way, on screen, using an intuitive graphic user interface; to reduce the real time elapsed in performing complex fuel-management analysis 'at home'; and to avoid rewriting the mainframe version of the program? The authors' answer is a fuel-management computer package operating on a PC that is 3 times faster than the CDC-Cyber 830 mainframe version on a 486DX/33MHz/8MB RAM system, or 20 times faster on a Pentium PC. (author). 5 refs., 1 tab., 5 figs

  8. Computing Services and Assured Computing

    Science.gov (United States)

    2006-05-01

    [Briefing-slide excerpt] "... fighters' ability to execute the mission." Computing Services: we run IT systems that provide medical care, pay the warfighters, and manage maintenance; the environment spans ... users, 1,400 applications, 18 facilities, 180 software vendors, 18,000+ copies of executive software products, and virtually every type of mainframe and...

  9. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  10. Socio-Technical Implementation: Socio-technical Systems in the Context of Ubiquitous Computing, Ambient Intelligence, Embodied Virtuality, and the Internet of Things

    NARCIS (Netherlands)

    Nijholt, Antinus; Whitworth, B.; de Moor, A.

    2009-01-01

    In which computer science world do we design and implement our socio-technical systems? About every five or ten years new computer and interaction paradigms are introduced. We had the mainframe computers, the various generations of computers, including the Japanese fifth generation computers, the

  11. Evaluation of mini super computers for nuclear design applications

    International Nuclear Information System (INIS)

    Altomare, S.; Baradari, F.

    1987-01-01

    The evolution of the mini super computers will force changes from the current environment of performing nuclear design calculations on mainframe computers (such as a CRAY) to mini super computers. This change will come about for a number of reasons. First, the mini super computers currently available in the marketplace offer power and speed comparable to mainframes and can provide the capability to support highly computer intensive calculations. Second, the equipment is physically smaller and can easily be installed and operated without extensive investments in facilities and operations support. Third, the computer capacity can be acquired with as much memory, disk, and tape capacity as may be needed. Another reason is that the performance/cost ratio has increased drastically as hardware costs have decreased. A study was conducted at the Westinghouse Commercial Nuclear Fuel Division (CNFD) to evaluate the mini super computers for use in nuclear core design. As a result of this evaluation, Westinghouse CNFD is offering a combined hardware/software technology transfer package for core design. This package provides the utility designer with a totally dedicated mini super computer comparable in speed to the CRAY 1S with sufficient capacity for a sizable design group to perform the engineering activities related to nuclear core design and operations support. This also assures the utility of being totally compatible with the CNFD design codes, thus assuring total update compatibility

  12. Distributed computing and nuclear reactor analysis

    International Nuclear Information System (INIS)

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-01-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations

  13. Cloud Computing: A study of cloud architecture and its patterns

    OpenAIRE

    Mandeep Handa,; Shriya Sharma

    2015-01-01

    Cloud computing is a general term for anything that involves delivering hosted services over the Internet. Cloud computing is a paradigm shift following the shift from mainframe to client–server in the early 1980s. Cloud computing can be defined as accessing third party software and services on web and paying as per usage. It facilitates scalability and virtualized resources over Internet as a service providing cost effective and scalable solution to customers. Cloud computing has...

  14. Cloud Computing and the Power to Choose

    Science.gov (United States)

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  15. Migration of nuclear criticality safety software from a mainframe to a workstation environment

    International Nuclear Information System (INIS)

    Bowie, L.J.; Robinson, R.C.; Cain, V.R.

    1993-01-01

    The Nuclear Criticality Safety Department (NCSD), Oak Ridge Y-12 Plant, has undergone the transition from executing the Martin Marietta Energy Systems Nuclear Criticality Safety Software (NCSS) on IBM mainframes to executing it on a Hewlett-Packard (HP) 9000/730 workstation (NCSSHP). NCSSHP contains the following configuration-controlled modules and cross-section libraries: BONAMI, CSAS, GEOMCHY, ICE, KENO IV, KENO Va, MODIIFY, NITAWL SCALE, SLTBLIB, XSDRN, UNIXLIB, albedos library, weights library, 16-Group HANSEN-ROACH master library, 27-Group ENDF/B-IV master library, and standard composition library. This paper will discuss the method used to choose the workstation, the hardware setup of the chosen workstation, an overview of Y-12 software quality assurance and configuration control methodology, code validation, difficulties encountered in migrating the codes, and advantages to migrating to a workstation environment

  16. Poisson/Superfish codes for personal computers

    International Nuclear Information System (INIS)

    Humphries, S.

    1992-01-01

    The Poisson/Superfish codes calculate static E or B fields in two dimensions and electromagnetic fields in resonant structures. New versions for 386/486 PCs and Macintosh computers have capabilities that exceed the mainframe versions. Notable improvements are interactive graphical post-processors, improved field calculation routines, and a new program for charged particle orbit tracking. (author). 4 refs., 1 tab., figs

  17. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  18. High-speed computation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-01-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm, namely, the long computational time due to slow convergence and the large memory required for the storage of the image, projection data and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for the high-speed computation of the EM algorithm using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, CD 4360 mainframe, and on the EH system. The results show that the computational speed performance of an EH using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable to a larger number of PEs
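
    For context, the iteration being parallelized is the standard MLEM update (the textbook form, not an excerpt from the paper): with measured counts y_i, probability matrix elements c_{ij} (the probability that an emission in pixel j is detected in bin i), and image estimate \lambda_j^{(k)},

      \lambda_j^{(k+1)} = \frac{\lambda_j^{(k)}}{\sum_i c_{ij}} \sum_i c_{ij}\,\frac{y_i}{\sum_{j'} c_{ij'}\,\lambda_{j'}^{(k)}}.

    The forward projection in the denominator and the backprojection over i are the expensive sums that parallel implementations typically distribute across the processing elements.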

  19. PRO/Mapper: a plotting program for the DEC PRO/300 personal computers utilizing the MAPPER graphics language

    International Nuclear Information System (INIS)

    Wachter, J.W.

    1986-05-01

    PRO/Mapper is an application for the Digital Equipment Corporation PRO/300 series of personal computers that facilitates the preparation of visuals such as graphs, charts, and maps in color or black and white. The user prepares an input data file containing English-language commands using a standard editor. PRO/Mapper then reads these files and draws graphs, maps, boxes, and complex line segments onto the computer screen. Axes, curves, and error bars may be plotted in graphical presentations. The commands of PRO/Mapper are a subset of the commands of the more sophisticated MAPPER program written for mainframe computers. The PRO/Mapper commands were chosen primarily for the production of linear graphs. Command files written for the PRO/300 are upward compatible with the Martin Marietta Energy Systems version of MAPPER and can be used to produce publication-quality slides, drawings, and maps on the various output devices of the Oak Ridge National Laboratory mainframe computers

  20. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  1. Computer aided dispatch - a discussion of how communications technology is used to improve customer service

    International Nuclear Information System (INIS)

    Swailes, J.B.

    1995-01-01

    The use of Computer Aided Dispatch (CAD) for information exchange between the corporate information network and the field personnel of the CU Gas Group was described. CAD was described as two interconnected systems: a large mainframe software system which interfaces with the radio transmission system as well as the Customer Information System (CIS), and a mini computer system which consists of mobile data terminals (MDTs), the radio hardware and the software necessary to communicate with the MDTs. Further details were given on the functionality of both the mainframe and the mini computer systems. Some of the functions performed by the CAD include call taking, employee calendar, work assignment, filling work, dispatching, reporting, correcting errors, providing order status information, fleet monitoring, appointments, and many others. The implementation process, the challenges faced by the communications team and their interactions with the organization were also described. The most important benefit was claimed to be improved customer service. The significance of that to corporate strategy was stressed

  2. In-House Automation of a Small Library Using a Mainframe Computer.

    Science.gov (United States)

    Waranius, Frances B.; Tellier, Stephen H.

    1986-01-01

    An automated library routine management system was developed in-house to create a system unique to the Library and Information Center, Lunar and Planetary Institute, Houston, Texas. A modular approach was used to allow continuity in operations and services as the system was implemented. Acronyms and computer accounts and file names are appended.…

  3. Nuclear Plant Analyzer desktop workstation: An integrated interactive simulation, visualization and analysis tool

    International Nuclear Information System (INIS)

    Beelman, R.J.

    1991-01-01

    The advanced, best-estimate reactor thermal-hydraulic codes were originally developed as mainframe computer applications because of speed, precision, memory and mass storage requirements. However, the productivity of numerical reactor safety analysts has historically been hampered by mainframe dependence due to limited mainframe CPU allocation, accessibility and availability, poor mainframe job throughput, and delays in obtaining and difficulty comprehending printed numerical results. The Nuclear Plant Analyzer (NPA) was originally developed as a mainframe computer-graphics aid for reactor safety analysts in addressing the latter consideration. Rapid advances in microcomputer technology have since enabled the installation and execution of these reactor safety codes on desktop computers thereby eliminating mainframe dependence. The need for a complementary desktop graphics display generation and presentation capability, coupled with the need for software standardization and portability, has motivated the redesign of the NPA as a UNIX/X-Windows application suitable for both mainframe and microcomputer

  4. The SIMRAND 1 computer program: Simulation of research and development projects

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    The SIMRAND I Computer Program (Version 5.0 x 0.3) written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles is described. The SIMRAND I Computer Program comprises eleven modules: a main routine and ten subroutines. Two additional files are used at compile time; one inserts the system or task equations into the source code, while the other inserts the dimension statements and common blocks. The SIMRAND I Computer Program can be run on most microcomputers or mainframe computers with only minor modifications to the computer code.

  5. An Assessment of Security Vulnerabilities Comprehension of Cloud Computing Environments: A Quantitative Study Using the Unified Theory of Acceptance and Use

    Science.gov (United States)

    Venkatesh, Vijay P.

    2013-01-01

    The current computing landscape owes its roots to the birth of hardware and software technologies from the 1940s and 1950s. Since then, the advent of mainframes, miniaturized computing, and internetworking has given rise to the now prevalent cloud computing era. In the past few months just after 2010, cloud computing adoption has picked up pace…

  6. The First 25 Years of Computers in Education in Poland: 1965 – 1990

    OpenAIRE

    Sysło, Maciej

    2014-01-01

    The first regular informatics lessons in schools were organised in Poland in the second half of the 1960s. Some of the lessons in Wrocław were devoted to programming a mainframe computer located at the university, and school students in Warsaw had a chance to learn theoretical models of computers and foundations of computations. In the mid-1970s, the government of Poland recognised the importance of computers in the state economy and also in preparing the society for ne...

  7. Transient analysis capabilities at ABB-CE

    International Nuclear Information System (INIS)

    Kling, C.L.

    1992-01-01

    The transient capabilities at ABB-Combustion Engineering (ABB-CE) Nuclear Power are a function of the computer hardware and related network used, the computer software that has evolved over the years, and the commercial technical exchange agreements with other related organizations and customers. ABB-CE is changing from a mainframe/personal computer network to a distributed workstation/personal computer local area network. The paper discusses computer hardware, mainframe computing, personal computers, mainframe/personal computer networks, workstations, transient analysis computer software, design/operation transient analysis codes, safety (licensed) analysis codes, cooperation with ABB-Atom, and customer support

  8. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use and allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  9. ASTEC: Controls analysis for personal computers

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  10. Computing for particle physics. Report of the HEPAP subpanel on computer needs for the next decade

    International Nuclear Information System (INIS)

    1985-08-01

    The increasing importance of computation to the future progress in high energy physics is documented. Experimental computing demands are analyzed for the near future (four to ten years). The computer industry's plans for the near term and long term are surveyed as they relate to the solution of high energy physics computing problems. This survey includes large processors and the future role of alternatives to commercial mainframes. The needs for low speed and high speed networking are assessed, and the need for an integrated network for high energy physics is evaluated. Software requirements are analyzed. The role to be played by multiple processor systems is examined. The computing needs associated with elementary particle theory are briefly summarized. Computing needs associated with the Superconducting Super Collider are analyzed. Recommendations are offered for expanding computing capabilities in high energy physics and for networking between the laboratories

  11. Calculation of transmission and other functionals from evaluated data in ENDF format by means of personal computers

    International Nuclear Information System (INIS)

    Vertes, P.

    1991-04-01

    The FDMXPC program package was developed on the basis of the program system FEDMIX written for mainframe computers. The new program package for personal computers was developed for the interpretation of neutron transmission experiments and for producing group-averaged, infinitely diluted and self-shielded cross sections, starting from evaluated data in ENDF format. The package was written for different FORTRAN compilers residing in personal computers under MS-DOS. (R.P.) 12 refs.

  12. Distributed computing environment for Mine Warfare Command

    OpenAIRE

    Pritchard, Lane L.

    1993-01-01

    Approved for public release; distribution is unlimited. The Mine Warfare Command in Charleston, South Carolina has been converting its information systems architecture from a centralized mainframe based system to a decentralized network of personal computers over the past several years. This thesis analyzes the progress of the evolution as of May of 1992. The building blocks of a distributed architecture are discussed in relation to the choices the Mine Warfare Command has made to date. Ar...

  13. Assessment of radionuclide databases in CAP88 mainframe version 1.0 and Windows-based version 3.0.

    Science.gov (United States)

    LaBone, Elizabeth D; Farfán, Eduardo B; Lee, Patricia L; Jannik, G Timothy; Donnelly, Elizabeth H; Foley, Trevor Q

    2009-09-01

    In this study the radionuclide databases for two versions of the Clean Air Act Assessment Package-1988 (CAP88) computer model were assessed in detail. CAP88 estimates radiation dose and the risk of health effects to human populations from radionuclide emissions to air. This program is used by several U.S. Department of Energy (DOE) facilities to comply with National Emission Standards for Hazardous Air Pollutants regulations. CAP88 Mainframe, referred to as version 1.0 on the U.S. Environmental Protection Agency Web site (http://www.epa.gov/radiation/assessment/CAP88/), was the very first CAP88 version released in 1988. Some DOE facilities including the Savannah River Site still employ this version (1.0) while others use the more user-friendly personal computer Windows-based version 3.0 released in December 2007. Version 1.0 uses the program RADRISK, based on International Commission on Radiological Protection Publication 30, as its radionuclide database. Version 3.0 uses half-life, dose, and risk factor values based on Federal Guidance Report 13. Differences in these values could cause different results for the same input exposure data (same scenario), depending on which version of CAP88 is used. Consequently, the differences between the two versions are being assessed in detail at Savannah River National Laboratory. The version 1.0 and 3.0 database files contain 496 and 838 radionuclides, respectively, and though one would expect the newer version to include all the 496 radionuclides, 35 radionuclides are listed in version 1.0 that are not included in version 3.0. The majority of these have either extremely short or long half-lives or are no longer in production; however, some of the short-lived radionuclides might produce progeny of great interest at DOE sites. In addition, 122 radionuclides were found to have different half-lives in the two versions, with 21 over 3 percent different and 12 over 10 percent different.
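
    As an illustration of the comparison described above (a hypothetical sketch, not code from the study, with made-up half-life values), the half-life check amounts to computing a relative difference for each radionuclide present in both databases and counting how many exceed the 3 and 10 percent thresholds:

      #include <stdio.h>
      #include <math.h>

      /* Hypothetical records: half-lives (seconds) for the same nuclide in
       * CAP88 version 1.0 and version 3.0; the numbers are illustrative only. */
      struct nuclide { const char *name; double t_half_v1; double t_half_v3; };

      int main(void)
      {
          struct nuclide table[] = {
              { "Cs-137", 9.467e8, 9.521e8 },
              { "I-131",  6.933e5, 6.947e5 },
          };
          int n = (int)(sizeof table / sizeof table[0]);
          int over3 = 0, over10 = 0;

          for (int i = 0; i < n; ++i) {
              double diff = 100.0 * fabs(table[i].t_half_v1 - table[i].t_half_v3)
                                  / table[i].t_half_v1;
              if (diff > 3.0)  ++over3;
              if (diff > 10.0) ++over10;
              printf("%-8s %6.2f %%\n", table[i].name, diff);
          }
          printf("over 3%%: %d, over 10%%: %d\n", over3, over10);
          return 0;
      }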

  14. The RANDOM computer program: A linear congruential random number generator

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
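
    For reference, the linear congruential form the report discusses is x_{n+1} = (a x_n + c) mod m. A minimal C sketch (using textbook Numerical Recipes constants, not the parameters actually selected for the RANDOM program) is:

      #include <stdio.h>
      #include <stdint.h>

      /* Minimal linear congruential generator: x_{n+1} = (a*x_n + c) mod m,
       * with m = 2^32 supplied by unsigned 32-bit overflow.  The constants are
       * the Numerical Recipes choices, shown for illustration only. */
      static uint32_t state = 12345u;              /* seed */

      static uint32_t lcg_next(void)
      {
          const uint32_t a = 1664525u;             /* multiplier */
          const uint32_t c = 1013904223u;          /* increment  */
          state = a * state + c;                   /* reduction mod 2^32 */
          return state;
      }

      int main(void)
      {
          for (int i = 0; i < 5; ++i)
              printf("%u\n", (unsigned)lcg_next()); /* raw 32-bit values */
          return 0;
      }

    The choice of a, c, and m determines the generator's period and statistical quality on the target word size, which is the kind of parameter selection the RANCYCLE and ARITH programs are said to assist with.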

  15. RETRAN-02 installation and verification for the CRAY computer

    International Nuclear Information System (INIS)

    1990-03-01

    The RETRAN-02 transient thermal-hydraulic analysis program developed by the Electric Power Research Institute (EPRI) has been selected as a tool for use in assessing the operation and safety of the SP-100 space reactor system being developed at Los Alamos National Laboratory (LANL). The released versions of RETRAN-02 are not operational on CRAY computer systems, which are the primary mainframes in use at LANL, requiring that the code be converted to the CRAY system. This document describes the code conversion, installation, and validation of the RETRAN-02/MOD004 code on the LANL CRAY computer system

  16. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  17. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  18. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business workstation local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  19. Client-server computer architecture saves costs and eliminates bottlenecks

    International Nuclear Information System (INIS)

    Darukhanavala, P.P.; Davidson, M.C.; Tyler, T.N.; Blaskovich, F.T.; Smith, C.

    1992-01-01

    This paper reports that a workstation-based client-server architecture saved costs and eliminated bottlenecks that BP Exploration (Alaska) Inc. experienced with mainframe computer systems. In 1991, BP embarked on an ambitious project to change technical computing for its Prudhoe Bay, Endicott, and Kuparuk operations on Alaska's North Slope. This project promised substantial rewards, but also involved considerable risk. The project plan called for reservoir simulations (which historically had run on a Cray Research Inc. X-MP supercomputer in the company's Houston data center) to be run on small computer workstations. Additionally, large Prudhoe Bay, Endicott, and Kuparuk production and reservoir engineering data bases and related applications would be moved to workstations, replacing a Digital Equipment Corp. VAX cluster in Anchorage

  20. A personal-computer-based package for interactive assessment of magnetohydrodynamic equilibrium and poloidal field coil design in axisymmetric toroidal geometry

    International Nuclear Information System (INIS)

    Kelleher, W.P.; Steiner, D.

    1989-01-01

    A personal-computer (PC)-based calculational approach assesses magnetohydrodynamic (MHD) equilibrium and poloidal field (PF) coil arrangement in a highly interactive mode, well suited for tokamak scoping studies. The system developed involves a two-step process: the MHD equilibrium is calculated, and then a PF coil arrangement consistent with the equilibrium is determined in an interactive design environment. In this paper the approach is used to examine four distinctly different toroidal configurations: the STARFIRE reactor, a spherical torus (ST), the Big Dee, and an elongated tokamak. In these applications the PC-based results are benchmarked against those of a mainframe code for STARFIRE, ST, and Big Dee. The equilibrium and PF coil arrangement calculations obtained with the PC approach agree within a few percent with those obtained with the mainframe code

  1. Research on cloud computing solutions

    Directory of Open Access Journals (Sweden)

    Liudvikas Kaklauskas

    2015-07-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang adapted six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to network computing, to grid and cloud computing. There are four types of cloud computing deployment: public cloud, private cloud, hybrid cloud and community cloud. The most common and well-known deployment model is the public cloud. A private cloud is suited for sensitive data, where the customer is dependent on a certain degree of security. According to the different types of services offered, cloud computing can be considered to consist of three layers (service models): IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service). The main cloud computing solutions are web applications, data hosting, virtualization, database clusters and terminal services. The advantage of cloud computing is the ability to virtualize and share resources among different applications with the objective of better server utilization; without a clustering solution, a service may fail at the moment the server crashes. DOI: 10.15181/csat.v2i2.914

  2. Monte Carlo calculations on a parallel computer using MORSE-C.G

    International Nuclear Information System (INIS)

    Wood, J.

    1995-01-01

    The general purpose particle transport Monte Carlo code, MORSE-C.G., is implemented on a parallel computing transputer-based system having MIMD architecture. Example problems are solved which are representative of the three principal types of problem that can be solved by the original serial code, namely, fixed source, eigenvalue (k-eff) and time-dependent. The results from the parallelized version of the code are compared in tables with those from the serial code run on a mainframe serial computer, and with an independent, deterministic transport code. The performance of the parallel computer as the number of processors is varied is shown graphically. For the parallel strategy used, the loss of efficiency as the number of processors increases is investigated. (author)
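
    The efficiency loss referred to above reduces to two standard quantities: speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p. A minimal bookkeeping sketch in Python follows; the wall-clock times in it are invented placeholders, not results from the paper.

        # Illustrative speedup/efficiency bookkeeping for a parallel Monte Carlo run.
        # The wall-clock times below are made-up placeholders, not data from the paper.

        def speedup_and_efficiency(t_serial, timings):
            """Return {processors: (speedup, efficiency)} from measured wall-clock times."""
            results = {}
            for p, t_p in sorted(timings.items()):
                s = t_serial / t_p          # speedup S(p) = T(1) / T(p)
                results[p] = (s, s / p)     # efficiency E(p) = S(p) / p
            return results

        if __name__ == "__main__":
            t1 = 3600.0                                   # hypothetical serial run time [s]
            measured = {2: 1850.0, 4: 960.0, 8: 520.0, 16: 300.0}
            for p, (s, e) in speedup_and_efficiency(t1, measured).items():
                print(f"{p:3d} processors: speedup {s:5.2f}, efficiency {e:5.2f}")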

  3. USSR orders computers to improve nuclear safety

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Control Data Corp (CDC) has received an order valued at $32-million from the Soviet Union for six Cyber 962 mainframe computer systems to be used to increase the safety of civilian nuclear powerplants. The firm is now waiting for approval of the contract by the US government and Western Allies. The computers, ordered by the Soviet Research and Development Institute of Power Engineering (RDIPE), will analyze safety factors in the operation of nuclear reactors over a wide range of conditions. The Soviet Union's civilian nuclear program is one of the largest in the world, with over 50 plants in operation. Types of safety analyses the computers perform include: neutron-physics calculations, radiation-protection studies, stress analysis, reliability analysis of equipment and systems, ecological-impact calculations, transient analysis, and support activities for emergency response. They also include a simulator with realistic mathematical models of Soviet nuclear powerplants to improve operator training

  4. Workstation computer systems for in-core fuel management

    International Nuclear Information System (INIS)

    Ciccone, L.; Casadei, A.L.

    1992-01-01

    The advancement of powerful engineering workstations has made it possible to have thermal-hydraulics and accident analysis computer programs operating efficiently with a significant performance/cost ratio compared to large mainframe computers. Today, nuclear utilities are acquiring independent engineering analysis capability for fuel management and safety analyses. Computer systems currently available to utility organizations vary widely, thus requiring that this software be operational on a number of computer platforms. Recognizing these trends, Westinghouse adopted a software development life cycle process for the software development activities which strictly controls the development, testing and qualification of design computer codes. In addition, software standards to ensure maximum portability were developed and implemented, including adherence to FORTRAN 77, and use of uniform system interface and auxiliary routines. A comprehensive test matrix was developed for each computer program to ensure that evolution of code versions preserves the licensing basis. In addition, the results of such test matrices establish the Quality Assurance basis and consistency for the same software operating on different computer platforms. (author). 4 figs
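
    The cross-platform consistency argument above amounts to re-running a fixed test matrix on every supported platform and checking that the results agree with reference values within a tolerance. A minimal sketch of that idea follows; the case names, numbers and tolerance are hypothetical, not the actual Westinghouse test matrix.

        # Sketch of a cross-platform regression check: compare results produced on a new
        # platform against reference values within a relative tolerance. Case names and
        # numbers are hypothetical; this is not the actual Westinghouse test matrix.
        import math

        REFERENCE = {"case_keff_hzp": 1.00231, "case_keff_hfp": 1.01544, "case_peak_fdh": 1.492}

        def check_platform(results, rel_tol=1.0e-5):
            """Return a list of (case, reference, new, passed) tuples."""
            report = []
            for case, ref in REFERENCE.items():
                new = results[case]
                report.append((case, ref, new, math.isclose(new, ref, rel_tol=rel_tol)))
            return report

        if __name__ == "__main__":
            new_platform = {"case_keff_hzp": 1.00231, "case_keff_hfp": 1.01545, "case_peak_fdh": 1.492}
            for case, ref, new, ok in check_platform(new_platform):
                print(f"{case:18s} ref={ref:.5f} new={new:.5f} {'PASS' if ok else 'FAIL'}")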

  5. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  6. Large-scale computation in solid state physics - Recent developments and prospects

    International Nuclear Information System (INIS)

    DeVreese, J.T.

    1985-01-01

    During the past few years an increasing interest in large-scale computation is developing. Several initiatives were taken to evaluate and exploit the potential of ''supercomputers'' like the CRAY-1 (or XMP) or the CYBER-205. In the U.S.A., there first appeared the Lax report in 1982 and subsequently (1984) the National Science Foundation in the U.S.A. announced a program to promote large-scale computation at the universities. Also, in Europe several CRAY- and CYBER-205 systems have been installed. Although the presently available mainframes are the result of a continuous growth in speed and memory, they might have induced a discontinuous transition in the evolution of the scientific method; between theory and experiment a third methodology, ''computational science'', has become or is becoming operational

  7. Integrated computer-aided design using minicomputers

    Science.gov (United States)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), highly interactive software, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational database management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum transfer rate of 4800 bits/sec to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large-area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  8. Networking the Home and University: How Families Can Be Integrated into Proximate/Distant Computer Systems.

    Science.gov (United States)

    Watson, J. Allen; And Others

    1989-01-01

    Describes study that was conducted to determine the feasibility of networking home microcomputers with a university mainframe system in order to investigate a new family process research paradigm, as well as the design and function of the microcomputer/mainframe system. Test instrumentation is described and systems' reliability and validity are…

  9. Yankee links computing needs, increases productivity

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    Yankee Atomic Electric Company provides design and consultation services to electric utility companies that operate nuclear power plants. This means bringing together the skills and talents of more than 500 people in many disciplines, including computer-aided design, human resources, financial services, and nuclear engineering. The company was facing a problem familiar to many companies in the nuclear industry. Key corporate data and applications resided on UNIX or other types of computer systems, but most users at Yankee had personal computers on their desks. How could Yankee enable the PC users to share the data, applications, and resources of the larger computing environment such as UNIX, while ensuring they could still use their favorite PC applications? The solution was PC-NFS from SunSoft, of Chelmsford, Mass., which links PCs to UNIX and other systems. The Yankee computing story is an example of computer downsizing - the trend of moving away from mainframe computers in favor of lower-cost, more flexible client/server computing. Today, Yankee Atomic has more than 350 PCs on desktops throughout the company, using PC-NFS, which enables them to use the data, applications, disks, and printers of the UNIX server systems. This new client/server environment has reduced Yankee's computing costs while increasing its computing power and its ability to respond to customers.

  10. Integrated minicomputer alpha analysis system

    International Nuclear Information System (INIS)

    Vasilik, D.G.; Coy, D.E.; Seamons, M.; Henderson, R.W.; Romero, L.L.; Thomson, D.A.

    1978-01-01

    Approximately 1,000 stack and occupational air samples from plutonium and uranium facilities at LASL are analyzed daily. The concentrations of radionuclides in air are determined by measuring absolute alpha activities of particulates collected on air sample filter media. The Integrated Minicomputer Pulse system (IMPULSE) is an interface between many detectors of extremely simple design and a Digital Equipment Corporation (DEC) PDP-11/04 minicomputer. The detectors are photomultiplier tubes faced with zinc sulfide (ZnS). The average detector background is approximately 0.07 cpm. The IMPULSE system includes two mainframes, each of which can hold up to 64 detectors. The current hardware configuration includes 64 detectors in one mainframe and 40 detectors in the other. Each mainframe contains a minicomputer with 28K words of Random Access Memory. One minicomputer controls the detectors in both mainframes. A second computer was added for fail-safe redundancy and to support other laboratory computer requirements. The main minicomputer includes a dual floppy disk system and a dual DEC 'RK05' disk system for mass storage. The RK05 facilitates report generation and trend analysis. The IMPULSE hardware provides for passage of data from the detectors to the computer, and for passage of status and control information from the computer to the detector stations.
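
    The conversion from a filter count to an airborne activity concentration is, in outline, net count rate divided by detector efficiency and sampled air volume. The sketch below illustrates that arithmetic only; it is not the IMPULSE code, and the efficiency, flow rate and sampling time are assumed values (only the 0.07 cpm background comes from the abstract).

        # Generic airborne-alpha-concentration estimate from a filter count.
        # Not the IMPULSE algorithm; detector efficiency and flow rate are assumed values.

        def air_concentration_bq_per_m3(gross_counts, count_time_min, background_cpm,
                                        detector_efficiency, flow_lpm, sample_time_min):
            """Estimate activity concentration in Bq/m^3 from a filter measurement."""
            net_cpm = gross_counts / count_time_min - background_cpm   # net count rate
            activity_bq = net_cpm / 60.0 / detector_efficiency         # cpm -> decays/s
            air_volume_m3 = flow_lpm * sample_time_min / 1000.0        # litres -> m^3
            return activity_bq / air_volume_m3

        if __name__ == "__main__":
            c = air_concentration_bq_per_m3(gross_counts=42, count_time_min=10.0,
                                            background_cpm=0.07,        # from the abstract
                                            detector_efficiency=0.25,   # assumed
                                            flow_lpm=50.0,              # assumed
                                            sample_time_min=480.0)      # assumed
            print(f"estimated concentration: {c:.3e} Bq/m^3")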

  11. THE DYNAMICS OF A DISTRIBUTION SYSTEM SIMULATED ON A SPREADSHEET

    Directory of Open Access Journals (Sweden)

    R. Reinecke

    2012-01-01

    ENGLISH ABSTRACT: The dynamics of a typical production-distribution system, namely from manufacturer to distributors to retailers, has been simulated with the aid of Lotus 123 on a personal computer. The original simulation program DYNAMO was run on an IBM 1620 mainframe computer, but we successfully converted it to run on a personal computer using Lotus 123.
    This paper deals with problems encountered in using present MS-DOS-limited PC machines to run application programmes written for earlier mainframe machines. It is also shown that results closely comparable with those obtained on mainframe machines can be generated on a simple PC.

    AFRIKAANSE OPSOMMING (translated): This paper describes the experience of master's students with the conversion of the simulation program DYNAMO, used to investigate the dynamics of industrial systems, from a mainframe computer to a personal computer.
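
    A production-distribution model of this kind reduces to stocks (inventories) updated by delayed flows (orders and shipments). The sketch below is a minimal single-echelon version of that idea; all parameters and the ordering rule are invented for illustration and are not those of the original DYNAMO model or the spreadsheet implementation.

        # Minimal single-echelon production-distribution sketch in the spirit of a
        # DYNAMO/spreadsheet simulation. All parameters are invented for illustration.

        def simulate(weeks=30, target_inventory=100.0, adjust_time=4.0, ship_delay=2):
            inventory = 100.0
            pipeline = [10.0] * ship_delay                       # orders already in transit
            history = []
            for week in range(weeks):
                demand = 10.0 if week < 5 else 15.0              # step increase in demand
                inventory += pipeline.pop(0) - demand            # receive goods, ship demand
                # ordering rule: cover demand plus a fraction of the inventory shortfall
                orders = max(0.0, demand + (target_inventory - inventory) / adjust_time)
                pipeline.append(orders)
                history.append((week, inventory, orders))
            return history

        if __name__ == "__main__":
            for week, inv, orders in simulate():
                print(f"week {week:2d}: inventory {inv:7.1f}, orders {orders:6.1f}")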

  12. PC as physics computer for LHC?

    CERN Document Server

    Jarp, S; Simmins, A; Yaari, R; Jarp, Sverre; Tang, Hong; Simmins, Antony; Yaari, Refael

    1995-01-01

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March this year in the Physics Data Processing group of CERN's CN division is described where ordinary desktop PCs running Windows (NT and 3.11) have been used for creating an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described together with some encouraging benchmark results when comparing to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (Batch monitor, staging software, etc.) are also covered. Finally a quick extrapolation of commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation fa...

  13. An Evaluation of the Availability and Application of Microcomputer Software Programs for Use in Air Force Ground Transportation Squadrons

    Science.gov (United States)

    1988-09-01

    software programs capable of being used on a microcomputer will be considered for analysis. No software intended for use on a miniframe or mainframe... Dial-A-Log consists of a program written in a computer language called L-10 that is run on a DEC-20 miniframe. The combination of the specific... proliferation of software dealing with microcomputers. Instead, they were geared more towards managing the use of miniframe or mainframe computer

  14. Multi keno-VAX a modified version of the reactor computer code Multi keno-2

    Energy Technology Data Exchange (ETDEWEB)

    Imam, M [National center for nuclear safety and radiation control, atomic energy authority, Cairo, (Egypt)

    1995-10-01

    The reactor computer code Multi keno-2 was developed in Japan from the original Monte Carlo code Keno-IV. By applying this code to some real problems, fatal errors were detected. These errors are related to the restart option in the code. The restart option is essential for solving time-consuming problems on a minicomputer like the VAX-6320. These errors were corrected and other modifications were carried out in the code. Because of these modifications, a new input data description was written for the code. Thus a new VAX/VMS version of the program was developed, which is also adaptable for mini-mainframes. This new program, called Multi keno-VAX, has been accepted in the NEA-IAEA data bank and added to its international computer codes library. 1 fig.

  15. Multi keno-VAX a modified version of the reactor computer code Multi keno-2

    International Nuclear Information System (INIS)

    Imam, M.

    1995-01-01

    The reactor computer code Multi keno-2 was developed in Japan from the original Monte Carlo code Keno-IV. By applying this code to some real problems, fatal errors were detected. These errors are related to the restart option in the code. The restart option is essential for solving time-consuming problems on a minicomputer like the VAX-6320. These errors were corrected and other modifications were carried out in the code. Because of these modifications, a new input data description was written for the code. Thus a new VAX/VMS version of the program was developed, which is also adaptable for mini-mainframes. This new program, called Multi keno-VAX, has been accepted in the NEA-IAEA data bank and added to its international computer codes library. 1 fig

  16. Printing in heterogeneous computer environment at DESY

    International Nuclear Information System (INIS)

    Jakubowski, Z.

    1996-01-01

    The number of registered hosts at DESY reaches 3500 while the number of print queues approaches 150. The spectrum of computing environments in use is very wide: from MACs and PCs, through SUN, DEC and SGI machines, to the IBM mainframe. In 1994 we used 18 tons of paper. We present a solution for providing print services in such an environment for more than 3500 registered users. The availability of the print service is a serious issue. Using centralized printing has a lot of advantages for software administration but creates a single point of failure. We solved this problem partially without using expensive software and hardware. The talk provides information about the DESY central print spooler concept. None of the systems available on the market provides a ready-to-use, reliable solution for all platforms used at DESY. We discuss concepts for the installation, administration and monitoring of a large number of printers. We found a solution for printing on central computing facilities as well as for the support of stand-alone workstations. (author)

  17. ARDS User Manual

    Science.gov (United States)

    Fleming, David P.

    2001-01-01

    Personal computers (PCs) are now used extensively for engineering analysis. Their capability exceeds that of mainframe computers of only a few years ago. Programs originally written for mainframes have been ported to PCs to make their use easier. One of these programs is ARDS (Analysis of Rotor Dynamic Systems), which was developed at Arizona State University (ASU) by Nelson et al. to quickly and accurately analyze rotor steady-state and transient response using the method of component mode synthesis. The original ARDS program was ported to the PC in 1995. Several extensions were made at ASU to increase the capability of mainframe ARDS. These extensions have also been incorporated into the PC version of ARDS. Each mainframe extension had its own user manual, generally covering only that extension. Thus, exploiting the full capability of ARDS required a large set of user manuals. Moreover, necessary changes and enhancements for PC ARDS were undocumented. The present document is intended to remedy those problems by combining all pertinent information needed for the use of PC ARDS into one volume.

  18. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.

  19. Nuclear power plant simulation using advanced simulation codes through a state-of-the-art workstation

    International Nuclear Information System (INIS)

    Laats, E.T.; Hagen, R.N.

    1985-01-01

    The Nuclear Plant Analyzer (NPA) currently resides in a Control Data Corporation 176 mainframe computer at the Idaho National Engineering Laboratory (INEL). The NPA user community is expanding to include worldwide users who cannot consistently access the INEL mainframe computer from their own facilities. Thus, an alternate mechanism is needed to enable their use of the NPA. Therefore, a feasibility study was undertaken by EG and G Idaho to evaluate the possibility of developing a standalone workstation dedicated to the NPA

  20. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
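
    Question (i) above, adapting an existing serial algorithm for parallel use, is easiest to see for a reduction such as a summation: each processor sums a chunk of the data and the partial sums are then combined. The sketch below illustrates this with Python's multiprocessing module; it is purely illustrative and not code from the lectures.

        # Illustrative parallel reduction: a serial summation split across worker processes.
        # Purely an illustration of adapting a serial algorithm; not code from the lectures.
        from multiprocessing import Pool

        def partial_sum(chunk):
            return sum(chunk)

        def parallel_sum(data, workers=4):
            size = (len(data) + workers - 1) // workers
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with Pool(workers) as pool:
                return sum(pool.map(partial_sum, chunks))   # combine partial results

        if __name__ == "__main__":
            values = list(range(1_000_000))
            assert parallel_sum(values) == sum(values)
            print("parallel and serial sums agree")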

  1. Development of Calcomp compatible interface library 'piflib' on X Window System

    International Nuclear Information System (INIS)

    Tanabe, Hidenobu; Yokokawa, Mitsuo; Onuma, Yoshio.

    1993-05-01

    Graphics processing at JAERI has mainly been executed on mainframe computers with Calcomp compatible graphics libraries. With the spread of engineering workstations (EWS), it is important that this large body of graphics software can also be run on EWS. The Calcomp compatible interface library 'piflib' has been developed on the X Window System, which is the most popular window environment on EWS. In this report, the specifications of the library 'piflib' and its usage are presented. The cooperative processing with mainframe computers is also described. (author)

  2. PC as physics computer for LHC?

    International Nuclear Information System (INIS)

    Jarp, Sverre; Simmins, Antony; Tang, Hong

    1996-01-01

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March this year in the Physics Data Processing group of CERN's CN division, is described where ordinary desktop PCs running Windows (NT and 3.11) have been used for creating an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described together with some encouraging benchmark results when comparing to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (Batch monitor, staging software, etc.) are also covered. Finally a quick extrapolation of commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments. (author)

  3. Pc as Physics Computer for Lhc ?

    Science.gov (United States)

    Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March this year in the Physics Data Processing group of CERN's CN division, is described where ordinary desktop PCs running Windows (NT and 3.11) have been used for creating an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described together with some encouraging benchmark results when comparing to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (Batch monitor, staging software, etc.) are also covered. Finally a quick extrapolation of commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.

  4. Downsizing information systems : framing the issues for the Office of Naval Intelligence (ONI)

    OpenAIRE

    Hutson, Peter M.

    1994-01-01

    Downsizing information systems from large and centralized mainframe computing architectures to smaller and distributed desktop systems is one of the most difficult and critical strategic decisions facing both corporate and government organizations. Vendor advertisements and media hype often boast of huge cost savings and greater flexibility while retaining mainframe-strength performance. Cryptic terminology, biased vendor assistance, and rapidly changing technology complicate already difficul...

  5. Wireless Sensor Networks in Motion - Clustering Algorithms for Service Discovery and Provisioning

    NARCIS (Netherlands)

    Marin Perianu, Raluca

    2008-01-01

    The evolution of computer technology follows a trajectory of miniaturization and diversification. The technology has developed from mainframes (large computers used by many people) to personal computers (one computer per person) and recently, embedded computers (many computers per person). One of

  6. Two-phase flow steam generator simulations on parallel computers using domain decomposition method

    International Nuclear Information System (INIS)

    Belliard, M.

    2003-01-01

    Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicholson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)
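
    The iteration-by-sub-domain idea can be illustrated on a toy problem: a 1-D Poisson equation split into two sub-domains, where one sub-domain is solved with a Dirichlet value at the interface, the other with the resulting Neumann flux, and a relaxed update of the interface value closes the loop. The sketch below shows only that alternating Dirichlet/Neumann update; it is not the steam-generator solver, and the relaxation parameter is an assumed value.

        # Toy Dirichlet/Neumann iteration-by-sub-domain for -u'' = 1 on (0,1), u(0)=u(1)=0,
        # split at x = 0.5. Illustrates the alternating interface update only; it is not
        # the steam-generator solver described above, and theta is an assumed value.
        import numpy as np

        m = 20                       # cells per sub-domain
        h = 0.5 / m
        f = 1.0                      # constant source term
        lam = 0.0                    # Dirichlet value at the interface, updated each sweep
        theta = 0.5                  # interface relaxation parameter (assumed)

        for it in range(50):
            # Sub-domain 1 on [0, 0.5]: Dirichlet u(0) = 0 and u(0.5) = lam.
            A1 = (np.diag(np.full(m - 1, 2.0)) + np.diag(np.full(m - 2, -1.0), 1)
                  + np.diag(np.full(m - 2, -1.0), -1)) / h**2
            b1 = np.full(m - 1, f)
            b1[-1] += lam / h**2
            u1 = np.linalg.solve(A1, b1)

            # Interface flux u'(0.5) seen from sub-domain 1 (with half-cell correction).
            g = (lam - u1[-1]) / h - f * h / 2.0

            # Sub-domain 2 on [0.5, 1]: Neumann u'(0.5) = g and Dirichlet u(1) = 0.
            A2 = np.zeros((m, m))
            b2 = np.full(m, f)
            A2[0, 0], A2[0, 1] = 2.0 / h**2, -2.0 / h**2     # ghost-node Neumann row
            b2[0] = f - 2.0 * g / h
            for i in range(1, m):
                A2[i, i - 1], A2[i, i] = -1.0 / h**2, 2.0 / h**2
                if i + 1 < m:                                # last node couples to u(1) = 0
                    A2[i, i + 1] = -1.0 / h**2
            u2 = np.linalg.solve(A2, b2)

            new_lam = theta * u2[0] + (1.0 - theta) * lam    # relaxed interface update
            if abs(new_lam - lam) < 1.0e-10:
                lam = new_lam
                break
            lam = new_lam

        print(f"converged after {it + 1} sweeps; u(0.5) = {lam:.5f} (exact value 0.125)")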

  7. PATFIT-88

    International Nuclear Information System (INIS)

    Kirkegaard, P.; Pedersen, N.J.; Eldrup, M.

    1989-02-01

    A data processing system has been developed for analyzing positron annihilation lifetime and angular correlation spectra on mainframe computers and personal computers (PCs). The system is based on the PATFIT programs previously developed for use on mainframe computers. It consists of the three fitting programs POSITRONFIT, RESOLUTION and ACARFIT and three associated programs for easy editing of the input data to the fitting programs, as well as a graphics program for the display of measured and fitted spectra. They can be used directly on any IBM-compatible PC. The PATFIT-88 software is available from Risoe National Laboratory. (author) 5 ills., 46 refs
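
    At its core, lifetime analysis is a least-squares fit of a sum of exponential decay components to the measured spectrum. The sketch below shows only that core idea on synthetic data; it ignores the resolution function, background and source corrections that POSITRONFIT and RESOLUTION actually handle, and all lifetimes and intensities in it are invented.

        # Highly simplified lifetime fit: sum of two exponential decay components fitted by
        # least squares. Ignores the resolution function, background and source corrections
        # that the real PATFIT programs model; intensities and lifetimes are invented.
        import numpy as np
        from scipy.optimize import curve_fit

        def two_component(t, i1, tau1, i2, tau2):
            return i1 * np.exp(-t / tau1) + i2 * np.exp(-t / tau2)

        # Synthetic "spectrum": counts vs time in ns, generated from assumed parameters.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 200)
        counts = rng.poisson(two_component(t, 8000.0, 0.16, 2000.0, 0.40)).astype(float)

        p0 = [5000.0, 0.1, 1000.0, 0.5]                      # initial guesses
        popt, _ = curve_fit(two_component, t, counts, p0=p0, bounds=(0.0, np.inf))
        print("fitted intensities and lifetimes (ns):", np.round(popt, 3))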

  8. Distributed computing for FTU data handling

    Energy Technology Data Exchange (ETDEWEB)

    Bertocchi, A. E-mail: bertocchi@frascati.enea.it; Bracco, G.; Buceti, G.; Centioli, C.; Giovannozzi, E.; Iannone, F.; Panella, M.; Vitale, V

    2002-06-01

    The growth of the data warehouse in tokamak experiments is leading fusion laboratories to provide new IT solutions in data handling. In the last three years, the Frascati Tokamak Upgrade (FTU) experimental database was migrated from an IBM mainframe to a Unix distributed computing environment. The migration efforts have taken into account the following items: (1) a new data storage solution based on a storage area network over fibre channel; (2) the Andrew File System (AFS) for wide area network file sharing; (3) a 'one measure/one file' philosophy replacing 'one shot/one file' to provide faster read/write data access; (4) more powerful services, such as AFS, CORBA and MDSplus, to allow users to access the FTU database from different clients, regardless of their OS; (5) wide availability of data analysis tools, from the locally developed utility SHOW to the multi-platform Matlab, Interactive Data Language and jScope (all these tools are now able to access also the Joint European Torus data, in the framework of the remote data access activity); (6) a batch-computing cluster of Alpha/Compaq Tru64 CPUs based on CODINE/GRD to optimize utilization of software and hardware resources.

  9. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  10. Image processing in offshore engineering

    International Nuclear Information System (INIS)

    Rodriguez, M.V.R.; A. Oliveira, M. de; Almeida, M.E.T. de; Lorenzoni, C.; Ferrante, A.J.

    1995-01-01

    The technological progress which has taken place during the last decade introduced a totally new outlook regarding the professional computational environment in general, and regarding the engineering profession in particular. For many years engineering computing was performed in large computer centers, getting bigger and bigger all the time, going from mainframes to supercomputers, essentially producing numerical results on paper media. Lately, however, it has been realized that a much more productive computational environment can be implemented using an open architecture of the client/server type, based on smaller, lower cost equipment including workstations and PCs, and considering engineering information in a broader sense. This paper briefly reports the experience of the Production Department of Petrobras in transforming its centralized, mainframe-based computational environment into an open distributed client/server computational environment, focusing on the problem of handling technical graphics information regarding its more than 70 fixed offshore platforms.

  11. A PC/workstation cluster computing environment for reservoir engineering simulation applications

    International Nuclear Information System (INIS)

    Hermes, C.E.; Koo, J.

    1995-01-01

    Like the rest of the petroleum industry, Texaco has been transferring its applications and databases from mainframes to PC's and workstations. This transition has been very positive because it provides an environment for integrating applications, increases end-user productivity, and in general reduces overall computing costs. On the down side, the transition typically results in a dramatic increase in workstation purchases and raises concerns regarding the cost and effective management of computing resources in this new environment. The workstation transition also places the user in a Unix computing environment which, to say the least, can be quite frustrating to learn and to use. This paper describes the approach, philosophy, architecture, and current status of the new reservoir engineering/simulation computing environment developed at Texaco's E and P Technology Dept. (EPTD) in Houston. The environment is representative of those under development at several other large oil companies and is based on a cluster of IBM and Silicon Graphics Intl. (SGI) workstations connected by a fiber-optics communications network and engineering PC's connected to local area networks, or Ethernets. Because computing resources and software licenses are shared among a group of users, the new environment enables the company to get more out of its investments in workstation hardware and software

  12. Computation of single- and two-phase heat transfer rates suitable for water-cooled tubes and subchannels

    International Nuclear Information System (INIS)

    Groeneveld, D.C.; Leung, L.K.H.; Cheng, S.C.; Nguyen, C.

    1989-01-01

    A computational method for predicting heat transfer, valid for a wide range of flow conditions (from pool boiling and laminar flow conditions to highly turbulent flow), has been developed. It correctly identifies the heat transfer modes and predicts the heat transfer rates as well as transition points (such as the critical heat flux point) on the boiling curve. The computational heat transfer method consists of a combination of carefully chosen heat transfer equations for each heat transfer mode. Each of these equations has been selected because of its accuracy, wide range of application, and correct asymptotic trends. Using a mechanistically based heat transfer logic, these equations have been combined in a convenient software package suitable for PC or mainframe application. The computational method has been thoroughly tested against many sets of experimental data. The parametric and asymptotic trends of the prediction method have been examined in detail. Correction factors are proposed for extending the use of individual predictive techniques to various geometric configurations and upstream conditions. (orig.)
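
    The "carefully chosen equation per heat transfer mode" idea can be sketched as a simple mode dispatch. The snippet below covers only single-phase tube flow, using standard textbook choices (Nu = 3.66 for fully developed laminar flow with constant wall temperature, and the Dittus-Boelter correlation for turbulent flow) as stand-ins; it is not the authors' correlation set, and the property values in the example are assumed.

        # Sketch of a mode-dispatch heat-transfer calculation for single-phase flow in a tube.
        # The correlation choices (Nu = 3.66 laminar, Dittus-Boelter turbulent) are standard
        # textbook stand-ins, not the correlation package described in the abstract.

        def single_phase_htc(re, pr, k_fluid, d_hyd):
            """Return the heat transfer coefficient h [W/m^2K] for single-phase tube flow."""
            if re < 2300.0:
                nu = 3.66                          # fully developed laminar, constant wall T
            else:
                nu = 0.023 * re**0.8 * pr**0.4     # Dittus-Boelter, fluid being heated
            return nu * k_fluid / d_hyd

        if __name__ == "__main__":
            # Illustrative water-like properties at an arbitrary condition (assumed values).
            h = single_phase_htc(re=50_000.0, pr=1.2, k_fluid=0.6, d_hyd=0.01)
            print(f"single-phase h = {h:.0f} W/m^2K")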

  13. A data acquisition and storage system for the ion auxiliary propulsion system cyclic thruster test

    Science.gov (United States)

    Hamley, John A.

    1989-01-01

    A nine-track tape drive interfaced to a standard personal computer was used to transport data from a remote test site to the NASA Lewis mainframe computer for analysis. The Cyclic Ground Test of the Ion Auxiliary Propulsion System (IAPS), which successfully achieved its goal of 2557 cycles and 7057 hr of thrusting beam-on time, generated several megabytes of test data over many months of continuous testing. A flight-like controller and power supply were used to control the thruster and acquire data. Thruster data was converted to RS232 format and transmitted to a personal computer, which stored the raw digital data on the nine-track tape. The tape format was such that with minor modifications, mainframe flight data analysis software could be used to analyze the Cyclic Ground Test data. The personal computer also converted the digital data to engineering units and displayed real time thruster parameters. Hardcopy data was printed at a rate dependent on thruster operating conditions. The tape drive provided a convenient means to transport the data to the mainframe for analysis, and avoided a development effort for new data analysis software for the Cyclic test. This paper describes the data system, interfacing and software requirements.

  14. Transferring data oscilloscope to an IBM using an Apple II+

    Science.gov (United States)

    Miller, D. L.; Frenklach, M. Y.; Laughlin, P. J.; Clary, D. W.

    1984-01-01

    A set of PASCAL programs permitting the use of a laboratory microcomputer to facilitate and control the transfer of data from a digital oscilloscope (used with photomultipliers in experiments on soot formation in hydrocarbon combustion) to a mainframe computer and the subsequent mainframe processing of these data is presented. Advantages of this approach include the possibility of on-line computations, transmission flexibility, automatic transfer and selection, increased capacity and analysis options (such as smoothing, averaging, Fourier transformation, and high-quality plotting), and more rapid availability of results. The hardware and software are briefly characterized, the programs are discussed, and printouts of the listings are provided.

  15. Cost-effective use of minicomputers to solve structural problems

    Science.gov (United States)

    Storaasli, O. O.; Foster, E. P.

    1978-01-01

    Minicomputers are receiving increased use throughout the aerospace industry. Until recently, their use focused primarily on process control and numerically controlled tooling applications, while their exposure to and the opportunity for structural calculations has been limited. With the increased availability of this computer hardware, the question arises as to the feasibility and practicality of carrying out comprehensive structural analysis on a minicomputer. This paper presents results on the potential for using minicomputers for structural analysis by (1) selecting a comprehensive, finite-element structural analysis system in use on large mainframe computers; (2) implementing the system on a minicomputer; and (3) comparing the performance of the minicomputers with that of a large mainframe computer for the solution to a wide range of finite element structural analysis problems.

  16. Report of the Subpanel on Theoretical Computing of the High Energy Physics Advisory Panel

    International Nuclear Information System (INIS)

    1984-09-01

    The Subpanel on Theoretical Computing of the High Energy Physics Advisory Panel (HEPAP) was formed in July 1984 to make recommendations concerning the need for state-of-the-art computing for theoretical studies. The specific Charge to the Subpanel is attached as Appendix A, and the full membership is listed in Appendix B. For the purposes of this study, theoretical computing was interpreted as encompassing both investigations in the theory of elementary particles and computation-intensive aspects of accelerator theory and design. Many problems in both areas are suited to realize the advantages of vectorized processing. The body of the Subpanel Report is organized as follows. The Introduction, Section I, explains some of the goals of computational physics as it applies to elementary particle theory and accelerator design. Section II reviews the availability of mainframe supercomputers to researchers in the United States, in Western Europe, and in Japan. Other promising approaches to large-scale computing are summarized in Section III. Section IV details the current computing needs for problems in high energy theory, and for beam dynamics studies. The Subpanel Recommendations appear in Section V. The Appendices attached to this Report give the Charge to the Subpanel, the Subpanel membership, and some background information on the financial implications of establishing a supercomputer center

  17. Computer land management : New programs and systems wind along revolutionary roads

    International Nuclear Information System (INIS)

    Marsters, S.

    1998-01-01

    New advances in computer software programs and systems that are used to prepare maps that display detailed up-to-date lease and drilling activities in Western Canada were discussed. Petroleum Information/Dwights Canada Ltd. has changed its land database from a mainframe-based system into an Oracle database. The conversion allows the company to offer a more comprehensive storage medium, a more flexible delivery system, and more complete data. PI/Dwights supplies land data to software and mapping vendors such as geoLOGIC Systems Ltd. and AccuMap Enerdata Ltd. The company has also developed a CD-ROM-based electronic atlas which combines land data with pipelines and facilities, unit boundaries and well locations. The open system has the ability to integrate or import data sets. 2 figs

  18. Using a Cray Y-MP as an array processor for a RISC Workstation

    Science.gov (United States)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment which demonstrates that matrix multiplication can be executed remotely on a large system to speed the execution over that experienced on a workstation is described.
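
    Whether shipping a matrix multiplication over RPC pays off is a balance between the roughly 2N^3 floating-point operations gained and the roughly 3N^2 words that must cross the network, plus the fixed call latency. The back-of-the-envelope sketch below illustrates that trade-off; the sustained speeds, bandwidth and latency in it are invented placeholders, not measurements from the paper.

        # Back-of-the-envelope model of remote vs. local matrix multiplication over RPC.
        # Machine speeds, bandwidth and latency below are invented placeholders.

        def local_time(n, flops_local):
            return 2.0 * n**3 / flops_local                       # 2N^3 multiply-adds

        def remote_time(n, flops_remote, bandwidth_bytes, latency_s, word_bytes=8):
            transfer = 3.0 * n**2 * word_bytes / bandwidth_bytes  # ship A and B, receive C
            return latency_s + transfer + 2.0 * n**3 / flops_remote

        if __name__ == "__main__":
            ws_flops, remote_flops = 5.0e6, 200.0e6               # assumed sustained rates
            bw, lat = 1.0e6, 0.05                                 # assumed 1 MB/s, 50 ms RPC
            for n in (50, 100, 500, 1000):
                t_l, t_r = local_time(n, ws_flops), remote_time(n, remote_flops, bw, lat)
                print(f"N={n:5d}: local {t_l:8.2f} s, remote {t_r:8.2f} s "
                      f"-> {'remote wins' if t_r < t_l else 'local wins'}")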

  19. 3081/E processor and its on-line use

    International Nuclear Information System (INIS)

    Rankin, P.; Bricaud, B.; Gravina, M.

    1985-05-01

    The 3081/E is a second generation emulator of an IBM mainframe. One of its applications will be to form part of the data acquisition system of the upgraded Mark II detector for data taking at the SLAC linear collider. Since the processor does not have direct connections to I/O devices, a FASTBUS interface will be provided to allow communication with both SLAC Scanner Processors (which are responsible for the accumulation of data at a crate level) and the experiment's VAX 8600 mainframe. The 3081/E's will supply a significant amount of on-line computing power to the experiment (a single 3081/E is equivalent to 4 to 5 VAX 11/780's). A major advantage of the 3081/E is that program development can be done on an IBM mainframe (such as the one used for off-line analysis), which gives the programmer access to a full range of debugging tools. The processor's performance can be continually monitored by comparison of the results obtained using it to those given when the same program is run on an IBM computer. 9 refs.

  20. Enterprise Computing

    OpenAIRE

    Spruth, Wilhelm G.

    2013-01-01

    This book grew out of a two-semester lecture course, "Enterprise Computing", which we taught jointly for many years as part of the Bachelor's and Master's programmes at the University of Leipzig. The book introduces the world of the mainframe and is intended to give the reader an introductory overview. Volume 1 is devoted to an introduction to z/OS, while Volume 2 deals with Internet integration. In addition, Volume 3 presents practical exercises under z/OS....

  1. Some tools of the trade we've developed for our cross-section calculations

    International Nuclear Information System (INIS)

    Gardner, D.G.; Gardner, M.A.

    1992-11-01

    A number of computer codes have been modified or developed, both mainframe and PC. Seven codes are described, of which three are discussed in some detail. The latter are: a controller-driven, double-precision version of the coupled-channel code ECIS; the latest version of STAPRE, a precompound plus Hauser-Feshbach nuclear reaction code; and NUSTART, a PC code that analyzes large sets of discrete nuclear levels and the multipole transitions among them. All mainframe codes are now being converted to the UNICOS operating system.

  2. The ACP [Advanced Computer Program] multiprocessor system at Fermilab

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100. Both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53 processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high-speed "Branch Bus" to one or more MicroVAX computers which act as hosts handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  3. Bigraphs

    DEFF Research Database (Denmark)

    Elsborg, Ebbe

    We study how bigraphical reactive systems may be used for modelling and simulating — in a manner controlled by sorts and types — global ubiquitous computing. Ubiquitous computing was in the early 1990s envisioned by Mark Weiser to be the third wave of computing (after mainframes, and then personal...

  4. New data storage and retrieval systems for JET data

    Energy Technology Data Exchange (ETDEWEB)

    Layne, Richard E-mail: richard.layne@ukaea.org.uk; Wheatley, Martin E-mail: martin.wheatley@ukaea.org.uk

    2002-06-01

    Since the start of the Joint European Torus (JET), an IBM mainframe has been the main platform for data analysis and storage (J. Comput. Phys. 73 (1987) 85). The mainframe was removed in June 2001 and Solaris and Linux are now the main data storage and analysis platforms. New data storage and retrieval systems have therefore been developed: the Data Warehouse, the JET pulse file server, and the processed pulse file system. In this paper, the new systems will be described, and the design decisions that led to the final systems will be outlined.

  5. New data storage and retrieval systems for JET data

    International Nuclear Information System (INIS)

    Layne, Richard; Wheatley, Martin

    2002-01-01

    Since the start of the Joint European Torus (JET), an IBM mainframe has been the main platform for data analysis and storage (J. Comput. Phys. 73 (1987) 85). The mainframe was removed in June 2001 and Solaris and Linux are now the main data storage and analysis platforms. New data storage and retrieval systems have therefore been developed: the Data Warehouse, the JET pulse file server, and the processed pulse file system. In this paper, the new systems will be described, and the design decisions that led to the final systems will be outlined

  6. The transition of GTDS to the Unix workstation environment

    Science.gov (United States)

    Carter, D.; Metzinger, R.; Proulx, R.; Cefola, P.

    1995-01-01

    Future Flight Dynamics systems should take advantage of the possibilities provided by current and future generations of low-cost, high performance workstation computing environments with Graphical User Interface. The port of the existing mainframe Flight Dynamics systems to the workstation environment offers an economic approach for combining the tremendous engineering heritage that has been encapsulated in these systems with the advantages of the new computing environments. This paper will describe the successful transition of the Draper Laboratory R&D version of GTDS (Goddard Trajectory Determination System) from the IBM Mainframe to the Unix workstation environment. The approach will be a mix of historical timeline notes, descriptions of the technical problems overcome, and descriptions of associated SQA (software quality assurance) issues.

  7. Organization of the M-6000 computer calculating process in the CAMAC on-line measurement systems for a physical experiment

    International Nuclear Information System (INIS)

    Bespalova, T.V.; Volkov, A.S.; Golutvin, I.A.; Maslov, V.V.; Nevskaya, N.A.; Okonishnikov, A.A.; Terekhov, V.E.; Shilkin, I.P.

    1977-01-01

    Discussed are the basic results of the work on designing the software of the computer measuring complex (CMC) which uses the M-6000 computer and operates on line with an accelerator. All the CMC units comply with the CAMAC standard. The CMC incorporates a mainframe memory, twenty-four kilobytes of 16-bit words in size, and an external memory on magnetic disks, 1 megabyte in size. Suggested is a modification of the technique for designing the CMC software, providing program complexes which can be dynamically adjusted by an experimentalist for a given experiment in a short time. The CMC software comprises the following major portions: a software generator, a data acquisition program, on-line data processing routines, off-line data processing programs and programs for recording data on magnetic tapes and disks. Testing of the designed CMC has revealed that the total data processing time ranges from 150 to 500 ms.

  8. Use of personal computers in performing a linear modal analysis of a large finite-element model

    International Nuclear Information System (INIS)

    Wagenblast, G.R.

    1991-01-01

    This paper presents the use of personal computers in performing a dynamic frequency analysis of a large (2,801 degrees of freedom) finite-element model. Large model linear time history dynamic evaluations of safety related structures were previously restricted to mainframe computers using direct integration analysis methods. This restriction was a result of the limited memory and speed of personal computers. With the advances in memory capacity and speed of the personal computers, large finite-element problems now can be solved in the office in a timely and cost effective manner. Presented in three sections, this paper describes the procedure used to perform the dynamic frequency analysis of the large (2,801 degrees of freedom) finite-element model on a personal computer. Section 2.0 describes the structure and the finite-element model that was developed to represent the structure for use in the dynamic evaluation. Section 3.0 addresses the hardware and software used to perform the evaluation and the optimization of the hardware and software operating configuration to minimize the time required to perform the analysis. Section 4.0 explains the analysis techniques used to reduce the problem to a size compatible with the hardware and software memory capacity and configuration

  9. Real-time distributed simulation using the Modular Modeling System interfaced to a Bailey NETWORK 90 system

    International Nuclear Information System (INIS)

    Edwards, R.M.; Turso, J.A.; Garcia, H.E.; Ghie, M.H.; Dharap, S.; Lee, S.

    1991-01-01

    The Modular Modeling System was adapted for real-time simulation testing of diagnostic expert systems in 1987. The early approach utilized an available general purpose mainframe computer which operated the simulation and diagnostic program in the multitasking environment of the mainframe. That research program was subsequently expanded to intelligent distributed control applications incorporating microprocessor based controllers with the aid of an equipment grant from the National Science Foundation (NSF). The Bailey NETWORK 90 microprocessor-based control system, acquired with the NSF grant, has been operational since April of 1990 and has been interfaced to both VAX mainframe and PC simulations of power plant processes in order to test and demonstrate advanced control and diagnostic concepts. This paper discusses the variety of techniques that have been used and which are under development to interface simulations and other distributed control functions to the Penn State Bailey system

  10. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other runs with AT&T's 32100. Both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53 processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high-speed "Branch Bus" to one or more MicroVAX computers which act as hosts handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  11. Implementing and testing program PLOTTAB

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1988-01-01

    Enclosed is a description of the magnetic tape or floppy diskette containing the PLOTTAB code package. In addition detailed information is provided on implementation and testing of this code. See part I for mainframe computers; part II for personal computers. These codes are documented in IAEA-NDS-82. (author)

  12. Computerized management of radiology department: Installation and use of local area network(LAN) by personal computers

    International Nuclear Information System (INIS)

    Lee, Young Joon; Han, Kook Sang; Geon, Do Ig; Sol, Chang Hyo; Kim, Byung Soo

    1993-01-01

    There is an increasing need for networks connecting personal computers (PCs) together. Thus the local area network (LAN) emerged, which was designed to allow multiple computers to access and share multiple files, programs, and expensive peripheral devices and to communicate with each user. We built a PC-LAN in our department that consisted of 1) hardware - 9 personal computers (IBM-compatible 80386 DX, 1 set; 80286 AT, 8 sets), cables, and network interface cards (Ethernet-compatible, 16 bits) connecting the PCs and peripheral devices, and 2) software - a network operating system and a database management system. We managed this network for 6 months. The benefits of the PC-LAN were 1) multiuser operation (sharing multiple files, programs, and peripheral devices), 2) real data processing, 3) excellent expandability, flexibility, compatibility, and easy connectivity, 4) a single cable for networking and rapid data transmission, 5) simple and easy installation and management, 6) use of conventional PC software running under DOS (Disk Operating System) without transformation, and 7) low networking cost. In conclusion, a PC-LAN provides an easier and more effective way to manage the multiuser database system needed by hospital departments than a more expensive and complex minicomputer or mainframe network.

  13. Comparison of capability between two versions of reactor transient diagnosis expert system 'DISKET' programmed in different languages

    International Nuclear Information System (INIS)

    Yokobayashi, Masao; Yoshida, Kazuo

    1991-01-01

    An expert system, DISKET, has been developed at JAERI to apply knowledge engineering techniques to the transient diagnosis of nuclear power plants. The first version of DISKET, programmed in UTILISP, was developed on the mainframe computer FACOM M-780 at JAERI. The LISP language is not well suited for on-line diagnostic systems because it is highly dependent on the computer used and requires a large amount of memory. The large mainframe computer is also not suitable because, as a multi-user system, it imposes various restrictions. The second version of DISKET, intended for practical use, has been developed in FORTRAN to realize on-line real-time diagnoses with limited computer resources. These two versions of DISKET, with the same knowledge base, have been compared in running capability, and it has been found that the LISP version needs more than twice the memory and CPU time of the FORTRAN version. From this result, it is shown that this approach is a practical one for developing expert systems for on-line real-time diagnosis of transients with limited computer resources. (author)
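
    The abstract compares a LISP and a FORTRAN implementation of the same knowledge base. As a purely illustrative sketch (not the DISKET knowledge base, whose rules are not given here), diagnostic rules can be represented as data and evaluated by a small forward-chaining matcher; the symptom and diagnosis names below are hypothetical.

```python
# Hypothetical rules: (set of required symptoms) -> conclusion.
RULES = [
    ({"low_pressurizer_pressure", "high_containment_radiation"}, "LOCA suspected"),
    ({"low_steam_generator_level", "high_feedwater_demand"}, "feedwater line break suspected"),
]

def diagnose(observed_symptoms):
    """Minimal forward-chaining match: fire every rule whose conditions
    are all present in the observed symptom set."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= observed_symptoms]

if __name__ == "__main__":
    print(diagnose({"low_pressurizer_pressure", "high_containment_radiation"}))
```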

  14. Incremental ALARA cost/benefit computer analysis

    International Nuclear Information System (INIS)

    Hamby, P.

    1987-01-01

    Commonwealth Edison Company has developed and is testing an enhanced Fortran computer program to be used for cost/benefit analysis of radiation reduction projects at its six nuclear power facilities and corporate technical support groups. This paper describes a macro-driven IBM mainframe program comprising two different types of analyses: an abbreviated program with fixed costs and base values, and an extended engineering version for a detailed, more thorough, and time-consuming approach. The extended engineering version breaks radiation exposure costs down into two components: health-related costs and replacement labor costs. According to user input, the program automatically adjusts these two cost components and applies the derivation to company economic analyses such as replacement power costs, carrying charges, debt interest, and capital investment cost. The results from one or more program runs using different parameters may be compared in order to determine the most appropriate ALARA dose reduction technique. Benefits of this particular cost/benefit analysis technique include the flexibility to accommodate a wide range of user data and pre-job preparation, as well as the use of proven and standardized company economic equations
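
    The two-component exposure cost described above (health-related costs plus replacement labor costs) lends itself to a simple worked sketch of the comparison between dose-reduction cost and avoided exposure cost. The figures and parameter names below are hypothetical and are not taken from the Commonwealth Edison program; the sketch only shows the shape of the calculation.

```python
def exposure_cost(person_rem, health_cost_per_rem, replacement_labor_per_rem):
    """Total cost of a collective dose, split into the two components
    named in the abstract (cost rates are illustrative only)."""
    return person_rem * (health_cost_per_rem + replacement_labor_per_rem)

def net_benefit(dose_without, dose_with, project_cost, **cost_rates):
    """Avoided exposure cost minus the cost of the ALARA project."""
    avoided = exposure_cost(dose_without, **cost_rates) - exposure_cost(dose_with, **cost_rates)
    return avoided - project_cost

if __name__ == "__main__":
    rates = dict(health_cost_per_rem=10_000.0, replacement_labor_per_rem=5_000.0)  # hypothetical $/person-rem
    print(net_benefit(dose_without=40.0, dose_with=25.0, project_cost=100_000.0, **rates))
```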

  15. A nation evolves | IDRC - International Development Research Centre

    International Development Research Centre (IDRC) Digital Library (Canada)

    2011-01-28

    Jan 28, 2011 ... ... information and communication technologies (ICTs) have changed the lives of ... children playing, and men and women trading information on the ... a few computers hooked up to the Internet and connected to a mainframe.

  16. Don't Gamble with Y2K Compliance.

    Science.gov (United States)

    Sturgeon, Julie

    1999-01-01

    Examines one school district's (Clark County, Nevada) response to the Y2K computer problem and provides tips on time-saving Y2K preventive measures other school districts can use. Explains how the district de-bugged its computer system including mainframe considerations and client-server applications. Highlights office equipment and teaching…

  17. How to Get from Cupertino to Boca Raton.

    Science.gov (United States)

    Troxel, Duane K.; Chiavacci, Jim

    1985-01-01

    Describes seven methods to transfer data from Apple computer disks to IBM computer disks and vice versa: print out data and retype; use a commercial software package, optical-character reader, homemade cable, or modem to pass or transfer data directly; pay commercial data-transfer service; or store files on mainframe and download. (MBR)

  18. Automated accounting systems for nuclear materials

    International Nuclear Information System (INIS)

    Erkkila, B.

    1994-01-01

    The history of the development of nuclear materials accounting systems in the USA and their purposes are considered. Many present accounting systems are based on mainframe computers with multiple-terminal access. Problems of future improvement of accounting systems are discussed

  19. A PC [personal computer]-based version of KENO V.a

    International Nuclear Information System (INIS)

    Nigg, D.A.; Atkinson, C.A.; Briggs, J.B.; Taylor, J.T.

    1990-01-01

    The use of personal computers (PCs) and engineering workstations for complex scientific computations has expanded rapidly in the last few years. This trend is expected to continue in the future with the introduction of increasingly sophisticated microprocessors and microcomputer systems. For a number of reasons, including security, economy, user convenience, and productivity, an integrated system of neutronics and radiation transport software suitable for operation in an IBM PC-class environment has been under development at the Idaho National Engineering Laboratory (INEL) for the past 3 yr. Nuclear cross-section data and resonance parameters are preprocessed from the Evaluated Nuclear Data Files Version 5 (ENDF/B-V) and supplied in a form suitable for use in a PC-based spectrum calculation and multigroup cross-section generation module. This module produces application-specific data libraries that can then be used in various neutron transport and diffusion theory code modules. This paper discusses several details of the Monte Carlo criticality module, which is based on the well-known highly-sophisticated KENO V.a package developed at Oak Ridge National Laboratory and previously released in mainframe form by the Radiation Shielding Information Center (RSIC). The conversion process and a variety of benchmarking results are described

  20. The Value Proposition in Institutional Repositories

    Science.gov (United States)

    Blythe, Erv; Chachra, Vinod

    2005-01-01

    In the education and research arena of the late 1970s and early 1980s, a struggle developed between those who advocated centralized, mainframe-based computing and those who advocated distributed computing. Ultimately, the debate reduced to whether economies of scale or economies of scope are more important to the effectiveness and efficiency of…

  1. Amdahl 470 Chip Package

    CERN Multimedia

    1975-01-01

    In the late 70s the larger IBM computers were water cooled. Amdahl, an IBM competitor, invented an air cooling technology for its computers. The company worked hard, developing a computer that was faster and less expensive than the IBM System/360 mainframe computer systems. This object contains an actual Amdahl series 470 computer logic chip with an air cooling device mounted on top. The package leads and cooling tower are gold-plated.

  2. Computer network that assists in the planning, execution and evaluation of in-reactor experiments

    International Nuclear Information System (INIS)

    Bauer, T.H.; Froehle, P.H.; August, C.; Baldwin, R.D.; Johanson, E.W.; Kraimer, M.R.; Simms, R.; Klickman, A.E.

    1985-01-01

    For over 20 years complex, in-reactor experiments have been performed at Argonne National Laboratory (ANL) to investigate the performance of nuclear reactor fuel and to support the development of large computer codes that address questions of reactor safety in full-scale plants. Not only are computer codes an important end-product of the research, but computer analysis is also involved intimately at most stages of experiment planning, data reduction, and evaluation. For instance, many experiments are of sufficiently long duration or, if they are of brief duration, occur in such a purposeful sequence that need for speedy availability of on-line data is paramount. This is made possible most efficiently by computer assisted displays and evaluation. A purposeful linking of main-frame, mini, and micro computers has been effected over the past eight years which greatly enhances the speed with which experimental data are reduced to useful forms and applied to the relevant technological issues. This greater efficiency in data management led also to improvements in the planning and execution of subsequent experiments. Raw data from experiments performed at INEL is stored directly on disk and tape with the aid of minicomputers. Either during or shortly after an experiment, data may be transferred, via a direct link, to the Illinois offices of ANL where the data base is stored on a minicomputer system. This Idaho-to-Illinois link has both enhanced experiment performance and allowed rapid dissemination of results

  3. Air traffic control : good progress on interim replacement for outage-plagued system, but risks can be further reduced

    Science.gov (United States)

    1996-10-01

    Certain air traffic control (ATC) centers experienced a series of major outages, some of which were caused by the Display Channel Complex (DCC), a mainframe computer system that processes radar and other data into displayable images on controlle...

  4. Computer analyses for the design, operation and safety of new isotope production reactors: A technology status review

    International Nuclear Information System (INIS)

    Wulff, W.

    1990-01-01

    A review is presented on the currently available technologies for nuclear reactor analyses by computer. The important distinction is made between traditional computer calculation and advanced computer simulation. Simulation needs are defined to support the design, operation, maintenance and safety of isotope production reactors. Existing methods of computer analyses are categorized in accordance with the type of computer involved in their execution: micro, mini, mainframe and supercomputers. Both general and special-purpose computers are discussed. Major computer codes are described, with regard for their use in analyzing isotope production reactors. It has been determined in this review that conventional systems codes (TRAC, RELAP5, RETRAN, etc.) cannot meet four essential conditions for viable reactor simulation: simulation fidelity, on-line interactive operation with convenient graphics, high simulation speed, and at low cost. These conditions can be met by special-purpose computers (such as the AD100 of ADI), which are specifically designed for high-speed simulation of complex systems. The greatest shortcoming of existing systems codes (TRAC, RELAP5) is their mismatch between very high computational efforts and low simulation fidelity. The drift flux formulation (HIPA) is the viable alternative to the complicated two-fluid model. No existing computer code has the capability of accommodating all important processes in the core geometry of isotope production reactors. Experiments are needed (heat transfer measurements) to provide necessary correlations. It is important for the nuclear community, both in government, industry and universities, to begin to take advantage of modern simulation technologies and equipment. 41 refs

  5. An Introduction To PC-TRIM.

    Science.gov (United States)

    John R. Mills

    1989-01-01

    The timber resource inventory model (TRIM) has been adapted to run on personal computers. The personal computer version of TRIM (PC-TRIM) is more widely used than its mainframe parent. Errors that existed in previous versions of TRIM have been corrected. Information is presented to help users with program input and output management in the DOS environment, to...

  6. The Computerized Reference Department: Buying the Future.

    Science.gov (United States)

    Kriz, Harry M.; Kok, Victoria T.

    1985-01-01

    Basis for systematic computerization of academic research library's reference, collection development, and collection management functions emphasizes productivity enhancement for librarians and support staff. Use of microcomputer and university's mainframe computer to develop applications of database management systems, electronic spreadsheets,…

  7. Colleges' Effort To Prepare for Y2K May Yield Benefits for Many Years.

    Science.gov (United States)

    Olsen, Florence

    2000-01-01

    Suggests that the money spent ($100 billion) to fix the Y2K bug in the United States resulted in improved campus computer systems. Reports from campuses around the country indicate that both mainframe and desktop systems experienced fewer problems than expected. (DB)

  8. IBM 3705 Communications Controller

    CERN Multimedia

    1972-01-01

    The IBM 3705 Communications Controller is a simple computer which attaches to an IBM System/360 or System/370. Its purpose is to connect communication lines to the mainframe channel. It was the first communications controller of the popular IBM 37xx series.

  9. UNIX at high energy physics Laboratories

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, Alan

    1994-03-15

    With more and more high energy physics Laboratories ''downsizing'' from large central proprietary mainframe computers towards distributed networks, usually involving UNIX operating systems, the need was expressed at the 1991 Computers in HEP (CHEP) Conference to create a group to consider the implications of this trend and perhaps work towards some common solutions to ease the transition for HEP users worldwide.

  10. Real-time data system: Incorporating new technology in mission critical environments

    Science.gov (United States)

    Muratore, John F.; Heindel, Troy A.

    1990-01-01

    If the Space Station Freedom is to remain viable over its 30-year life span, it must be able to incorporate new information systems technologies. These technologies are necessary to enhance mission effectiveness and to enable new NASA missions, such as supporting the Lunar-Mars Initiative. Hi-definition television (HDTV), neural nets, model-based reasoning, advanced languages, CPU designs, and computer networking standards are areas which have been forecasted to make major strides in the next 30 years. A major challenge to NASA is to bring these technologies online without compromising mission safety. In past programs, NASA managers have been understandably reluctant to rely on new technologies for mission critical activities until they are proven in noncritical areas. NASA must develop strategies to allow inflight confidence building and migration of technologies into the trusted tool base. NASA has successfully met this challenge and developed a winning strategy in the Space Shuttle Mission Control Center. This facility, which is clearly among NASA's most critical, is based on 1970's mainframe architecture. Changes to the mainframe are very expensive due to the extensive testing required to prove that changes do not have unanticipated impact on critical processes. Systematic improvement efforts in this facility have been delayed due to this 'risk to change.' In the real-time data system (RTDS) we have introduced a network of engineering computer workstations which run in parallel to the mainframe system. These workstations are located next to flight controller operating positions in mission control and, in some cases, the display units are mounted in the traditional mainframe consoles. This system incorporates several major improvements over the mainframe consoles including automated fault detection by real-time expert systems and color graphic animated schematics of subsystems driven by real-time telemetry. The workstations have the capability of recording

  11. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2015-01-01

    Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system having a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, as well as the analysis of the operating principles of the new computer system, permit conclusions to be drawn regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support of the operations of discrete mathematics, which is reflected in the use of special facilities for processing sets and data structures. Within the framework of the project a special device was designed, a structure processor (SP), which improved performance without limiting the scope of applications of such a computer system. Previous works presented the basic principles of the organization of the computational process in a MISD (Multiple Instructions, Single Data) system and showed the structure and features of the structure processor and the general principles for solving discrete optimization problems on graphs. This paper examines two minimum spanning tree algorithms, namely Kruskal's and Prim's. It studies implementations of the algorithms for two SP operation modes: coprocessor mode and MISD mode. The paper presents results of an experimental comparison of MISD system performance in coprocessor mode with mainframes.
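
    For reference, the two algorithms studied are standard minimum spanning tree methods. The sketch below is an ordinary software implementation of Kruskal's algorithm with a union-find structure, written in Python; it says nothing about the MISD hardware mapping discussed in the paper and is included only to make the algorithm itself concrete.

```python
def kruskal(n_vertices, edges):
    """Kruskal's minimum spanning tree.
    edges: iterable of (weight, u, v) with vertices numbered 0..n_vertices-1."""
    parent = list(range(n_vertices))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # consider edges in order of increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge only if it joins two components
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

if __name__ == "__main__":
    example = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 2, 3), (3, 1, 3)]
    print(kruskal(4, example))
```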

  12. GSTARS computer models and their applications, part I: theoretical development

    Science.gov (United States)

    Yang, C.T.; Simoes, F.J.M.

    2008-01-01

    GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  13. TOOLS FOR PRESENTING SPATIAL AND TEMPORAL PATTERNS OF ENVIRONMENTAL MONITORING DATA

    Science.gov (United States)

    The EPA Health Effects Research Laboratory has developed this data presentation tool for use with a variety of types of data which may contain spatial and temporal patterns of interest. The technology links mainframe computing power to the new generation of "desktop publishing" ha...

  14. Out-of-core nuclear fuel cycle optimization utilizing an engineering workstation

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Comes, S.A.

    1986-01-01

    Within the past several years, rapid advances in computer technology have resulted in substantial increases in their performance. The net effect is that problems that could previously only be executed on mainframe computers can now be executed on micro- and minicomputers. The authors are interested in developing an engineering workstation for nuclear fuel management applications. An engineering workstation is defined as a microcomputer with enhanced graphics and communication capabilities. Current fuel management applications range from using workstations as front-end/back-end processors for mainframe computers to completing fuel management scoping calculations. More recently, interest in using workstations for final in-core design calculations has appeared. The authors have used the VAX 11/750 minicomputer, which is not truly an engineering workstation but has comparable performance, to complete both in-core and out-of-core fuel management scoping studies. In this paper, the authors concentrate on our out-of-core research. While much previous work in this area has dealt with decisions concerned with equilibrium cycles, the current project addresses the more realistic situation of nonequilibrium cycles

  15. UNIX at high energy physics Laboratories

    International Nuclear Information System (INIS)

    Silverman, Alan

    1994-01-01

    With more and more high energy physics Laboratories ''downsizing'' from large central proprietary mainframe computers towards distributed networks, usually involving UNIX operating systems, the need was expressed at the 1991 Computers in HEP (CHEP) Conference to create a group to consider the implications of this trend and perhaps work towards some common solutions to ease the transition for HEP users worldwide

  16. A Study of Organizational Downsizing and Information Management Strategies.

    Science.gov (United States)

    1992-09-01

    Projected savings of $100,000 per month at plants from using miniframes; networked 3 mainframes and applications; standardized on "best practices." The projected $1.2 million savings realized from going from mainframes to miniframes is to avoid having to reduce the budget by that amount in other areas. Turned in a mainframe and replaced it with two miniframes; networked the new minis with systems at plants; networked mainframes and PCs; acquired network hardware.

  17. Examples of data processing systems. Data processing system for JT-60

    International Nuclear Information System (INIS)

    Aoyagi, Tetsuo

    1996-01-01

    The JT-60 data processing system is a large computer complex comprising many microcomputers, several minicomputers, and a mainframe computer. As a general introduction to the original system configuration has been published previously, some improvements are described here: a transient mass data storage system, a network database server, a data acquisition system using engineering workstations, and a graphic terminal emulator for X-Window. These new features are realized by utilizing recent progress in computer and network technology and the carefully designed user interface specification of the original system. (author)

  18. Fiber Optics and Library Technology.

    Science.gov (United States)

    Koenig, Michael

    1984-01-01

    This article examines fiber optic technology, explains some of the key terminology, and speculates about the way fiber optics will change our world. Applications of fiber optics to library systems in three major areas--linkage of a number of mainframe computers, local area networks, and main trunk communications--are highlighted. (EJS)

  19. FRAPCON-3: A computer code for the calculation of steady-state, thermal-mechanical behavior of oxide fuel rods for high burnup

    International Nuclear Information System (INIS)

    Berna, G.A.; Beyer, G.A.; Davis, K.L.; Lanning, D.D.

    1997-12-01

    FRAPCON-3 is a FORTRAN IV computer code that calculates the steady-state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, and deformation of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (1) heat conduction through the fuel and cladding, (2) cladding elastic and plastic deformation, (3) fuel-cladding mechanical interaction, (4) fission gas release, (5) fuel rod internal gas pressure, (6) heat transfer between fuel and cladding, (7) cladding oxidation, and (8) heat transfer from cladding to coolant. The code contains necessary material properties, water properties, and heat-transfer correlations. The code's integral predictions of mechanical behavior have not been assessed against a data base, e.g., cladding strain or failure data. Therefore, it is recommended that the code not be used for analyses of cladding stress or strain. FRAPCON-3 is programmed for use on both mainframe computers and UNIX-based workstations such as DEC 5000 or SUN Sparcstation 10. It is also programmed for personal computers with FORTRAN compiler software and at least 8 to 10 megabytes of random access memory (RAM). The FRAPCON-3 code is designed to generate initial conditions for transient fuel rod analysis by the FRAPTRAN computer code (formerly named FRAP-T6)
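
    FRAPCON-3's own models are not reproduced here, but the kind of steady-state radial temperature calculation the abstract lists (heat conduction through fuel, gap, and cladding to coolant) can be illustrated with the textbook one-dimensional thermal resistances for a cylindrical fuel rod. The property values below are rough, hypothetical inputs, not FRAPCON data.

```python
import math

def fuel_rod_delta_t(q_lin, k_fuel, r_fuel, h_gap, r_ci, r_co, k_clad, h_cool):
    """Textbook steady-state radial temperature rises for a cylindrical rod
    carrying linear heat rate q_lin (W/m); returns the centerline-to-coolant
    temperature difference. Illustrative only, not the FRAPCON-3 models."""
    dt_fuel = q_lin / (4.0 * math.pi * k_fuel)          # uniform heat generation in the pellet
    dt_gap  = q_lin / (2.0 * math.pi * r_fuel * h_gap)  # pellet-cladding gap conductance
    dt_clad = q_lin * math.log(r_co / r_ci) / (2.0 * math.pi * k_clad)
    dt_film = q_lin / (2.0 * math.pi * r_co * h_cool)   # cladding-to-coolant film
    return dt_fuel + dt_gap + dt_clad + dt_film

if __name__ == "__main__":
    # Rough, hypothetical PWR-like numbers (SI units).
    dt = fuel_rod_delta_t(q_lin=20e3, k_fuel=3.0, r_fuel=4.1e-3, h_gap=5e3,
                          r_ci=4.2e-3, r_co=4.75e-3, k_clad=17.0, h_cool=30e3)
    print(f"centerline - coolant temperature difference: {dt:.0f} K")
```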

  20. Editorial

    Indian Academy of Sciences (India)

    2017-08-28

    Aug 28, 2017 ... 2016 Nobel Prize, is rather different – the seeds were sown in ..... campus, sometimes whizzing by on roller-blades! ..... run on mainframe computers that occupied a large room, and took .... Gold Clusters by Substrate Doping, J. Am. Chem. ... the Shape of Metal Ad-Particles by Doping the Oxide Support, ...

  1. Organization of the two-level memory in the image processing system on scanning measuring projectors

    International Nuclear Information System (INIS)

    Sychev, A.Yu.

    1977-01-01

    Discussed are the problems of improving the efficiency of the system for processing pictures taken in bubble chambers with the use of scanning measuring projectors. The system comprises 20 to 30 projectors linked with the ICL-1903A computer provided with a mainframe memory, 64 kilobytes in size. Because of the insufficient size of a mainframe memory, a part of the programs and data is located in a second-level memory, i.e. in an external memory. The analytical model described herein is used to analyze the effect of the memory organization on the characteristics of the system. It is shown that organization of pure procedures and introduction of the centralized control of the two-level memory result in substantial improvement of the efficiency of the picture processing system

  2. Conversion of a mainframe simulation for maintenance performance to a PC environment

    International Nuclear Information System (INIS)

    Gertman, D.I.

    1990-01-01

    The computer model MAPPS, the Maintenance Personnel Performance Simulation, has been developed and validated by the US NRC [Nuclear Regulatory Commission] in order to improve maintenance practices and procedures at nuclear power plants. This model has now been implemented and improved in a PC [personal computer] environment and renamed MICROMAPPS. The model is stochastically based, and users are able to simulate the performance of 2- to 8-person crews for a variety of maintenance tasks under a variety of conditions. These conditions include aspects of crew actions as potentially influenced by the task, the environment, or the personnel involved. For example, the influence of the following factors is currently modeled within the MAPPS computer code: (1) personnel characteristics include but are not limited to intellectual and perceptual motor ability levels, the effect of fatigue and, conversely, of rest breaks on performance, stress, communication, supervisor acceptance, motivation, organizational climate, time since the task was last performed, and the staffing level available; (2) task variables include but are not limited to time allowed, occurrence of shift change, intellectual requirements, perceptual motor requirements, procedures quality, necessity for protective clothing, and essentiality of a subtask; and (3) environment variables include temperature of the workplace, radiation level, and noise levels. The output describing maintainer performance includes subtask and task identification, success proportion, work and wait durations, time spent repeating various subtasks, and outcome in terms of errors detected by the crew, false alarms, undetected errors, duration, and the probability of success. The model is comprehensive and allows for the modeling of decision making, trouble-shooting, and branching of tasks
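
    MAPPS itself is not reproduced here, but the stochastic character of the model (sampling subtask outcomes whose success probabilities are shifted by factors such as stress or fatigue) can be shown with a toy Monte Carlo sketch; every probability and weight below is invented for illustration only.

```python
import random

def subtask_success_prob(base_p, stress=0.0, fatigue=0.0):
    """Degrade a base success probability by hypothetical stress and
    fatigue penalties, clamped to [0.05, 0.99]."""
    return min(0.99, max(0.05, base_p - 0.2 * stress - 0.15 * fatigue))

def simulate_task(subtasks, trials=10_000, **factors):
    """Estimate the probability that a crew completes every subtask,
    given per-subtask base success probabilities."""
    successes = 0
    for _ in range(trials):
        if all(random.random() < subtask_success_prob(p, **factors) for p in subtasks):
            successes += 1
    return successes / trials

if __name__ == "__main__":
    task = [0.98, 0.95, 0.90]            # hypothetical base probabilities per subtask
    print(simulate_task(task, stress=0.3, fatigue=0.2))
```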

  3. Development of a PC code package for the analysis of research and power reactors

    International Nuclear Information System (INIS)

    Urli, N.

    1992-06-01

    Computer codes available for performing reactor physics calculations for nuclear research reactors and power reactors are normally suited for running on mainframe computers. With the fast development in the speed and memory of PCs, and their affordable prices, it became feasible to develop PC versions of commonly used codes. The present work, performed under an IAEA-sponsored research contract, has successfully developed a code package for running on a PC. This package includes the cross-section generating code PSU-LEOPARD and the 2D and 1D spatial diffusion codes MCRAC and MCYC 1D. To adapt PSU-LEOPARD for the PC, the binary library has been reorganized to decimal form, upgraded to the FORTRAN-77 standard, and the arrays and subroutines reorganized to conform to the PC compiler. Similarly, PC versions of MCRAC for FORTRAN-77 and of the 1D code MCYC 1D have been developed. Tests, verification and benchmark results show excellent agreement with the results obtained from mainframe calculations. The execution speeds are also very satisfactory. 12 refs, 4 figs, 3 tabs

  4. Enterprise logic vs product logic: the development of GE’s computer product line

    OpenAIRE

    Gandy, Anthony; Edwards, Roy

    2017-01-01

    The following article focuses on corporate strategies at General Electric (GE) and how corporate-level interventions impacted the market performance of the firm's general-purpose commercial mainframe product set in the period 1960–1968. We show that in periods of both independent divisional planning and corporate-level strategic governance, central decisions interfered in the execution of GE's product strategy. GE's institutional ‘enterprise logic’ negatively impacted the ‘product lo...

  5. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  6. RIMS [Records Inventory Management System] Handbook

    International Nuclear Information System (INIS)

    1989-03-01

    The Records Inventory Management System (RIMS) is a computer library of abstracted documents relating to low-level radioactive waste. The documents are of interest to state governments, regional compacts, and the Department of Energy, especially as they relate to the Low-Level Radioactive Waste Policy Act requiring states or compacts of states to establish and operate waste disposal facilities. RIMS documents are primarily regulatory, policy, or technical documents, published by the various states and compacts of the United States; however, RIMS contains key international publications as well. The system has two sections: a document retrieval section and a document update section. The RIMS mainframe can be accessed through a PC or modem. Also, each state and compact may request a PC version of RIMS, which allows a user to enter documents off line and then upload the documents to the mainframe data base

  7. Conversion and Retrievability of Hard Copy and Digital Documents on Optical Disks

    Science.gov (United States)

    1992-03-01

    Contents fragments: B. Current Thesis Preparation Tools; 1. Thesis Preparation using G-Thesis; 2. Thesis Preparation using Framemaker. Text fragments refer to the School mainframe, note that Computer Science department students can use a software package called Framemaker, available on Sun workstations in their department, and state that the discussion of thesis preparation tools will be limited to G-Thesis, Framemaker and...

  8. Environmental Gradient Analysis, Ordination, and Classification in Environmental Impact Assessments.

    Science.gov (United States)

    1987-09-01

    Fragments of the abstract describe hierarchical agglomerative clustering algorithms for mainframe computers, including (1) the unweighted pair-group method using arithmetic averages (UPGMA), also called average linkage clustering, contrasted with dendrograms produced by weighted clustering (93); Sneath and Sokal (94), Romesburg (84), and Seber (90) also strongly recommend the UPGMA.
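
    The clustering method named in the fragments above, UPGMA (average linkage), is available in standard libraries. The following minimal sketch shows it on made-up sample data using SciPy; the library choice and the data are assumptions of this illustration, not the software used in the report.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Made-up environmental samples (rows) described by two variables (columns).
samples = np.array([[1.0, 2.0],
                    [1.2, 1.9],
                    [8.0, 8.5],
                    [7.8, 8.2],
                    [4.5, 5.0]])

distances = pdist(samples, metric="euclidean")      # pairwise distance vector
tree = linkage(distances, method="average")         # UPGMA = average linkage
labels = fcluster(tree, t=2, criterion="maxclust")  # cut the dendrogram into 2 groups

print(labels)
```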

  9. Measurement of Loneliness Among Clients Representing Four Stages of Cancer: An Exploratory Study.

    Science.gov (United States)

    1985-03-01

    ...status, and membership in organizations for each client were entered into an SPSS program on a mainframe computer. The means and a one-way analysis of... The remaining fragments are report-form fields (author: Suanne Smith) and contents entries: Definitions of Terms; II. Methodology; Overview of Design

  10. The 1996 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1996-01-01

    The codes are named 'the Pre-processing' codes because they are designed to pre-process ENDF/B data for later, further processing for use in applications. This is a modular set of computer codes, each of which reads and writes evaluated nuclear data in the ENDF/B format. Each code performs one or more independent operations on the data, as described below. These codes are designed to be computer independent and are presently operational on every type of computer from large mainframe computers to small personal computers, such as the IBM-PC and Power MAC. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  11. Putting all that (HEP-) data to work - a REAL implementation of an unlimited computing and storage architecture

    International Nuclear Information System (INIS)

    Ernst, Michael

    1996-01-01

    Since computing in HEP left the mainframe path, many institutions have demonstrated a successful migration to workstation-based computing, especially for applications requiring a high CPU-to-I/O ratio. However, the difficulties and the complexity start beyond just providing CPU cycles. Critical applications, requiring either sequential access to large amounts of data or to many small sets out of a multi-10-Terabyte data repository, need technical approaches we have not had so far. Though we felt that we were hardly able to follow the technology evolving in the various fields, we recently had to realize that even politics overtook technical evolution, at least in the areas mentioned above. The USA is making peace with Russia. DEC is talking to IBM, SGI communicating with HP. All these things became true, and though, unfortunately, the Cold War lasted 50 years and, in a relative sense, we were afraid that 50 years seemed to be how long any self-respecting high performance computer (or a set of workstations) had to wait for data from its server, fortunately we are now facing a similar progress of friendliness, harmony and balance in the formerly problematic (computing) areas. Buzzwords mentioned many thousands of times in talks describing today's and future requirements, including functionality, reliability, scalability, modularity and portability, are not just phrases, wishes and dreams any longer. At DESY, we are in the process of demonstrating an architecture that takes those five issues equally into consideration, including heterogeneous computing platforms with ultimate file system approaches, heterogeneous mass storage devices and an open distributed hierarchical mass storage management system. This contribution will provide an overview of how far we got and what the next steps will be. (author)

  12. Development of interactive software for fuel management analysis

    International Nuclear Information System (INIS)

    Graves, H.W. Jr.

    1986-01-01

    Electronic computation plays a central part in engineering analysis of all types. Utilization of microcomputers for calculations that were formerly carried out on large mainframe computers presents a unique opportunity to develop software that not only takes advantage of the lower cost of using these machines, but also increases the efficiency of the engineers performing these calculations. This paper reviews the use of electronic computers in engineering analysis, discusses the potential for microcomputer utilization in this area, and describes a series of steps to be followed in software development that can yield significant gains in engineering design efficiency

  13. Plasma science and technology: 50 years of progress

    International Nuclear Information System (INIS)

    Cecchi, J.L.; Cecchi, L.M.

    2003-01-01

    During the first 50 years of the American Vacuum Society (1953-2003), technology has advanced greatly. But the most impressive and conspicuous advance has been in computational power, due to the exponential increase in computing power at an exponentially decreasing cost per function. Many of our readers will remember the 'Macro-Mechanical System' called a slide rule that was symbolic of engineers and scientists in the 1950s and 1960s. From the slide rule of the past to today's desktop and notebook computers, possessing speed, storage, and power capability that dwarf the mainframes of the recent past, what a transformation!

  14. The VPI program package adapted to microcomputer for in-core fuel-management

    International Nuclear Information System (INIS)

    Sumitra, T.; Bhongsuwan, T.

    1988-01-01

    A neutron shielding analysis and design program was developed for microcomputers by modifying the SABINE-3 shielding code, which was written for mainframe computers. The program is based on the removal-diffusion method and can be used to calculate shielding for nuclear reactors and neutron sources. The accuracy of the program was tested by determining the neutron and gamma dose rates for a test case with a Cf-252 source. The results were nearly identical with those obtained from the original SABINE-3 computed on a PRIME 9750 super minicomputer. Computing time was about 65 minutes

  15. Activity report 1988-1989

    International Nuclear Information System (INIS)

    Dagnegaard, E.; Johansson, Rolf.

    1989-12-01

    This report covers the activities of the department during the period 1 July 1988 - 30 June 1989, which is the academic year 1988/89. Research has continued in established areas such as adaptive control, computer aided control engineering, robotics, and information technology. The program package Simnon, that has been developed at the Department, is now available for mainframes, IBM-PC compatibles, and Sun workstations

  16. SEAPATH: A microcomputer code for evaluating physical security effectiveness using adversary sequence diagrams

    International Nuclear Information System (INIS)

    Darby, J.L.

    1986-01-01

    The Adversary Sequence Diagram (ASD) concept was developed by Sandia National Laboratories (SNL) to examine physical security system effectiveness. Sandia also developed a mainframe computer code, PANL, to analyze ASDs. The authors have developed a microcomputer code, SEAPATH, which also analyzes ASDs. The authors are supporting SNL in software development of the SAVI code; SAVI utilizes the SEAPATH algorithm to identify and quantify paths
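
    The ASD structure and SEAPATH's actual algorithm are not given in this abstract, so the sketch below only illustrates the general idea of identifying and quantifying paths: enumerate routes through a small directed graph of protection layers and score each with a simple assumed metric, here the probability that the adversary is never detected. The graph, detection probabilities, and metric are all hypothetical.

```python
# Hypothetical adversary sequence graph: node -> list of (next_node, P_detect on that step).
GRAPH = {
    "offsite":  [("fence", 0.3), ("gate", 0.6)],
    "fence":    [("building", 0.5)],
    "gate":     [("building", 0.4)],
    "building": [("target", 0.7)],
    "target":   [],
}

def paths(node, so_far=None):
    """Depth-first enumeration of all paths from `node` to the target,
    each paired with its cumulative probability of non-detection."""
    so_far = (so_far or []) + [node]
    if node == "target":
        yield so_far, 1.0
        return
    for nxt, p_detect in GRAPH[node]:
        for path, p_undetected in paths(nxt, so_far):
            yield path, (1.0 - p_detect) * p_undetected

if __name__ == "__main__":
    # The "weakest" path is the one the adversary is most likely to traverse undetected.
    for path, p in sorted(paths("offsite"), key=lambda x: -x[1]):
        print(" -> ".join(path), f"P(undetected) = {p:.3f}")
```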

  17. Information resources assessment of a healthcare integrated delivery system.

    Science.gov (United States)

    Gadd, C. S.; Friedman, C. P.; Douglas, G.; Miller, D. J.

    1999-01-01

    While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations. PMID:10566414

  18. Information resources assessment of a healthcare integrated delivery system.

    Science.gov (United States)

    Gadd, C S; Friedman, C P; Douglas, G; Miller, D J

    1999-01-01

    While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations.

  19. Towards Reengineering the United States Department of Defense: A Financial Case for a Functionally-Aligned, Unified Military Structure

    Science.gov (United States)

    2014-03-01

    market share, especially in mainframe computing, IBM was not immune to hardship. In the early 1990s, IBM began losing market shares to upstart...healthy and beneficial for the wellbeing of the US Military. The principles of free market economics dictate that competition between firms provides...a better product or service at a better price over monopolistic or socialistic settings. The argument could be made of competition between the

  20. HVAC optimization as facility requirements change with corporate restructuring

    Energy Technology Data Exchange (ETDEWEB)

    Rodak, R.R.; Sankey, M.S.

    1997-06-01

    The hyper-competitive, dynamic 1990s forced many corporations to "right-size," relocating resources and equipment, and even consolidating. These changes led to utility reductions when HVAC optimization was thoroughly addressed and energy conservation opportunities were identified and properly designed. This is particularly true when the facility's heating and cooling systems are matched to correspond with the load changes attributed to the reduction of staff and computers. Computers have been downsized and processing power per unit of energy input increased; thus, the need for large mainframe computer centers, and their associated high-intensity energy usage, has been reduced or eliminated. Cooling, therefore, also has been reduced.

  1. A real time multi-server multi-client coherent database for a new high voltage system

    International Nuclear Information System (INIS)

    Gorbics, M.; Green, M.

    1995-01-01

    A high voltage system has been designed to allow multiple users (clients) access to the database of measured values and settings. This database is actively maintained in real time for a given mainframe containing multiple modules, each having its own database. With limited CPU and memory resources, the mainframe system provides a data coherency scheme for multiple clients which (1) allows a client to determine when and what values need to be updated, (2) allows changes made by one client to be detected by another client, and (3) does not depend on the mainframe system tracking client accesses
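
    The coherency behaviour described, letting each client ask what has changed since it last looked and see other clients' settings, is commonly implemented with per-value sequence numbers. The sketch below is a generic, hypothetical illustration of that idea in Python, not the actual high voltage mainframe protocol.

```python
class CoherentStore:
    """Tiny versioned key-value store: every write bumps a global sequence
    number, so a client that remembers the last sequence it saw can fetch
    only the values that changed since then."""

    def __init__(self):
        self._seq = 0
        self._data = {}           # key -> (sequence, value)

    def write(self, key, value):
        self._seq += 1
        self._data[key] = (self._seq, value)
        return self._seq

    def changes_since(self, last_seen_seq):
        """Return {key: value} for everything written after last_seen_seq,
        plus the newest sequence number for the client to remember."""
        changed = {k: v for k, (s, v) in self._data.items() if s > last_seen_seq}
        return changed, self._seq

if __name__ == "__main__":
    store = CoherentStore()
    store.write("channel_01/setpoint", 1500.0)   # client A sets a voltage
    seen = store.changes_since(0)[1]             # client B syncs
    store.write("channel_01/measured", 1498.7)   # a new measurement arrives
    print(store.changes_since(seen))             # client B sees only the new value
```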

  2. Development of a package program for estimating ground level concentrations of radioactive gases

    International Nuclear Information System (INIS)

    Nilkamhang, W.

    1986-01-01

    A package program for estimating the ground-level concentration of radioactive gas from an elevated release was developed for use on an IBM PC microcomputer. The main program, GAMMA PLUME NT10, is based on the well-known VALLEY model, a Fortran computer code intended for mainframe computers. Two other options were added, namely calculation of the radioactive gas ground-level concentration in Ci/m3 and of the dose equivalent rate in mrem/hr. In addition, a menu program and an editor program were developed to render the program easier to use, since options can be readily selected and the input data easily modified as required through the keyboard. The accuracy and reliability of the program are almost identical to those of the mainframe version. The ground-level concentration of radioactive radon gas due to ore processing in the nuclear chemistry laboratory of the Department of Nuclear Technology was estimated. In processing radioactive ore at a rate of 2 kg/day, about 35 pCi/s of radioactive gas was released from a 14 m stack. When meteorological data for Don Muang (averaged over the 5 years 1978-1982) were used, the maximum ground-level concentration and dose equivalent rate were found to be 0.00094 pCi/m3 and 5.0 x 10^-10 mrem/hr, respectively. The processing time required for the above problem was about 7 minutes for any source case on the IBM PC, which is acceptable for a computer of this class
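
    The VALLEY code's full terrain treatment is not reproduced here, but the basic ground-level concentration calculation for an elevated release is the standard Gaussian plume expression, sketched below with hypothetical dispersion coefficients and wind speed; it illustrates the type of calculation such codes perform, not the GAMMA PLUME NT10 code itself.

```python
import math

def ground_level_concentration(q, u, sigma_y, sigma_z, stack_height, y=0.0):
    """Standard Gaussian plume ground-level concentration (with ground
    reflection) for an elevated point source.
    q: emission rate (e.g. pCi/s), u: wind speed (m/s),
    sigma_y, sigma_z: dispersion coefficients (m) at the downwind distance,
    stack_height: effective release height (m), y: crosswind offset (m)."""
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2.0 * sigma_y**2))
            * math.exp(-stack_height**2 / (2.0 * sigma_z**2)))

if __name__ == "__main__":
    # Hypothetical numbers: 35 pCi/s released from a 14 m stack, with
    # dispersion coefficients assumed for some downwind distance.
    chi = ground_level_concentration(q=35.0, u=2.0, sigma_y=60.0, sigma_z=30.0, stack_height=14.0)
    print(f"{chi:.2e} pCi/m^3")
```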

  3. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    International Nuclear Information System (INIS)

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests

  4. ORIGNATE: PC input processor for ORIGEN-S

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1992-01-01

    ORIGNATE is a personal computer program that serves as a user-friendly interface for the ORIGEN-S isotopic generation and depletion code. It is designed to assist an ORIGEN-S user in preparing an input file for execution of light-water-reactor fuel depletion and decay cases. Output from ORIGNATE is a card-image input file that may be uploaded to a mainframe computer to execute ORIGEN-S in SCALE-4. ORIGNATE features a pulldown menu system that accesses sophisticated data entry screens. The program allows the user to quickly set up an ORIGEN-S input file and perform error checking

  5. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard, M.A.; Sommer, S.C. [Lawrence Livermore National Lab., CA (United States)

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  6. OFFSCALE: PC input processor for SCALE-4 criticality sequences

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1991-01-01

    OFFSCALE is a personal computer program that serves as a user-friendly interface for the Criticality Safety Analysis Sequences (CSAS) available in SCALE-4. It is designed to assist a SCALE-4 user in preparing an input file for execution of criticality safety problems. Output from OFFSCALE is a card-image input file that may be uploaded to a mainframe computer to execute the CSAS4 control module in SCALE-4. OFFSCALE features a pulldown menu system that accesses sophisticated data entry screens. The program allows the user to quickly set up a CSAS4 input file and perform data checking

  7. CICS Region Virtualization for Cost Effective Application Development

    Science.gov (United States)

    Khan, Kamal Waris

    2012-01-01

    Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…

  8. Advertising assessment -- myth or reality?

    OpenAIRE

    C D Beaumont; K Geary; C Halliburton; D Clifford; R Rivers

    1989-01-01

    In this paper the topic of advertising assessment is revisited, given the widespread availability of low-cost microcomputer modelling developments. It is recognised that when regression analysis became popular in the 1970s with the advent of the mainframe computer, much hype and little marketing benefit ensued. It is argued that simply speeding up the old practices of the 1970s, which rightly fell from favour, will provide no benefit to the advertising industry. 'What is new', the 'benefits' ...

  9. IMEC pushes the limits of CMOS

    OpenAIRE

    George Marsh

    2002-01-01

    An evolution that started with thermionic valve-based mainframe computers and can now put multiples of their processing power into a modern lap-top is today proceeding towards ambient intelligence, in which ever more compact processing will be all around us and, quite literally, part of the furniture. Convergence of information and communications technology (ICT), today exemplified by the conjunction of GSM (global system for mobile) telephony with ‘palm’ PC power, will go much further as mic...

  10. Carolina Power and Light Company's computerized Radiological Information Management System

    International Nuclear Information System (INIS)

    Meyer, B.A.

    1987-01-01

    Carolina Power and Light Company has recently implemented a new version of its computerized Radiological Information Management System. The new version was programmed in-house and is run on the Company's mainframe computers. In addition to providing radiation worker dose histories and current dose updates, the system provides real-time access control for all three of the Company's nuclear plants, respirator and survey equipment control and inventory, TLD QC records, and many other functions

  11. A reconfigurable strategy for distributed digital process control

    International Nuclear Information System (INIS)

    Garcia, H.E.; Ray, A.; Edwards, R.M.

    1990-01-01

    A reconfigurable control scheme is proposed which, unlike a preprogrammed one, uses stochastic automata to learn the current operating status of the environment (i.e., the plant, controller, and communication network) by dynamically monitoring the system performance and then switching to the appropriate controller on the basis of these observations. The potential applicability of this reconfigurable control scheme to electric power plants is being investigated. The plant under consideration is the Experimental Breeder Reactor (EBR-II) at the Argonne National Laboratory site in Idaho. The distributed control system is emulated on a ring network where the individual subsystems are hosted as follows: (1) the reconfigurable control modules are located in one of the network modules called Multifunction Controller; (2) the learning modules are resident in a VAX 11/785 mainframe computer; and (3) a detailed model of the plant under control is executed in the same mainframe. This configuration is a true representation of the network-based control system in the sense that it operates in real time and is capable of interacting with the actual plant
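
    The stochastic automaton in the abstract learns which controller to use from observed performance. A minimal sketch of one classical scheme, the linear reward-inaction update, is given below; the reward model and numbers are invented for illustration, and this is not the EBR-II implementation.

```python
import random

def linear_reward_inaction(n_actions, reward_fn, steps=2000, a=0.05):
    """L_R-I learning automaton: after a rewarded action its selection
    probability is increased and the others are scaled down; on a penalty
    the probabilities are left unchanged (the 'inaction')."""
    p = [1.0 / n_actions] * n_actions
    for _ in range(steps):
        i = random.choices(range(n_actions), weights=p)[0]   # pick a controller
        if reward_fn(i):                                     # environment reports good performance
            p = [pj + a * (1.0 - pj) if j == i else pj * (1.0 - a)
                 for j, pj in enumerate(p)]
    return p

if __name__ == "__main__":
    # Hypothetical environment: controller 2 performs well 80% of the time,
    # the others only 40%; the automaton should come to prefer controller 2.
    reward = lambda i: random.random() < (0.8 if i == 2 else 0.4)
    print(linear_reward_inaction(3, reward))
```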

  12. KWU Nuclear Plant Analyzer

    International Nuclear Information System (INIS)

    Bennewitz, F.; Hummel, R.; Oelmann, K.

    1986-01-01

    The KWU Nuclear Plant Analyzer is a real-time engineering simulator based on the KWU computer programs used in plant transient analysis and licensing. The primary goal is to promote the understanding of the technical and physical processes of a nuclear power plant at an on-site training facility. Thus the KWU Nuclear Plant Analyzer is available at comparably low cost right at the time when technical questions or training needs arise. This has been achieved by (1) application of the transient code NLOOP; (2) unrestricted operator interaction including all simulator functions; (3) using the mainframe computer Control Data Cyber 176 in the KWU computing center; (4) four color graphic displays controlled by a dedicated graphic computer, no control room equipment; and (5) coupling of computers by telecommunication via telephone.

  13. Interface design of VSOP'94 computer code for safety analysis

    International Nuclear Information System (INIS)

    Natsir, Khairina; Andiwijayakusuma, D.; Wahanani, Nursinta Adi; Yazid, Putranto Ilham

    2014-01-01

    Today, most software applications, also in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system for simulating the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate neutron spectra, fuel cycles, 2-D diffusion, resonance integrals, reactor fuel cost estimates, and integrated thermal hydraulics. VSOP can also be used for comparative studies and reactor safety simulations. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially for data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and presents the results in a more user-friendly way on a personal computer (PC). Modifications include the development of preprocessing, processing and postprocessing interfaces. The GUI-based preprocessing interface provides a convenient way to prepare data. The processing interface helps configure input files and libraries and compiles the VSOP code. The postprocessing interface visualizes the VSOP output in tables and graphs. GUI-VSOP is expected to simplify and speed up the analysis of safety aspects.
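
    The general pattern behind such an interface (write an input deck from user-supplied parameters, run the legacy solver as a subprocess, and parse its text output for display) can be sketched as follows; the executable name, input keywords and output layout used here are hypothetical placeholders, not the actual VSOP'94 formats.

        import subprocess
        from pathlib import Path

        # Illustrative only: the executable name, input keywords and output
        # layout below are hypothetical, not the real VSOP'94 formats.

        def write_input_deck(path, params):
            # Preprocessing: turn a parameter dictionary into a text input deck.
            lines = [f"{key.upper()} {value}" for key, value in params.items()]
            Path(path).write_text("\n".join(lines) + "\n")

        def run_solver(executable, input_path, output_path):
            # Processing: invoke the legacy text-based code and capture its output.
            with open(input_path) as fin, open(output_path, "w") as fout:
                subprocess.run([executable], stdin=fin, stdout=fout, check=True)

        def extract_keff(output_path):
            # Postprocessing: pull a single quantity out of the text output
            # so the interface can tabulate or plot it.
            for line in Path(output_path).read_text().splitlines():
                if line.startswith("K-EFF"):
                    return float(line.split()[-1])
            return None

        params = {"power_mw": 10.0, "cycles": 3, "enrichment": 0.17}
        write_input_deck("case.inp", params)
        # run_solver("./vsop94", "case.inp", "case.out")   # requires the real code
        # print("k-eff =", extract_keff("case.out"))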

  14. Interface design of VSOP'94 computer code for safety analysis

    Science.gov (United States)

    Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi

    2014-09-01

    Today, most software applications, also in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system for simulating the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate neutron spectra, fuel cycles, 2-D diffusion, resonance integrals, reactor fuel cost estimates, and integrated thermal hydraulics. VSOP can also be used for comparative studies and reactor safety simulations. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially for data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and presents the results in a more user-friendly way on a personal computer (PC). Modifications include the development of preprocessing, processing and postprocessing interfaces. The GUI-based preprocessing interface provides a convenient way to prepare data. The processing interface helps configure input files and libraries and compiles the VSOP code. The postprocessing interface visualizes the VSOP output in tables and graphs. GUI-VSOP is expected to simplify and speed up the analysis of safety aspects.

  15. Industry X.0 : Reimaging industrial development

    CSIR Research Space (South Africa)

    Zachar, H

    2017-10-01

    Full Text Available [Slide content: a technology timeline from 1950 to 2020 and beyond (mainframe; client-server and PCs; Web 1.0 e-commerce; Web 2.0, cloud, mobile; big data, analytics, visualization; IoT and smart machines; artificial intelligence; quantum computing) and a call for a new set of core capabilities to succeed in Industry X.0, moving beyond Industry 4.0 efficiencies toward hyper-personalized experiences and new sources of growth built on social media, cloud, analytics and mobility.]

  16. Industry X.0 : Reimaging industrial development.

    CSIR Research Space (South Africa)

    Zachar, H

    2017-10-01

    Full Text Available [Slide content: a technology timeline from 1950 to 2020 and beyond (mainframe; client-server and PCs; Web 1.0 e-commerce; Web 2.0, cloud, mobile; big data, analytics, visualization; IoT and smart machines; artificial intelligence; quantum computing) and a call for a new set of core capabilities to succeed in Industry X.0, moving beyond Industry 4.0 efficiencies toward hyper-personalized experiences and new sources of growth built on social media, cloud, analytics and mobility.]

  17. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
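
    A minimal sketch of the kind of scaling measurement such synthetic kernels perform is shown below: each worker sweeps its own large buffer, and aggregate throughput is compared as the worker count grows; markedly sub-linear scaling points at contention for the shared memory subsystem. This only illustrates the measurement pattern; the buffer size is arbitrary, and a real study would use a compiled, STREAM-like kernel rather than a partly interpreter-bound Python loop.

        import multiprocessing as mp
        import time

        # Toy scaling experiment: each worker makes a full pass over its own
        # private buffer; aggregate throughput is reported for 1, 2 and 4 workers.

        BUF_BYTES = 32 * 1024 * 1024        # 32 MB per worker (arbitrary)

        def sweep(_):
            buf = bytes(BUF_BYTES)          # zero-filled buffer
            return sum(buf)                 # forces a full pass over the buffer

        if __name__ == "__main__":
            for workers in (1, 2, 4):
                t0 = time.perf_counter()
                with mp.Pool(workers) as pool:
                    pool.map(sweep, range(workers))
                dt = time.perf_counter() - t0
                mb = workers * BUF_BYTES / 1e6
                print(f"{workers} worker(s): {mb / dt:.0f} MB/s aggregate")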

  18. Monte Carlo code criticality benchmark comparisons for waste packaging

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on an HP 9000 workstation. COG has been recently ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting K-effective of nuclear fuel storage in close-packed, neutron poisoned arrays. Low enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results of these criticality benchmark experiments are presented.
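
    The flavour of such point-wise Monte Carlo transport can be conveyed by a toy problem much simpler than a criticality benchmark: sampling free-flight lengths to estimate the probability that a particle crosses a homogeneous slab without interacting, which has the analytic answer exp(-sigma_t * t). The cross section, thickness and history count below are illustrative only and have nothing to do with COG's data.

        import math
        import random

        # Toy analogue Monte Carlo: estimate the uncollided transmission through
        # a homogeneous slab and compare with the analytic exponential answer.

        sigma_t, thickness, histories = 0.5, 4.0, 100_000   # 1/cm, cm, particles

        transmitted = 0
        for _ in range(histories):
            # Sample a free-flight length from the exponential distribution.
            x = -math.log(1.0 - random.random()) / sigma_t
            if x > thickness:
                transmitted += 1

        mc = transmitted / histories
        print(f"Monte Carlo {mc:.4f}  vs  analytic {math.exp(-sigma_t * thickness):.4f}")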

  19. Assessment of the information content of patterns: an algorithm

    Science.gov (United States)

    Daemi, M. Farhang; Beurle, R. L.

    1991-12-01

    A preliminary investigation confirmed the possibility of assessing the translational and rotational information content of simple artificial images. The calculation is tedious, and for more realistic patterns it is essential to implement the method on a computer. This paper describes an algorithm developed for this purpose which confirms the results of the preliminary investigation. Use of the algorithm facilitates much more comprehensive analysis of the combined effect of continuous rotation and fine translation, and paves the way for analysis of more realistic patterns. Owing to the volume of calculation involved in these algorithms, extensive computing facilities were necessary. The major part of the work was carried out using an ICL 3900 series mainframe computer as well as other powerful workstations such as a RISC architecture MIPS machine.

  20. Interface design of VSOP'94 computer code for safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Natsir, Khairina, E-mail: yenny@batan.go.id; Andiwijayakusuma, D.; Wahanani, Nursinta Adi [Center for Development of Nuclear Informatics - National Nuclear Energy Agency, PUSPIPTEK, Serpong, Tangerang, Banten (Indonesia); Yazid, Putranto Ilham [Center for Nuclear Technology, Material and Radiometry- National Nuclear Energy Agency, Jl. Tamansari No.71, Bandung 40132 (Indonesia)

    2014-09-30

    Today, most software applications, also in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program), was designed to simplify the process of performing reactor simulation. VSOP is a integrated code system to simulate the life history of a nuclear reactor that is devoted in education and research. One advantage of VSOP program is its ability to calculate the neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integral, estimation of reactors fuel costs, and integrated thermal hydraulics. VSOP also can be used to comparative studies and simulation of reactor safety. However, existing VSOP is a conventional program, which was developed using Fortran 65 and have several problems in using it, for example, it is only operated on Dec Alpha mainframe platforms and provide text-based output, difficult to use, especially in data preparation and interpretation of results. We develop a GUI-VSOP, which is an interface program to facilitate the preparation of data, run the VSOP code and read the results in a more user friendly way and useable on the Personal 'Computer (PC). Modifications include the development of interfaces on preprocessing, processing and postprocessing. GUI-based interface for preprocessing aims to provide a convenience way in preparing data. Processing interface is intended to provide convenience in configuring input files and libraries and do compiling VSOP code. Postprocessing interface designed to visualized the VSOP output in table and graphic forms. GUI-VSOP expected to be useful to simplify and speed up the process and analysis of safety aspects.

  1. Recent developments in the JT-60 data processing system

    International Nuclear Information System (INIS)

    Matsuda, T.; Saitoh, N.; Tsugita, T.; Oshima, T.; Sakata, S.; Sato, M.; Koiwa, M.; Watanabe, K.

    1999-01-01

    The JT-60 data processing system was originally a large computer complex system including a lot of micro-computers, several mini-computers, and a mainframe computer. Recently, several improvements have been made to the original system to modernize the system. Many sub-systems composed of aged mini-computers have been replaced with workstations by utilizing recent progress in computer and network technologies. The system can handle ≈300 MB of raw data per discharge, which is three times larger than the amount in the original system. These improvements have been applied to develop element technologies necessary to the remote participation in JT-60 experiments. A remote diagnostics control and monitoring system and a computer system for access to JT-60 data from the Internet are used together with video conferencing systems for a real-time communication. In 1996, the remote participation based on them was successfully demonstrated in collaboration with Japan Atomic Energy Research Institute, Los Alamos National Laboratory, and Princeton Plasma Physics Laboratory. (orig.)

  2. The plant design analyser and its applications

    International Nuclear Information System (INIS)

    Whitmarsh-Everiss, M.J.

    1992-01-01

    Consideration is given to the history of computational methods for the non-linear dynamic analysis of plant behaviour. This is traced from analogue to hybrid computers. When these were phased out simulation languages were used in the batch mode and the interactive computational capabilities were lost. These have subsequently been recovered using mainframe computing architecture in the context of small models using the Prototype Plant Design Analyser. Given the development of parallel processing architectures, the restriction on model size can be lifted. This capability and the use of advanced Work Stations and graphics software has enabled an advanced interactive design environment to be developed. This system is generic and can be used, with suitable graphics development, to study the dynamics and control behaviour of any plant or system for minimum cost. Examples of past and possible future uses are identified. (author)

  3. The computerised accountancy system (MYDAS) for irradiated components in RNL's Mayfair Laboratory at Culcheth

    International Nuclear Information System (INIS)

    Stansfield, R.G.; Baker, A.R.

    1985-09-01

    The computerised Mayfair Accountancy System (MYDAS) has been developed to account for irradiated components in the Mayfair Laboratory at Culcheth and supersedes a card-index system. The computerised system greatly improves the availability of the data held and it ensures, by means of extensive data validation programs, that the data accurately represent the current inventory of irradiated components in the Laboratory. The system has been implemented on the Risley ICL 2966 main-frame computer and uses an IDMS database to store the data. The computer is accessed through the facilities of the Transaction Processing Management System (TPMS) providing rapid and secure access to the database from several visual display units and printers simultaneously. (author)

  4. The mesoscale dispersion modeling system a simulation tool for development of an emergency response system

    International Nuclear Information System (INIS)

    Uliasz, M.

    1990-01-01

    The mesoscale dispersion modeling system is under continuous development. The included numerical models require further improvements and evaluation against data from meteorological and tracer field experiments. The system cannot be directly applied to real-time predictions. However, it seems to be a useful simulation tool for solving several problems related to planning the monitoring network and development of the emergency response system for the nuclear power plant located in a coastal area. The modeling system can also be applied to other environmental problems connected with air pollution dispersion in complex terrain. The presented numerical models are designed for use on personal computers and are relatively fast in comparison with similar mesoscale models developed on mainframe computers.

  5. Status report

    International Nuclear Information System (INIS)

    Parsons, D.K.; Nigg, D.W.; Yoon, W.Y.

    1987-01-01

    This paper reports that, as part of a project to develop a package of reactor physics codes for personal computers (PCs), the Idaho National Engineering Laboratory (INEL) is developing microcomputer versions of two reactor shielding codes that previously were available for mainframe computers only: QAD-CG and ANISN. QAD-CG is a point kernel code for gamma ray shielding calculations that is similar to MICROSHIELD. ANISN is a well-known one-dimensional discrete ordinates transport theory code for reactor design, criticality, and shielding. Of the two, QAD-CG is most frequently used for gamma shielding calculations, while ANISN is better suited for calculations involving neutrons and/or gammas when scattering needs to be treated more accurately.
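
    For a single point isotropic source behind a slab shield, the point-kernel idea that QAD-type codes build on reduces to phi = B * S * exp(-mu * t) / (4 * pi * r^2). The sketch below uses illustrative source strength, attenuation coefficient and buildup factor, not data from QAD-CG.

        import math

        # Point-kernel estimate of gamma flux behind a shield from an isotropic
        # point source: phi = B * S * exp(-mu * t) / (4 * pi * r**2).

        def point_kernel_flux(source_s, mu, shield_t, r, buildup=None):
            """Flux (photons/cm^2/s) at distance r (cm), behind a slab of
            thickness shield_t (cm) with attenuation coefficient mu (1/cm)."""
            mfp = mu * shield_t                               # shield thickness in mean free paths
            b = buildup if buildup is not None else 1.0 + mfp  # crude linear buildup factor
            return b * source_s * math.exp(-mfp) / (4.0 * math.pi * r**2)

        # Example: 1e10 photons/s source, 10 cm of concrete (mu ~ 0.15 /cm near 1 MeV),
        # detector 100 cm from the source.  All numbers are illustrative.
        print(f"{point_kernel_flux(1e10, 0.15, 10.0, 100.0):.3e} photons/cm^2/s")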

  6. An expert system technology for work authorization information systems

    International Nuclear Information System (INIS)

    Munchausen, J.H.; Glazer, K.A.

    1988-01-01

    This paper describes the effort by Southern California Edison Company (SCE) and the Electric Power Research Institute (EPRI) to develop an expert systems work station designed to support the San Onofre Nuclear Generating Station (SONGS). The expert systems work station utilizes IntelliCorp KEE (Knowledge Engineering Environment) and EPRI-IntelliCorp PLEXSYS (PLant EXpert SYStem) technology, and SCE Piping and Instrumentation Diagrams (P and ID's) and host-based computer applications to assist plant operations and maintenance personnel in the development of safety tagout boundaries. Of significance in this venture is the merging of conventional computer applications technology with expert systems technology. The EPRI PLEXSYS work station will act as a front-end for the SONGS Tagout Administration and Generation System (TAGS), a conventional CICS/COBOL mainframe computer application

  7. A new radiation exposure record system

    International Nuclear Information System (INIS)

    Lyon, M.; Berndt, V.L.; Trevino, G.W.; Oakley, B.M.

    1993-04-01

    The Hanford Radiological Records Program (HRRP) serves all Hanford contractors as the single repository for radiological exposure for all Hanford employees, subcontractors, and visitors. The program administers and preserves all Hanford radiation exposure records. The program also maintains a Radiation Protection Historical File which is a historical file of Hanford radiation protection and dosimetry procedures and practices. Several years ago DOE declared the existing UNIVAC mainframe computer obsolete and the existing Occupational Radiation Exposure (ORE) system was slated to be redeveloped. The new system named the Radiological Exposure (REX) System is described in this document

  8. Using C-Kermit

    CERN Document Server

    da Cruz, Frank

    2014-01-01

    An introduction and tutorial as well as a comprehensive reference, Using C-Kermit describes the new release, 5A, of Columbia University's popular C-Kermit communication software - the most portable of all communication software packages. Available at low cost on a variety of magnetic media from Columbia University, C-Kermit can be used on computers of all sizes - ranging from desktop workstations to minicomputers to mainframes and supercomputers. The numerous examples, illustrations, and tables in Using C-Kermit make the powerful and versatile C-Kermit functions accessible for new and experienced

  9. The building blocks to the architecture of a cloud platform

    CSIR Research Space (South Africa)

    Mvelase, P

    2015-07-01

    Full Text Available Service (SaaS). The datacenter hardware and software is what is called the Cloud. When a Cloud is made available in a pay-as-you-go manner to the public, it is called a Public Cloud; the service being sold is Utility Computing. The term Private Cloud... to develop and deliver Software as a Service (SaaS). Examples of such frameworks include the following: • Different hardware architectures with different server sizes— from small, Intel-based servers to mid- or top-range servers and mainframes...

  10. Geographic information systems for the Chernobyl decision makers in Ukraine

    International Nuclear Information System (INIS)

    Palko, S.; Glieca, M.; Dombrowski, A.

    1997-01-01

    Following numerous national and international studies conducted on the overall impact of the 1986 Chernobyl nuclear power plant disaster, decision-makers of the affected countries have oriented their efforts on environmental clean-up and population safety. They have focused on activities leading to a better understanding of radionuclide contamination and to the development of effective environmental rehabilitation programs. Initial developments involved the use of domestic USSR technologies consisting of mainframe IBM computers and DEC minicomputers. Later, personal computers with imported software packages were introduced into the decision-making process. Following the breakup of the former USSR, the Ministry of Chernobyl was created in Ukraine in 1991. One of the Ministry's mandates was the elimination of the environmental after-effects of the Chernobyl disaster.

  11. Originate: PC input processor for origen-S

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1994-01-01

    ORIGINATE is a personal computer program developed at Oak Ridge National Laboratory to serve as a user-friendly interface for the ORIGEN-S isotopic generation and depletion code. It is designed to assist an ORIGEN-S user in preparing an input file for execution of light-water-reactor fuel depletion and decay cases. Output from ORIGINATE is a card-image input file that may be uploaded to a mainframe computer to execute ORIGEN-S in SCALE-4. ORIGINATE features a pull-down menu system that accesses sophisticated data entry screens. The program allows the user to quickly set up an ORIGEN-S input file and perform error checking. This capability increases productivity and decreases the chance of user error. (authors). 6 refs., 3 tabs

  12. Workstations take over conceptual design

    Science.gov (United States)

    Kidwell, George H.

    1987-01-01

    Workstations provide sufficient computing memory and speed for early evaluations of aircraft design alternatives to identify those worthy of further study. It is recommended that the programming of such machines permit integrated calculations of the configuration and performance analysis of new concepts, along with the capability of changing up to 100 variables at a time and swiftly viewing the results. Computations can be augmented through links to mainframes and supercomputers. Programming, particularly debugging operations, are enhanced by the capability of working with one program line at a time and having available on-screen error indices. Workstation networks permit on-line communication among users and with persons and computers outside the facility. Application of the capabilities is illustrated through a description of NASA-Ames design efforts for an oblique wing for a jet performed on a MicroVAX network.

  13. Linguistics and the digital humanities

    DEFF Research Database (Denmark)

    Jensen, Kim Ebensgaard

    2014-01-01

    Corpus linguistics has been closely intertwined with digital technology since the introduction of university computer mainframes in the 1960s. Making use of both digitized data in the form of the language corpus and computational methods of analysis involving concordancers and statistics software, corpus linguistics arguably has a place in the digital humanities. Still, it remains obscure and figures only sporadically in the literature on the digital humanities. This article provides an overview of the main principles of corpus linguistics and the role of computer technology in relation to data and method and also offers a bird's-eye view of the history of corpus linguistics with a focus on its intimate relationship with digital technology and how digital technology has impacted the very core of corpus linguistics and shaped the identity of the corpus linguist. Ultimately, the article is oriented...
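
    For readers unfamiliar with the concordancer mentioned above, a keyword-in-context (KWIC) listing and a simple frequency count are easy to sketch; the two-sentence "corpus" below is only a stand-in for a real digitized corpus.

        import re
        from collections import Counter

        # Minimal sketch of two workhorse corpus-linguistics tools: a
        # keyword-in-context (KWIC) concordancer and a word-frequency count.

        corpus = ("The mainframe arrived at the university in the 1960s. "
                  "Early corpus linguists ran their concordance programs on that mainframe.")

        tokens = re.findall(r"[a-z0-9']+", corpus.lower())

        def kwic(tokens, keyword, width=4):
            """Yield keyword-in-context lines with `width` tokens of context on each side."""
            for i, tok in enumerate(tokens):
                if tok == keyword:
                    left = " ".join(tokens[max(0, i - width):i])
                    right = " ".join(tokens[i + 1:i + 1 + width])
                    yield f"{left:>35} [{tok}] {right}"

        for line in kwic(tokens, "mainframe"):
            print(line)

        print(Counter(tokens).most_common(3))   # simple frequency profile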

  14. FIRINPC and FIRACPC graphics post-processor support user's guide and programmer's reference

    International Nuclear Information System (INIS)

    Hensel, E.

    1992-03-01

    FIRIN is a computer program used by DOE fire protection engineers to simulate hypothetical fire accidents in compartments at DOE facilities. The FIRIN code is typically used in conjunction with a ventilation system code such as FIRAC, which models the impact of the fire compartment upon the rest of the system. The code described here, FIRINPC, is a PC-based implementation of the full mainframe code FIRIN. In addition, FIRINPC contains graphics support for monitoring the progress of the simulation during execution and for reviewing the complete results of the simulation upon completion of the run. This document describes how to install, test, and subsequently use the code FIRINPC, and addresses differences in usage between the PC version of the code and its mainframe predecessor. The PC version contains all of the modeling capabilities of the earlier version, with additional graphics support. This user's guide is a supplement to the original FIRIN report published by the NRC. FIRAC is a computer program used by DOE fire protection engineers to simulate the transient response of a complete ventilation system to fire-induced transients. FIRAC has the ability to use the FIRIN code as the driving function or source term for the ventilation system response. The current version of FIRAC does not contain interactive graphics capabilities. A third program, called POST, is made available for reviewing the results of a previous FIRIN or FIRAC simulation, without having to recompute the numerical simulation. POST uses the output data files created by FIRINPC and FIRACPC to avoid recomputation.

  15. Migration of the UNIX Application for eFAST CANDU Nuclear Power Plant Analyzer

    International Nuclear Information System (INIS)

    Suh, Jae Seung; Sohn, Dae Seong; Kim, Sang Jae; Jeun, Gyoo Dong

    2006-01-01

    Since the mid 1980s, corporate data centers have been moving away from mainframes running dedicated operating systems to mini-computers, often using one or other of the myriad flavors of UNIX. At the same time, the users' experience of these systems has, in many cases, stayed the same, involving text-based interaction with dumb terminals or a terminal-emulation session on a Personal Computer. More recently, IT managers have questioned this approach, and have been looking at changes in the UNIX marketplace and the increasing expense of being tied in to single-vendor software and hardware solutions. The growth of Linux as a lightweight version of UNIX has fueled this interest, raising the number of organizations that are considering a migration to alternative platforms. The various implementations of the UNIX operating system have served industry well, as witnessed by the very large base both of installed systems and large-scale applications installed on those systems. However, there are increasing signs of dissatisfaction with expensive, often proprietary solutions and a growing sense that perhaps the concept of 'big iron' has had its day in the same way as it has for most of the mainframes of the type portrayed in 1970s science fiction films. One of the most extraordinary and unexpected successes of the Intel PC architecture is the extent to which this basic framework has been extended to encompass very large server and data center environments. Large-scale hosting companies are now offering enterprise level services to multiple client companies at availability levels of over 99.99 percent on what are simply racks of relatively cheap PCs. Technologies such as clustering, Network Load Balancing, and Component Load Balancing enable the personal computer to take on and match the levels of throughput, availability, and reliability of all but the most expensive 'big iron' solutions and the supercomputers

  16. TITUS: a general finite element system

    International Nuclear Information System (INIS)

    Bougrelle, P.

    1983-01-01

    TITUS is a general finite element structural analysis system which performs linear/non-linear, static/dynamic analyses of heat-transfer/thermo-mechanical problems. One of the major features of TITUS is that it was designed by engineers to address engineers in an industrial environment. This has resulted in an easy-to-use system, with a high-level, free-formatted, problem-oriented language, a large selection of pre- and post-processors, and sophisticated graphics capabilities. TITUS has many references in civil, mechanical and nuclear engineering applications. The TITUS system is available on various types of machines, from large mainframes to minicomputers.
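
    As a minimal sketch of the kind of calculation such a general-purpose finite element system performs, the following solves steady one-dimensional heat conduction with linear two-node elements and prescribed end temperatures; the geometry, conductivity and boundary values are illustrative, not TITUS data.

        import numpy as np

        # Steady 1-D heat conduction: assemble element conductance matrices into a
        # global matrix, apply fixed end temperatures, and solve for nodal values.

        n_elem, length, k = 10, 1.0, 50.0          # elements, bar length (m), W/mK
        t_left, t_right = 400.0, 300.0             # prescribed end temperatures (K)

        n_nodes = n_elem + 1
        h = length / n_elem
        K = np.zeros((n_nodes, n_nodes))

        # Element conductance matrix for a linear element: (k/h) * [[1, -1], [-1, 1]]
        ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        for e in range(n_elem):
            K[e:e + 2, e:e + 2] += ke              # assemble into the global matrix

        f = np.zeros(n_nodes)                      # no internal heat generation

        # Apply Dirichlet boundary conditions by elimination.
        free = np.arange(1, n_nodes - 1)
        f_free = f[free] - K[free, 0] * t_left - K[free, -1] * t_right
        T = np.empty(n_nodes)
        T[0], T[-1] = t_left, t_right
        T[free] = np.linalg.solve(K[np.ix_(free, free)], f_free)

        print(T)   # linear profile from 400 K to 300 K, matching the analytic solution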

  17. RELAP5/MOD2 implementation on various mainframes including the IBM and SX-2 supercomputer

    International Nuclear Information System (INIS)

    DeForest, D.L.; Hassan, Y.A.

    1987-01-01

    The RELAP5/MOD2 (cycle 36.04) code is a one-dimensional, two-fluid, nonequilibrium, nonhomogeneous transient analysis code designed to simulate operational and accident scenarios in pressurized water reactors (PWRs). System models are solved using a semi-implicit finite difference method. The code was developed at EG and G in Idaho Falls under sponsorship of the US Nuclear Regulatory Commission (NRC). The major enhancement from RELAP5/MOD1 is the use of a six-equation, two-fluid nonequilibrium and nonhomogeneous model. Other improvements include the addition of a noncondensible gas component and the revision and addition of drag formulation, wall friction, and wall heat transfer. Several test cases were run to benchmark the IBM and SX-2 installations against the CDC computer and the CRAY-2 and CRAY/XMP. These included the Edwards pipe blowdown and two separate reflood cases developed to simulate the FLECHT-SEASET reflood test 31504 and a post-critical heat flux (CHF) test performed at Lehigh University.

  18. Nuclear Plant Analyzer development at the Idaho National Engineering Laboratory

    International Nuclear Information System (INIS)

    Laats, E.T.; Beelman, R.J.; Charlton, T.R.; Hampton, N.L.; Burtt, J.D.

    1985-01-01

    The Nuclear Plant Analyzer (NPA) is a state-of-the-art safety analysis and engineering tool being used to address key nuclear power plant safety issues. The NPA has been developed to integrate the NRC's computerized reactor behavior simulation codes such as RELAP5, TRAC-BWR, and TRAC-PWR, with well-developed computer graphics programs and large repositories of reactor design and experimental data. An important feature of the NPA is the capability to allow an analyst to redirect a RELAP5 or TRAC calculation as it progresses through its simulated scenario. The analyst can have the same power plant control capabilities as the operator of an actual plant. The NPA resides on the dual CDC Cyber-176 mainframe computers at the INEL and is being converted to operate on a Cray-1S computer at the LANL. The subject of this paper is the program conducted at the INEL.

  19. Scientific visualization and radiology

    International Nuclear Information System (INIS)

    Lawrance, D.P.; Hoyer, C.E.; Wrestler, F.A.; Kuhn, M.J.; Moore, W.D.; Anderson, D.R.

    1989-01-01

    Scientific visualization is the visual presentation of numerical data. The National Center for Supercomputing Applications (NCSA) has developed methods for visualizing computer-based simulations of digital imaging data. The applicability of these various tools for unique and potentially medically beneficial display of MR images is investigated. Raw data are obtained from MR images of the brain, neck, spine, and brachial plexus obtained on a 1.5-T imager with multiple pulse sequences. A supercomputer and other mainframe resources run a variety of graphics and imaging programs using these data. An interdisciplinary team of imaging scientists, computer graphics programmers, and physicians works together to produce useful information.

  20. The role of nuclear reaction theory and data in nuclear energy and safety applications

    International Nuclear Information System (INIS)

    Schmidt, J.J.

    1993-01-01

    The nuclear data requirements for nuclear fission reactor design and safety computations are so large that they cannot be satisfied by experimental measurements alone. Nuclear reaction theories and models have recently been developed and refined to the extent that, with suitable parametrisation and fitting to accurately known experimental data, they can be used for filling gaps in the available experimental nuclear data base as well as for bulk computations of nuclear reaction cross sections, e.g. activation cross sections. The concurrent rapid development of ever more powerful mainframe and personal computers has stimulated the development of comprehensive nuclear model computer codes. A representative selection of such codes will be presented in the lectures and computer exercises of this Workshop. In order to fulfill the nuclear data requirements of the nineties and, at the same time, to develop improved tools for nuclear physics teaching at universities in developing countries, a major future task of the IAEA nuclear data programme will be to develop computer files of 'best' sets of nuclear parameters for standardised input to nuclear model computations of nuclear data. Nuclear scientists from developing countries can make substantial contributions to this project. (author). 25 refs

  1. ASTEC and MODEL: Controls software development at Goddard Space Flight Center

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Surber, Jeffrey L.

    1993-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at the Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. ASTEC, which has been under development for the last three years, is meant to be an integrated collection of controls analysis tools for use at the desktop level. MODEL (Multi-Optimal Differential Equation Language) is a translator that converts programs written in the MODEL language to FORTRAN. An upgraded version of the MODEL program will be merged into ASTEC. MODEL has not been modified since 1981 and has not kept pace with changes in computers or user interface techniques. This paper describes the changes made to MODEL in order to make it useful in the 90's and how it relates to ASTEC.

  2. Nuclear Plant Analyzer development at the Idaho National Engineering Laboratory

    International Nuclear Information System (INIS)

    Laats, E.T.

    1986-10-01

    The Nuclear Plant Analyzer (NPA) is a state-of-the-art safety analysis and engineering tool being used to address key nuclear power plant safety issues. Under the sponsorship of the US Nuclear Regulatory Commission (NRC), the NPA has been developed to integrate the NRC's computerized reactor behavior simulation codes such as RELAP5, TRAC-BWR and TRAC-PWR, with well-developed computer color graphics programs and large repositories of reactor design and experimental data. An important feature of the NPA is the capability to allow an analyst to redirect a RELAP5 or TRAC calculation as it progresses through its simulated scenario. The analyst can have the same power plant control capabilities as the operator of an actual plant. The NPA resides on the dual Control Data Corporation Cyber 176 mainframe computers at the Idaho National Engineering Laboratory and Cray-1S computers at the Los Alamos National Laboratory (LANL) and Kirtland Air Force Weapons Laboratory (KAFWL)

  3. EPRI engineering workstation software - Discussion and demonstration

    International Nuclear Information System (INIS)

    Stewart, R.P.; Peterson, C.E.; Agee, L.J.

    1992-01-01

    Computing technology is undergoing significant changes with respect to engineering applications in the electric utility industry. These changes result mainly from the introduction of several UNIX workstations that provide mainframe calculational capability at much lower costs. The workstations are being coupled with microcomputers through local area networks to provide engineering groups with a powerful and versatile analysis capability. PEGASYS, the Professional Engineering Graphic Analysis System, is a software package for use with engineering analysis codes executing in a workstation environment. PEGASYS has a menu-driven, user-friendly interface that provides pre-execution support for preparing input, graphical packages for post-execution analysis, and an on-line monitoring capability for engineering codes. The initial application of this software is for use with RETRAN-02 operating on an IBM RS/6000 workstation using X-Windows/UNIX and a personal computer under DOS.

  4. Structured Assessment Approach: a microcomputer-based insider-vulnerability analysis tool

    International Nuclear Information System (INIS)

    Patenaude, C.J.; Sicherman, A.; Sacks, I.J.

    1986-01-01

    The Structured Assessment Approach (SAA) was developed to help assess the vulnerability of safeguards systems to insiders in a staged manner. For physical security systems, the SAA identifies possible diversion paths which are not safeguarded under various facility operating conditions and insiders who could defeat the system via direct access, collusion or indirect tampering. For material control and accounting systems, the SAA identifies those who could block the detection of a material loss or diversion via data falsification or equipment tampering. The SAA, originally designed to run on a mainframe computer, has been converted to run on a personal computer. Many features have been added to simplify and facilitate its use for conducting vulnerability analysis. For example, the SAA input, which is a text-like data file, is easily readable and can provide documentation of facility safeguards and assumptions used for the analysis.

  5. TMAP4 User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Longhurst, G.R.; Holland, D.F.; Jones, J.L.; Merrill, B.J.

    1992-06-12

    The Tritium Migration Analysis Program, Version 4 (TMAP4) has been developed by the Fusion Safety Program at the Idaho National Engineering Laboratory (INEL) as a safety analysis code, mainly to analyze tritium retention and loss in fusion reactor structures and systems during normal operation and accident conditions. TMAP4 incorporates one-dimensional thermal- and mass-diffusive transport and trapping calculations through structures and zero dimensional fluid transport between enclosures and across the interface between enclosures and structures. A key feature is the ability to input problem definition parameters as constants, interpolation tables, or FORTRAN equations. The code is specifically intended for use under a DOS operating system on PC-type mini-computers, but it has also been run successfully on workstations and mainframe computer systems. Use of the equation-input feature requires access to a FORTRAN-77 compiler and a linker program.
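
    The core of such a one-dimensional diffusive transport calculation can be sketched with an explicit finite-difference scheme for dc/dt = D d2c/dx2 in a slab, one face held at a fixed concentration and the other at zero. The geometry and diffusivity below are illustrative, not tritium or TMAP4 data, and trapping and the enclosure model are omitted.

        # Explicit 1-D diffusion through a slab with a fixed upstream surface
        # concentration and a zero-concentration downstream face.

        n, length, D = 50, 1.0e-3, 1.0e-9       # nodes, slab thickness (m), m^2/s
        dx = length / (n - 1)
        dt = 0.4 * dx * dx / D                   # satisfies the explicit stability limit
        c = [0.0] * n
        c[0] = 1.0                               # fixed upstream surface concentration

        for _ in range(20_000):                  # march in time toward steady state
            new = c[:]
            for i in range(1, n - 1):
                new[i] = c[i] + D * dt / dx**2 * (c[i + 1] - 2.0 * c[i] + c[i - 1])
            new[-1] = 0.0                        # downstream face held at zero
            c = new

        flux_out = D * (c[-2] - c[-1]) / dx      # steady permeation flux estimate
        print(f"downstream flux ~ {flux_out:.3e} (arbitrary concentration units)")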

  6. TMAP4 User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Longhurst, G.R.; Holland, D.F.; Jones, J.L.; Merrill, B.J.

    1992-06-12

    The Tritium Migration Analysis Program, Version 4 (TMAP4) has been developed by the Fusion Safety Program at the Idaho National Engineering Laboratory (INEL) as a safety analysis code, mainly to analyze tritium retention and loss in fusion reactor structures and systems during normal operation and accident conditions. TMAP4 incorporates one-dimensional thermal- and mass-diffusive transport and trapping calculations through structures and zero dimensional fluid transport between enclosures and across the interface between enclosures and structures. A key feature is the ability to input problem definition parameters as constants, interpolation tables, or FORTRAN equations. The code is specifically intended for use under a DOS operating system on PC-type mini-computers, but it has also been run successfully on workstations and mainframe computer systems. Use of the equation-input feature requires access to a FORTRAN-77 compiler and a linker program.

  7. Computer operating systems: HEPiX news

    International Nuclear Information System (INIS)

    Silverman, Alan

    1995-01-01

    In October the North American and European Chapters of HEPiX (the HEP UNIX group established to share worldwide high energy physics experience in using the UNIX operating system - March 1994, page 18) held meetings at Fermilab and Saclay. The two-day Fermilab meeting attracted over 30 attendees from some 12 sites in the US, as well as representation from CERN. The three-day European meeting two weeks later was attended by some 70 people from 30 sites in Europe, the US and Japan. Both meetings featured some common themes such as the growth in the use of AFS (the Andrew File System) for distributed access to central filebases, and the continuing trend away from mainframes towards farms of UNIX workstations and/or servers. Other topics of interest included an update of the POSIX standards efforts, an online presentation of an experimental graphics interface, first impressions of a new utility for batch job control in UNIX, the latest news on the spread of the HEPiX login scripts and a review of trends in magnetic tape technology. Detailed minutes are in preparation and will be published in the HEPNET.HEPiX news group in due course. In the meantime, the transparencies presented in many of the sessions at both conferences can be consulted via the World-Wide Web at URL http://wwwcn.cern.ch/hepix/ meetings.html

  8. Computer operating systems: HEPiX news

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, Alan

    1995-01-15

    In October the North American and European Chapters of HEPiX (the HEP UNIX group established to share worldwide high energy physics experience in using the UNIX operating system - March 1994, page 18) held meetings at Fermilab and Saclay. The two-day Fermilab meeting attracted over 30 attendees from some 12 sites in the US, as well as representation from CERN. The three-day European meeting two weeks later was attended by some 70 people from 30 sites in Europe, the US and Japan. Both meetings featured some common themes such as the growth in the use of AFS (the Andrew File System) for distributed access to central filebases, and the continuing trend away from mainframes towards farms of UNIX workstations and/or servers. Other topics of interest included an update of the POSIX standards efforts, an online presentation of an experimental graphics interface, first impressions of a new utility for batch job control in UNIX, the latest news on the spread of the HEPiX login scripts and a review of trends in magnetic tape technology. Detailed minutes are in preparation and will be published in the HEPNET.HEPiX news group in due course. In the meantime, the transparencies presented in many of the sessions at both conferences can be consulted via the World-Wide Web at URL http://wwwcn.cern.ch/hepix/ meetings.html.

  9. Framework of the NPP I and C Security for Regulatory Guidance

    International Nuclear Information System (INIS)

    Kim, Young Mi; Jeong, Choong Heui

    2013-01-01

    I and C (instrumentation and control) systems which contain computers are a critical part of safety and security at nuclear facilities. As the use of computers in I and C continues to grow, so does the target for cyber-attack. These computers include desktop computers, mainframe systems, servers, network devices, embedded systems, programmable logic controllers (PLCs) and other digital computer systems. As the Stuxnet malware shows, the I and C systems of NPPs are no longer safe from the threat of cyber-attacks. These digital I and C systems must be protected from cyber-attacks. This paper presents a framework for NPP I and C security for regulatory guidance. KINS regulatory guideline 8.22 has been applied to new and operating nuclear power plants. This guideline addresses the applicable scope of the cyber security activities, cyber security policies and security plans, and assessments of cyber security and execution of the cyber security activities. The newly developed guideline will be helpful for implementing security controls to ensure safe operation of NPP I and C systems.

  10. SunFast: A sun workstation based, fuel analysis scoping tool for pressurized water reactors

    International Nuclear Information System (INIS)

    Bohnhoff, W.J.

    1991-05-01

    The objective of this research was to develop a fuel cycle scoping program for light water reactors and implement the program on a workstation class computer. Nuclear fuel management problems are quite formidable due to the many fuel arrangement options available. Therefore, an engineer must perform multigroup diffusion calculations for a variety of different strategies in order to determine an optimum core reload. Standard fine mesh finite difference codes result in a considerable computational cost. A better approach is to build upon the proven reliability of currently available mainframe computer programs, and improve the engineering efficiency by taking advantage of the most useful characteristic of workstations: enhanced man/machine interaction. This dissertation contains a description of the methods and a user's guide for the interactive fuel cycle scoping program, SunFast. SunFast provides computational speed and accuracy of solution along with a synergetic coupling between the user and the machine. It should prove to be a valuable tool when extensive sets of similar calculations must be done at a low cost as is the case for assessing fuel management strategies. 40 refs
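
    The numerical heart of such a diffusion-theory scoping tool can be sketched as a one-group, one-dimensional finite-difference eigenvalue problem solved by power iteration; the slab geometry and cross sections below are illustrative, not PWR reload data and not SunFast's method in detail.

        import numpy as np

        # One-group, 1-D slab diffusion with zero-flux boundaries, solved for
        # k-effective by power iteration on the fission source.

        n, width = 100, 200.0                  # mesh cells, slab width (cm)
        D, sig_a, nu_sig_f = 1.2, 0.030, 0.032 # cm, 1/cm, 1/cm (illustrative)
        h = width / n

        # Finite-difference loss operator: -D d^2/dx^2 + sig_a.
        main = np.full(n, 2.0 * D / h**2 + sig_a)
        off = np.full(n - 1, -D / h**2)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

        phi = np.ones(n)
        k = 1.0
        for _ in range(200):                   # power iteration
            source = nu_sig_f * phi
            phi_new = np.linalg.solve(A, source / k)
            k = k * phi_new.sum() / phi.sum()  # update the eigenvalue estimate
            phi = phi_new

        print(f"k-effective ~ {k:.4f}")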

  11. Framework of the NPP I and C Security for Regulatory Guidance

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Mi; Jeong, Choong Heui [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-10-15

    I and C (instrumentation and control) systems which contain computers are a critical part of safety and security at nuclear facilities. As the use of computers in I and C continues to grow, so does the target for cyber-attack. These computers include desktop computers, mainframe systems, servers, network devices, embedded systems, programmable logic controllers (PLCs) and other digital computer systems. As the Stuxnet malware shows, the I and C systems of NPPs are no longer safe from the threat of cyber-attacks. These digital I and C systems must be protected from cyber-attacks. This paper presents a framework for NPP I and C security for regulatory guidance. KINS regulatory guideline 8.22 has been applied to new and operating nuclear power plants. This guideline addresses the applicable scope of the cyber security activities, cyber security policies and security plans, and assessments of cyber security and execution of the cyber security activities. The newly developed guideline will be helpful for implementing security controls to ensure safe operation of NPP I and C systems.

  12. Improvement of nuclear core power distribution analysis for Ulchin unit 1 and 2

    International Nuclear Information System (INIS)

    Chang, Jong Hwa; Zee, Sung Kyoon; Lee, Sang Ho; Park, Yong Soo; Lee, Chang Ho; Choi, Young Kil; Kim, Hee Kyung; Cho, Young Sik; Cho, In Hang

    1994-12-01

    The FMCP package, which performs the power shape analysis of Framatome-type reactors on an IBM mainframe, has been migrated to an IBM-PC system. This report describes the related techniques and work. An IBM-PC software package, DAP, was developed to replace the plant computer's attached 8-inch floppy disk drive with an IBM-PC. Other programs of FMCP, such as CEDRIC, CARIN and ESTHER, were also migrated to the IBM-PC using the Lahey Fortran 77 compiler. A few auxiliary programs were also developed for easy handling of FMCP in the IBM-PC environment. This report describes the usage of the developed system as well as the migration-related techniques. (Author) 4 figs

  13. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    Science.gov (United States)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.

  14. Integrated Tiger Series of electron/photon Monte Carlo transport codes: a user's guide for use on IBM mainframes

    International Nuclear Information System (INIS)

    Kirk, B.L.

    1985-12-01

    The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler

  15. Short Comm.

    African Journals Online (AJOL)

    OGECHI

    mainframe-based applications, incompatible proprietary hardware platforms, disparate software, ... responding to changing organizational structures; ... (2) The various technologies and equipment that manipulate these resources, and.

  16. Scientific Graphical Displays on the Macintosh

    Energy Technology Data Exchange (ETDEWEB)

    Grotch, S. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    In many organizations scientists have ready access to more than one computer, often both a workstation (e.g., SUN, HP, SGI) and a Macintosh or other PC. The scientist commonly uses the workstation for 'number-crunching' and data analysis, whereas the Macintosh is relegated to either word processing or serves as a 'dumb terminal' to a larger mainframe computer. In an informal poll of my colleagues, very few of them used their Macintoshes for either statistical analysis or graphical data display. I believe that this state of affairs is particularly unfortunate because over the last few years both the computational capability and, even more so, the software availability of the Macintosh have become quite formidable. In some instances, very powerful tools are now available on the Macintosh that may not exist (or be far too costly) on the so-called 'high end' workstations. Many scientists are simply unaware of the wealth of extremely useful, 'off-the-shelf' software that already exists on the Macintosh for scientific graphical and statistical analysis.

  17. Economic justification for LAN installation in an oilfield equipment manufacturing operation

    International Nuclear Information System (INIS)

    Frishmuth, R.E.; Gariepy, J.A.

    1992-01-01

    The oil field equipment manufacturing business, like any other business working in a worldwide environment, is becoming interested in reducing the amount of time required to take a product from initial concept to the marketplace. The most recent concept being employed to achieve this goal is concurrent engineering. This paper discusses the use of local area networks connecting personal computers to facilitate both traditional engineering and manufacturing organizations and concurrent engineering concepts. The key to making either type of organizational structure work well is communication. As companies have moved away from large mainframe computers toward individual, stand-alone personal computers, communications as well as various databases have been difficult to maintain. The authors attempt to show how a LAN would help to solve problems with data integrity, communication and speed of product development. These ideas are combined with discussion of anticipated cost savings as well as LAN installation cost. The authors show that the initial cost of LAN installation can easily be justified by the costs saved in product development.

  18. A MICROCOMPUTER LINEAR PROGRAMMING PACKAGE: AN ALTERNATIVE TO MAINFRAMES

    OpenAIRE

    Laughlin, David H.

    1984-01-01

    This paper presents the capabilities and limitations of a microcomputer linear programming package. The solution algorithm is a version of the revised simplex. Rapid problem entry, user ease of operation, and sensitivity analyses on the objective function and right-hand sides are advantages. A problem size of 150 activities and 64 constraints can be solved in the present form. Due to problem size limitations and the lack of parametric and integer programming routines, this package is thought to have the mos...
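
    For scale, an LP of the size this package targeted is handled trivially by off-the-shelf solvers today; below is a sketch using SciPy's linprog on a small, illustrative product-mix problem (linprog minimizes, so the objective is negated).

        from scipy.optimize import linprog

        # Maximize 3x + 5y subject to simple resource limits.
        c = [-3.0, -5.0]                      # negated objective for minimization
        A_ub = [[1.0, 0.0],                   # x       <= 4
                [0.0, 2.0],                   # 2y      <= 12
                [3.0, 2.0]]                   # 3x + 2y <= 18
        b_ub = [4.0, 12.0, 18.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)                # optimum at x = 2, y = 6, objective 36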

  19. Observations on Power-Efficiency Trends in Mobile Communication Devices

    Directory of Open Access Journals (Sweden)

    Kari Jyrkkä

    2007-03-01

    Full Text Available Computing solutions used in mobile communications equipment are similar to those in personal and mainframe computers. The key differences between the implementations at chip level are the low leakage silicon technology and lower clock frequency used in mobile devices. The hardware and software architectures, including the operating system principles, are strikingly similar, although the mobile computing systems tend to rely more on hardware accelerators. As the performance expectations of mobile devices are increasing towards the personal computer level and beyond, power efficiency is becoming a major bottleneck. So far, the improvements of the silicon processes in mobile phones have been exploited by software designers to increase functionality and to cut development time, while usage times, and energy efficiency, have been kept at levels that satisfy the customers. Here we explain some of the observed developments and consider means of improving energy efficiency. We show that both processor and software architectures have a big impact on power consumption. Properly targeted research is needed to find the means to explicitly optimize system designs for energy efficiency, rather than maximize the nominal throughputs of the processor cores used.

  20. Observations on Power-Efficiency Trends in Mobile Communication Devices

    Directory of Open Access Journals (Sweden)

    Jyrkkä Kari

    2007-01-01

    Full Text Available Computing solutions used in mobile communications equipment are similar to those in personal and mainframe computers. The key differences between the implementations at chip level are the low leakage silicon technology and lower clock frequency used in mobile devices. The hardware and software architectures, including the operating system principles, are strikingly similar, although the mobile computing systems tend to rely more on hardware accelerators. As the performance expectations of mobile devices are increasing towards the personal computer level and beyond, power efficiency is becoming a major bottleneck. So far, the improvements of the silicon processes in mobile phones have been exploited by software designers to increase functionality and to cut development time, while usage times, and energy efficiency, have been kept at levels that satisfy the customers. Here we explain some of the observed developments and consider means of improving energy efficiency. We show that both processor and software architectures have a big impact on power consumption. Properly targeted research is needed to find the means to explicitly optimize system designs for energy efficiency, rather than maximize the nominal throughputs of the processor cores used.

  1. IBM-PC-based reactor neutronics analysis package

    International Nuclear Information System (INIS)

    Nigg, D.W.; Wessol, D.E.; Grimesey, R.A.; Parsons, D.K.; Wheeler, F.J.; Yoon, W.Y.; Lake, J.A.

    1985-01-01

    Technical advances over the past few years have led to a situation where a wide range of complex scientific computations can now be done on properly configured microcomputers such as the IBM-PC (personal computer). For a number of reasons, including security, economy, and user convenience, the development of a comprehensive system of reactor neutronics codes suitable for operation on the IBM-PC has been undertaken at the Idaho National Engineering Laboratory (INEL). It is anticipated that a PC-based code system could also have wide applicability in the nuclear engineering education community since conversion of software generated by national laboratories and others to college and university mainframe hardware has historically been a time-consuming process that has sometimes met with only limited success. This paper discusses the philosophy behind the INEL reactor neutronics PC code system and describes those parts of the system that are currently complete, those that are now under development, and those that are still in the planning stage

  2. A modern approach to HEP visualization - ATLASrift

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration

    2017-01-01

    At the times when HEP computing needs were mainly fulfilled by mainframes, graphics solutions for event and detector visualizations were necessarily hardware as well as experiment specific and impossible to use anywhere outside of HEP community. A big move to commodity computing did not precipitate a corresponding move of graphics solutions to industry standard hardware and software. In this paper, we list functionalities expected from contemporary tools and describe their implementation by a specific application: ATLASrift. We start with a basic premise that HEP visualization tools should be open in practice and not only in intentions. This means that a user should not be limited to specific and little used platforms, HEP-only software packages, or experiment-specific libraries. Equally important is that no special knowledge or special access rights are needed. Using industry standard frameworks brings not only sustainability, but also good support, a lot of community contributed tools, and a possibility of ...

  3. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers
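
    The vectorization theme emphasized above can be reduced to a minimal sketch: the same SAXPY-style update written first as a scalar loop and then as a whole-array operation that a vectorizing compiler or array library can map onto pipelined or parallel hardware. This illustrates the concept only and is not code from the paper.

      # Illustration of vectorization only, not code from the paper: a SAXPY-style
      # update as a scalar loop versus a whole-array operation.
      import numpy as np

      n = 100_000
      a = 2.5
      x = np.random.rand(n)
      y = np.random.rand(n)

      # Scalar form: one element per iteration.
      z_scalar = np.empty(n)
      for i in range(n):
          z_scalar[i] = a * x[i] + y[i]

      # Vector form: a single whole-array operation, amenable to vector or
      # parallel execution on suitable hardware.
      z_vector = a * x + y

      assert np.allclose(z_scalar, z_vector)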

  4. Standard Verification System Lite (SVS Lite)

    Data.gov (United States)

    Social Security Administration — SVS Lite is a mainframe program used exclusively by the Office of Child Support Enforcement (OCSE) to perform batch SSN verifications. This process is exactly the...

  5. Using the Bootstrap Concept to Build an Adaptable and Compact Subversion Artifice

    National Research Council Canada - National Science Library

    Lack, Lindsey

    2003-01-01

    .... Early tiger teams recognized the possibility of this design and compared it to the two-card bootstrap loader used in mainframes since both exhibit the characteristics of compactness and adaptability...

  6. Techniques for automating the process of as-built reconciliation

    International Nuclear Information System (INIS)

    Skruch, B.R.; Brandt, G.B.; Denes, L.J.

    1984-01-01

    Techniques are being developed for acquisition, recording, and evaluation of as-built measurements of piping systems in nuclear power plants. The goal is to improve the efficiency with which as-built dimensions and configuration can be compared to as-designed dimensions and configuration. The approach utilizes an electronic digital ''ruler'' capable of measuring distances to 100 feet with a resolution of 1/100 of a foot. This ruler interfaces to a hand-held computer. This ''electronic notebook'' also accepts alpha-numeric input from a keyboard and replaces a clipboard and pencil currently used. The electronic notebook, in turn, can transfer its data directly to a host mini or mainframe computer. Once the data is resident on the larger computer it is converted to a format compatible with an existing database system used for piping analysis and design. Using accepted tolerances for as-built deviations, the as-built data is then automatically compared to as-designed data. If reanalysis is required, the as-built data is in a compatible format to utilize existing computer analysis codes. This paper discusses the operation and interfacing of the electronic ruler, the general design of the data structures in the electronic notebook, the design of minicomputer software, and the results of preliminary testing of the system
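
    The final comparison step described above amounts to checking each as-built dimension against its as-designed value with an accepted tolerance. The sketch below illustrates that step; the run identifiers, lengths, and the 0.01 ft tolerance are assumptions for illustration, not values from the paper.

      # Hypothetical sketch of the tolerance check; run names, lengths and the
      # 0.01 ft tolerance are illustrative assumptions.
      TOLERANCE_FT = 0.01

      as_designed = {"run-101": 12.50, "run-102": 8.75, "run-103": 21.30}
      as_built    = {"run-101": 12.51, "run-102": 8.60, "run-103": 21.30}

      for run, design_len in as_designed.items():
          built_len = as_built[run]
          deviation = abs(built_len - design_len)
          status = "OK" if deviation <= TOLERANCE_FT else "REANALYSIS REQUIRED"
          print(f"{run}: designed {design_len:.2f} ft, built {built_len:.2f} ft -> {status}")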

  7. Concepts and realization of the KWU Nuclear Plant Analyzer

    International Nuclear Information System (INIS)

    Moritz, H.; Hummel, R.

    1987-01-01

    The Nuclear Plant Analyzer (NPA) is a real time simulator developed from KWU computer programs for transient and safety analysis ('engineering simulator'). The NPA has no control room; the hardware consists only of commercially available data processing devices. The KWU NPA makes available all simulator operating features such as initial conditions, free operator action and multiple malfunctions, as well as freeze, snapshot, backtrack and playback, which have proved to be useful training support in training simulators of all technical disciplines. The simulation program itself runs on a large mainframe computer, a Control Data CYBER 176 or CYBER 990, in the KWU computing center under the interactive component INTERCOM of the operating system NOS/BE. It transmits the time-dependent engineering data roughly once a second to a SIEMENS 300-R30E process computer using telecommunication by telephone. The computers are coupled by an emulation of the communication protocol Mode 4A, running on the R30 computer. To this emulation a program-to-program interface via a circular buffer on the R30 was added. In the process computer the data are processed and displayed graphically on 4 colour screens (560x512 pixels, 8 colours) by means of the process monitoring system DISIT. All activities at the simulator, including operator actions, are performed locally by the operator at the screens by means of function keys or dialog. (orig.)
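
    The program-to-program interface mentioned above relies on a circular buffer between the communication emulation and the display software. The following sketch shows the data structure in miniature; it illustrates the idea only and is not the original R30 implementation.

      # Minimal ring buffer: the producer (mainframe link) deposits one data
      # record per cycle, the consumer (display task) drains what has arrived.
      class RingBuffer:
          def __init__(self, capacity):
              self.buf = [None] * capacity
              self.capacity = capacity
              self.head = 0       # next slot to write
              self.tail = 0       # next slot to read
              self.count = 0

          def put(self, item):
              if self.count == self.capacity:
                  raise OverflowError("buffer full")
              self.buf[self.head] = item
              self.head = (self.head + 1) % self.capacity
              self.count += 1

          def get(self):
              if self.count == 0:
                  raise IndexError("buffer empty")
              item = self.buf[self.tail]
              self.tail = (self.tail + 1) % self.capacity
              self.count -= 1
              return item

      rb = RingBuffer(16)
      rb.put({"t": 0.0, "core_power_MW": 3765.0})   # hypothetical data record
      print(rb.get())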

  8. Numident Online Verification Utility (NOVU)

    Data.gov (United States)

    Social Security Administration — NOVU is a mainframe application that accesses the NUMIDENT to perform real-time SSN verifications. This program is called by other SSA online programs that serve as...

  9. Standard Verification System (SVS)

    Data.gov (United States)

    Social Security Administration — SVS is a mainframe program that accesses the NUMIDENT to perform SSN verifications. This program is called by SSA Internal applications to verify SSNs. There is also...

  10. State Lands by Administrator - County

    Data.gov (United States)

    Minnesota Department of Natural Resources — DNR land ownership and administrative interest mapped to the PLS forty level. This layer merges the DNR Control Point Generated PLS layer with IBM mainframe-based...

  11. State Lands by Administrator - Forestry

    Data.gov (United States)

    Minnesota Department of Natural Resources — DNR land ownership and administrative interest mapped to the PLS forty level. This layer merges the DNR Control Point Generated PLS layer with IBM mainframe-based...

  12. State Lands by Administrator - Ecological Services

    Data.gov (United States)

    Minnesota Department of Natural Resources — DNR land ownership and administrative interest mapped to the PLS forty level. This layer merges the DNR Control Point Generated PLS layer with IBM mainframe-based...

  13. State Lands by Administrator - Parks and Recreation

    Data.gov (United States)

    Minnesota Department of Natural Resources — DNR land ownership and administrative interest mapped to the PLS forty level. This layer merges the DNR Control Point Generated PLS layer with IBM mainframe-based...

  14. State Lands by Administrator - Wildlife

    Data.gov (United States)

    Minnesota Department of Natural Resources — DNR land ownership and administrative interest mapped to the PLS forty level. This layer merges the DNR Control Point Generated PLS layer with IBM mainframe-based...

  15. State Lands by Administrator - Trails and Waterways

    Data.gov (United States)

    Minnesota Department of Natural Resources — DNR land ownership and administrative interest mapped to the PLS forty level. This layer merges the DNR Control Point Generated PLS layer with IBM mainframe-based...

  16. State Lands by Administrator - Fisheries

    Data.gov (United States)

    Minnesota Department of Natural Resources — DNR land ownership and administrative interest mapped to the PLS forty level. This layer merges the DNR Control Point Generated PLS layer with IBM mainframe-based...

  17. State Lands by Administrator - Other DNR Units

    Data.gov (United States)

    Minnesota Department of Natural Resources — DNR land ownership and administrative interest mapped to the PLS forty level. This layer merges the DNR Control Point Generated PLS layer with IBM mainframe-based...

  18. A survey on the VXIbus and validity analyses for instrumentation and control in NPPs

    International Nuclear Information System (INIS)

    Kwon, Kee Choon; Park, Won Man

    1997-06-01

    This document presents the technical status of the VXIbus system and its interface. VMEbus, while developed as a backplane for Motorola processors, can be used for data acquisition, control and other instrumentation applications. The VXIbus and its associated standard for form, fit, and electrical interface have simplified the process of putting together automated instrumentation systems. The VXIplug and play system alliance was founded in 1993, with a charter to improve the effectiveness of VXI-based solutions by increasing ease of use and improving the interoperability of mainframes, computers, instruments, and software through open, multivendor standards and practices. This technical report surveys the application of VXI-based instruments to instrumentation and control in NPPs, examining their expandability, interoperability, maintainability and other features. (author). 10 refs., 4 tabs., 25 figs

  19. Requirements for a radioactive waste data base

    International Nuclear Information System (INIS)

    Sato, Y.; Kobayashi, I.; Kikuchi, M.

    1990-01-01

    With the progress of the nuclear fuel cycle in Japan, various types of radioactive waste will be generated at each nuclear facility in the cycle. The generated volume and stored quantity of waste are therefore expected to increase. From the viewpoints of safety and public acceptance, it is necessary to manage historical waste data, estimate the generated waste volume and stored quantity, and track the research and development status of waste processing and disposal, using a mainframe computer. This paper proposes the design and development of a radioactive waste data base that can properly and correctly manage numerical and/or documentary information on generated radioactive waste. The data base is thus expected to be used for planning the future management of radioactive waste. (author)

  20. Impact of workstations on criticality analyses at ABB combustion engineering

    International Nuclear Information System (INIS)

    Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F.

    1993-01-01

    During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett Packard (HP)/Apollo workstations. The primary motivation for this change was the improved economics of the workstation and maintaining state-of-the-art technology. The Cyber 990 utilized the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131 100 words of directly addressable memory, with an extended 250 000 words available. The Apollo workstation environment at ABB consists of HP/Apollo-9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administrative CPUs, providing a virtual memory system

  1. Functional requirements document for NASA/MSFC Earth Science and Applications Division: Data and information system (ESAD-DIS). Interoperability, 1992

    Science.gov (United States)

    Stephens, J. Briscoe; Grider, Gary W.

    1992-01-01

    These Earth Science and Applications Division-Data and Information System (ESAD-DIS) interoperability requirements are designed to quantify the Earth Science and Applications Division's hardware and software requirements in terms of communications between personal and visualization workstations and mainframe computers. The electronic mail requirements and local area network (LAN) requirements are addressed. These interoperability requirements are top-level requirements framed around defining the existing ESAD-DIS interoperability and projecting known near-term requirements for both operational support and management planning. Detailed requirements will be submitted on a case-by-case basis. This document is also intended as an overview of ESAD-DIS interoperability for newcomers and management not familiar with these activities. It is intended as background documentation to support requests for resources and support requirements.

  2. Model documentation report: Macroeconomic Activity Module (MAM) of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 1997 (AEO 97). The report catalogues and describes the module assumptions, computations, methodology, parameter estimation techniques, and mainframe source code. This document serves three purposes. First it is a reference document providing a detailed description of the NEMS MAM used for the AEO 1997 production runs for model analysts, users, and the public. Second, this report meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. Third, it facilitates continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements as future projects.

  3. Analysis of indoor air quality data from East Tennessee field studies

    International Nuclear Information System (INIS)

    Dudney, C.S.; Hawthorne, A.R.

    1985-08-01

    This report presents the results of follow-up experimental activities and data analyses of an indoor air quality study conducted in 40 East Tennessee homes during 1982-1983. Included are: (1) additional experimental data on radon levels in all homes, repeat measurements in house No. 7 with elevated formaldehyde levels, and energy audit information on the participants' homes; (2) further data analyses, especially of the large formaldehyde data base, to ascertain relationships of pollutant levels vs environmental factors and house characteristics; (3) indoor air quality data base considerations and development of the study data base for distribution on magnetic media for both mainframe and desktop computer use; and (4) identification of design and data collection considerations for future field studies. A bibliography of additional publications related to this effort is also presented

  4. Prescriptive concepts for advanced nuclear materials control and accountability systems

    International Nuclear Information System (INIS)

    Whitty, W.J.; Strittmatter, R.B.; Ford, W.; Tisinger, R.M.; Meyer, T.H.

    1987-06-01

    Networking- and distributed-processing hardware and software have the potential of greatly enhancing nuclear materials control and accountability (MC and A) systems, from both safeguards and process operations perspectives, while allowing timely integrated safeguards activities and enhanced computer security at reasonable cost. A hierarchical distributed system is proposed consisting of groups of terminals and instruments in plant production and support areas connected to microprocessors that are connected to either larger microprocessors or minicomputers. These micros and/or minis are connected to a main machine, which might be either a mainframe or a super minicomputer. Data acquisition, preliminary input data validation, and transaction processing occur at the lowest level. Transaction buffering, resource sharing, and selected data processing occur at the intermediate level. The host computer maintains overall control of the data base and provides routine safeguards and security reporting and special safeguards analyses. The research described outlines the distribution of MC and A system requirements in the hierarchical system and distributed processing applied to MC and A. Implications of integrated safeguards and computer security concepts for the distributed system design are discussed. 10 refs., 4 figs

  5. Developing a Telecommunications Curriculum for Students with Physical Disabilities.

    Science.gov (United States)

    Gandell, Terry S.; Laufer, Dorothy

    1993-01-01

    A telecommunications curriculum was developed for students (ages 15-21) with physical disabilities. Curriculum content included an internal mailbox program (Mailbox), interactive communication system (Blisscom), bulletin board system (Arctel), and a mainframe system (Compuserv). (JDD)

  6. Quantum computers and quantum computations

    International Nuclear Information System (INIS)

    Valiev, Kamil' A

    2005-01-01

    This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)

  7. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  8. Quantifying the potential export flows of used electronic products in Macau: a case study of PCs.

    Science.gov (United States)

    Yu, Danfeng; Song, Qingbin; Wang, Zhishi; Li, Jinhui; Duan, Huabo; Wang, Jinben; Wang, Chao; Wang, Xu

    2017-12-01

    Used electronic products (UEPs) have attracted worldwide attention because part of the e-waste stream may be exported from developed countries to developing countries in the name of UEPs. On the basis of a large body of foreign trade data for electronic products (e-products), this study adopted the trade data approach (TDA) to quantify the potential exports of UEPs in Macau, taking personal computers (PCs) as a case study. The results show that desktop mainframes, LCD monitors, and CRT monitors had more low-unit-value trades with higher trade volumes over the past 10 years, while laptop and tablet PCs, as the newer technologies, had higher ratios of high-unit-value trades. During the period 2005-2015, the total mean exports of used laptop and tablet PCs, desktop mainframes, and LCD monitors were approximately 18,592, 79,957, and 43,177 units, respectively, while the possible export volume of used CRT monitors was higher, up to 430,098 units in 2000-2010. Note that these potential export volumes could be a lower bound, because not all used PCs may be shipped under the PC trade code. For all four kinds of used PCs, the majority (61.6-98.82%) of the export volumes went to Hong Kong, followed by Mainland China and Taiwan. Since 2011 there have been no CRT monitor exports; however, the other kinds of used PC exports will still exist in Macau in the future. The outcomes are helpful for understanding and managing the current export situation of used products in Macau, and can also provide a reference for other countries and regions.
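
    The trade data approach described above essentially screens trade records by declared unit value and sums the volumes of low-unit-value shipments as likely used-equipment exports. The sketch below illustrates that screening step; the threshold and the records are invented for illustration and do not come from the study.

      # Unit-value screening sketch; the 50 USD threshold and the records are
      # invented and do not come from the study.
      UNIT_VALUE_THRESHOLD = 50.0

      export_records = [
          {"item": "desktop mainframe", "units": 1200, "declared_value": 30000.0},
          {"item": "desktop mainframe", "units": 300,  "declared_value": 90000.0},
          {"item": "LCD monitor",       "units": 800,  "declared_value": 20000.0},
      ]

      likely_used_units = sum(
          r["units"] for r in export_records
          if r["declared_value"] / r["units"] < UNIT_VALUE_THRESHOLD
      )
      print("potential used-PC export volume:", likely_used_units, "units")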

  9. Avoid Disaster: Use Firewalls for Inter-Intranet Security.

    Science.gov (United States)

    Charnetski, J. R.

    1998-01-01

    Discusses the use of firewalls for library intranets, highlighting the move from mainframes to PCs, security issues and firewall architecture, and operating systems. Provides a glossary of basic networking terms and a bibliography of suggested reading. (PEN)

  10. Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute

    OpenAIRE

    Dang Hung; Dinh Tien Tuan Anh; Chang Ee-Chien; Ooi Beng Chin

    2017-01-01

    We consider privacy-preserving computation of big data using trusted computing primitives with limited private memory. Simply ensuring that the data remains encrypted outside the trusted computing environment is insufficient to preserve data privacy, for data movement observed during computation could leak information. While it is possible to thwart such leakage using generic solution such as ORAM [42], designing efficient privacy-preserving algorithms is challenging. Besides computation effi...

  11. Extended data acquisition support at GSI

    International Nuclear Information System (INIS)

    Marinescu, D.C.; Busch, F.; Hultzsch, H.; Lowsky, J.; Richter, M.

    1984-01-01

    The Experiment Data Acquisition and Analysis System (EDAS) of GSI, designed to support the data processing associated with nuclear physics experiments, provides three modes of operation: real-time, interactive replay and batch replay. The real-time mode is used for data acquisition and data analysis during an experiment performed at the heavy ion accelerator at GSI. An experiment may be performed either in Stand Alone Mode, using only the Experiment Computers, or in Extended Mode using all computing resources available. The Extended Mode combines the advantages of the real-time response of a dedicated minicomputer with the availability of computing resources in a large computing environment. This paper first gives an overview of EDAS and presents the GSI High Speed Data Acquisition Network. Data Acquisition Modes and the Extended Mode are then introduced. The structure of the system components, their implementation and the functions pertinent to the Extended Mode are presented. The control functions of the Experiment Computer sub-system are discussed in detail. Two aspects of the design of the sub-system running on the mainframe are stressed, namely the use of a multi-user installation for real-time processing and the use of a high level programming language, PL/I, as an implementation language for a system which uses parallel processing. The experience accumulated is summarized in a number of conclusions

  12. Ubiquitous Network Society

    Directory of Open Access Journals (Sweden)

    Cristian USCATU

    2006-01-01

    Full Text Available Technology is evolving faster than ever in the ITC domain. Computing devices become smaller and more powerful by the day (and cheaper than ever). They have started to move away from the classical “computer” towards portable devices like personal digital assistants (PDAs) and mobile phones. Even these devices are no longer what they used to be. A phone is no longer a simple voice communication device, but a minicomputer with lots of functions. The addition of wireless communication protocols, like WiFi and Bluetooth, leads to a web of interconnected devices with the final purpose of enabling us to access desired services anywhere, at any time. Adding less complicated devices, such as sensors and detectors, located everywhere (clothes, cars, furniture, home appliances etc.) but connected to the same global network, we have a technological world aware of itself and aware of us, ready to serve our needs without hindering our lives. “Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives.” [Weiser, 1995]

  13. NASA Lewis steady-state heat pipe code users manual

    International Nuclear Information System (INIS)

    Tower, L.K.

    1992-06-01

    The NASA Lewis heat pipe code has been developed to predict the performance of heat pipes in the steady state. The code can be used as a design tool on a personal computer or, with a suitable calling routine, as a subroutine for a mainframe radiator code. A variety of wick structures, including a user input option, can be used. Heat pipes with multiple evaporators, condensers, and adiabatic sections in series and with wick structures that differ among sections can be modeled. Several working fluids can be chosen, including potassium, sodium, and lithium, for which the monomer-dimer equilibrium is considered. The code incorporates a vapor flow algorithm that treats compressibility and axially varying heat input. This code facilitates the determination of heat pipe operating temperatures and heat pipe limits that may be encountered at the specified heat input and environment temperature. Data are input to the computer through a user-interactive input subroutine. Output, such as liquid and vapor pressures and temperatures, is printed at equally spaced axial positions along the pipe as determined by the user

  14. PSAPACK 4.2. A code for probabilistic safety assessment level 1. User's manual

    International Nuclear Information System (INIS)

    1995-01-01

    Only limited use has been made until now of the large amount of information contained in probabilistic safety assessments (PSAs). This is mainly due to the complexity of the PSA reports and the difficulties in obtaining intermediate results and in performing updates and recalculations. Moreover, PSA software was developed for mainframe computers, and the files of information such as fault trees and accident sequences were intended for the use of the analysts carrying out PSA studies or other skilled PSA practitioners. The increasing power and availability of personal computers (PCs) and developments in recent years in both hardware and software have made it possible to develop PSA software for use in PCs. Furthermore, the operational characteristics of PCs make them attractive not only for performing PSAs but also for updating the results and in using them in day-to-day applications. The IAEA has therefore developed in co-operation with its Member States, a software package (PSAPACK) for PCs for use in performing a Level 1 PSA and for easy interrogation of the results. Figs

  15. Control system architecture: The standard and non-standard models

    International Nuclear Information System (INIS)

    Thuot, M.E.; Dalesio, L.R.

    1993-01-01

    Control system architecture development has followed the advances in computer technology through mainframes to minicomputers to micros and workstations. This technology advance and increasingly challenging accelerator data acquisition and automation requirements have driven control system architecture development. In summarizing the progress of control system architecture at the last International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS) B. Kuiper asserted that the system architecture issue was resolved and presented a ''standard model''. The ''standard model'' consists of a local area network (Ethernet or FDDI) providing communication between front end microcomputers, connected to the accelerator, and workstations, providing the operator interface and computational support. Although this model represents many present designs, there are exceptions including reflected memory and hierarchical architectures driven by requirements for widely dispersed, large channel count or tightly coupled systems. This paper describes the performance characteristics and features of the ''standard model'' to determine if the requirements of ''non-standard'' architectures can be met. Several possible extensions to the ''standard model'' are suggested, including software as well as hardware architectural features
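
    A toy sketch of the ''standard model'' data path described above: a front-end processor samples channels and publishes them over the LAN to a workstation providing the operator interface. The sockets, channel names, and JSON message format are illustrative assumptions, not any real control-system protocol.

      # Toy front-end/workstation data path over UDP; names, channels and the
      # message format are invented for illustration.
      import json
      import socket
      import threading
      import time

      ADDR = ("127.0.0.1", 9750)

      def front_end():
          """Front-end micro: sample channels and broadcast them periodically."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          for tick in range(3):
              readings = {"magnet_current_A": 512.0 + tick, "vacuum_torr": 2e-9}
              sock.sendto(json.dumps(readings).encode(), ADDR)
              time.sleep(0.5)
          sock.close()

      def workstation():
          """Operator workstation: receive and display channel updates."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind(ADDR)
          sock.settimeout(5.0)
          for _ in range(3):
              data, _ = sock.recvfrom(4096)
              print("update:", json.loads(data))
          sock.close()

      t = threading.Thread(target=workstation)
      t.start()
      time.sleep(0.2)       # let the receiver bind before publishing
      front_end()
      t.join()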

  16. PSAPACK 4.2. A code for probabilistic safety assessment level 1. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-03-01

    Only limited use has been made until now of the large amount of information contained in probabilistic safety assessments (PSAs). This is mainly due to the complexity of the PSA reports and the difficulties in obtaining intermediate results and in performing updates and recalculations. Moreover, PSA software was developed for mainframe computers, and the files of information such as fault trees and accident sequences were intended for the use of the analysts carrying out PSA studies or other skilled PSA practitioners. The increasing power and availability of personal computers (PCs) and developments in recent years in both hardware and software have made it possible to develop PSA software for use in PCs. Furthermore, the operational characteristics of PCs make them attractive not only for performing PSAs but also for updating the results and in using them in day-to-day applications. The IAEA has therefore developed in co-operation with its Member States, a software package (PSAPACK) for PCs for use in performing a Level 1 PSA and for easy interrogation of the results. Figs.

  17. Characteristics of spent nuclear fuel

    International Nuclear Information System (INIS)

    Notz, K.J.

    1988-04-01

    The Office of Civilian Radioactive Waste Management (OCRWM) is responsible for the spent fuels and other wastes that will, or may, eventually be disposed of in a geological repository. The two major sources of these materials are commercial light-water reactor (LWR) spent fuel and immobilized high-level waste (HLW). Other wastes that may require long-term isolation include non-LWR spent fuels and miscellaneous sources such as activated metals. This report deals with spent fuels, but for completeness, the other sources are described briefly. Detailed characterizations are required for all of these potential repository wastes. These characteristics include physical, chemical, and radiological properties. The latter must take into account decay as a function of time. In addition, the present inventories and projected quantities of the various wastes are needed. This information has been assembled in a Characteristics Data Base which provides data in four formats: hard copy standard reports, menu-driven personal computer (PC) data bases, program-level PC data bases, and mainframe computer files. 5 refs., 3 figs., 4 tabs

  18. Microcomputer generated pipe support calculations

    International Nuclear Information System (INIS)

    Hankinson, R.F.; Czarnowski, P.; Roemer, R.E.

    1991-01-01

    The cost and complexity of pipe support design has been a continuing challenge to the construction and modification of commercial nuclear facilities. Typically, pipe support design or qualification projects have required large numbers of engineers centrally located with access to mainframe computer facilities. Much engineering time has been spent repetitively performing a sequence of tasks to address complex design criteria and consolidating the results of calculations into documentation packages in accordance with strict quality requirements. The continuing challenges of cost and quality, the need for support engineering services at operating plant sites, and the substantial recent advances in microcomputer systems suggested that a stand-alone microcomputer pipe support calculation generator was feasible and had become a necessity for providing cost-effective and high quality pipe support engineering services to the industry. This paper outlines the preparation for, and the development of, an integrated pipe support design/evaluation software system which maintains all computer programs in the same environment, minimizes manual performance of standard or repetitive tasks, and generates a high quality calculation which is consistent and easily followed

  19. Maxdose-SR and popdose-SR routine release atmospheric dose models used at SRS

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, G. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Trimor, P. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-07-28

    MAXDOSE-SR and POPDOSE-SR are used to calculate dose to the offsite Reference Person and to the surrounding Savannah River Site (SRS) population, respectively, following routine releases of atmospheric radioactivity. These models are currently accessed through the Dose Model Version 2014 graphical user interface (GUI). MAXDOSE-SR and POPDOSE-SR are personal computer (PC) versions of MAXIGASP and POPGASP, which both resided on the SRS IBM mainframe. These two codes follow U.S. Nuclear Regulatory Commission (USNRC) Regulatory Guides 1.109 and 1.111 (1977a, 1977b). MAXDOSE-SR and POPDOSE-SR are based on the USNRC-developed codes XOQDOQ (Sagendorf et al. 1982) and GASPAR (Eckerman et al. 1980). Both of these codes have previously been verified for use at SRS (Simpkins 1999 and 2000). The revisions incorporated into MAXDOSE-SR and POPDOSE-SR Version 2014 (hereafter referred to as MAXDOSE-SR and POPDOSE-SR unless otherwise noted) were made per Computer Program Modification Tracker (CPMT) number Q-CMT-A-00016 (Appendix D). Version 2014 was verified for use at SRS in Dixon (2014).

  20. Computable Frames in Computable Banach Spaces

    Directory of Open Access Journals (Sweden)

    S.K. Kaushik

    2016-06-01

    Full Text Available We develop some parts of the frame theory in Banach spaces from the point of view of Computable Analysis. We define computable M-basis and use it to construct a computable Banach space of scalar valued sequences. Computable Xd frames and computable Banach frames are also defined and computable versions of sufficient conditions for their existence are obtained.
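
    For orientation, the classical (non-computable) definitions on which the paper's computable versions build are recalled below; the computability conditions themselves are not reproduced here.

      % Classical Xd-frame and Banach frame definitions (standard literature),
      % recalled for orientation; the computable refinements are in the paper.
      Let $X$ be a Banach space and $X_d$ a BK-space of scalar sequences.
      A sequence $(g_i)_{i\ge 1} \subset X^{*}$ is an $X_d$-frame for $X$ if
      $(g_i(f))_{i\ge 1} \in X_d$ for every $f \in X$ and there exist constants
      $0 < A \le B$ with
      \[
        A\,\|f\|_X \;\le\; \bigl\|(g_i(f))_{i\ge 1}\bigr\|_{X_d} \;\le\; B\,\|f\|_X ,
        \qquad f \in X .
      \]
      If, in addition, there is a bounded operator $S \colon X_d \to X$ with
      $S\bigl((g_i(f))_{i\ge 1}\bigr) = f$ for all $f \in X$, then $((g_i), S)$ is a
      Banach frame for $X$ with respect to $X_d$.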

  1. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relative high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  2. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  3. Cooperative processing data bases

    Science.gov (United States)

    Hasta, Juzar

    1991-01-01

    Cooperative processing for the 1990's using client-server technology is addressed. The main theme is concepts of downsizing from mainframes and minicomputers to workstations on a local area network (LAN). This document is presented in view graph form.

  4. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    OpenAIRE

    Karlheinz Schwarz; Rainer Breitling; Christian Allen

    2013-01-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized ...

  5. Paper-Based and Computer-Based Concept Mappings: The Effects on Computer Achievement, Computer Anxiety and Computer Attitude

    Science.gov (United States)

    Erdogan, Yavuz

    2009-01-01

    The purpose of this paper is to compare the effects of paper-based and computer-based concept mappings on computer hardware achievement, computer anxiety and computer attitude of the eight grade secondary school students. The students were randomly allocated to three groups and were given instruction on computer hardware. The teaching methods used…

  6. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is that computing is assigned to a great number of distributed computers, rather than a local computer ...

  7. Market research for Idaho Transportation Department linear referencing system.

    Science.gov (United States)

    2009-09-02

    For over 30 years, the Idaho Transportation Department (ITD) has had an LRS called MACS (MilePoint And Coded Segment), which is being implemented on a mainframe using a COBOL/CICS platform. As ITD began embracing newer technologies and moving tow...

  8. Pioneering system made information affordable | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-10-27

    Oct 27, 2010 ... Developed in the 1970s by IDRC, a pioneering software tool is still helping organizations in ... Website for the MINISIS software suite of products ... powerful enough to handle ISIS, at about a third of the cost of mainframes.

  9. Normalizing the causality between time series

    Science.gov (United States)

    Liang, X. San

    2015-08-01

    Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market.

  10. Nuclear plant analyzer development at INEL

    International Nuclear Information System (INIS)

    Laats, E.T.; Russell, K.D.; Stewart, H.D.

    1983-01-01

    The Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission (NRC) has sponsored development of a software-hardware system called the Nuclear Plant Analyzer (NPA). This paper describes the status of the NPA project at the INEL after one year of development. When completed, the NPA will be an integrated network of analytical tools for performing reactor plant analyses. Development of the NPA in FY-1983 progressed along two parallel pathways, namely conceptual planning and software development. Regarding NPA planning, an extensive effort was conducted to define the functional requirements of the NPA, the conceptual design, and hardware needs. Regarding the software development conducted in FY-1983, all development was aimed toward demonstrating the basic concept and feasibility of the NPA. Nearly all software was developed and resides on the INEL twin Control Data Corporation 176 mainframe computers

  11. The third level trigger and output event unit of the UA1 data-acquisition system

    International Nuclear Information System (INIS)

    Cittolin, S.; Demoulin, M.; Fucci, A.; Haynes, W.; Martin, B.; Porte, J.P.; Sphicas, P.

    1989-01-01

    The upgraded UA1 experiment utilizes twelve 3081/E emulators for its third-level trigger system. The system is interfaced to VME, and is controlled by 68000 microprocessor VME boards on the input and output. The output controller communicates with an IBM 9375 mainframe via the CERN-IBM developed VICI interface. The events selected by the emulators are output on IBM-3480 cassettes. The user interface to this system is based on a series of Macintosh personal computers connected to the VME bus. These Macs are also used for developing software for the emulators and for monitoring the entire system. The same configuration has also been used for offline event reconstruction. A description of the system, together with details of both the online and offline modes of operation and an evaluation of its performance are presented. (orig.)

  12. The third level trigger and output event unit of the UA1 data-acquisition system

    Science.gov (United States)

    Cittolin, S.; Demoulin, M.; Fucci, A.; Haynes, W.; Martin, B.; Porte, J. P.; Sphicas, P.

    1989-12-01

    The upgraded UA1 experiment utilizes twelve 3081/E emulators for its third-level trigger system. The system is interfaced to VME, and is controlled by 68000 microprocessor VME boards on the input and output. The output controller communicates with an IBM 9375 mainframe via the CERN-IBM developed VICI interface. The events selected by the emulators are output on IBM-3480 cassettes. The user interface to this system is based on a series of Macintosh personal computers connected to the VME bus. These Macs are also used for developing software for the emulators and for monitoring the entire system. The same configuration has also been used for offline event reconstruction. A description of the system, together with details of both the online and offline modes of operation and an evaluation of its performance are presented.

  13. STOMP, Subsurface Transport Over Multiple Phases, theory guide

    International Nuclear Information System (INIS)

    White, M.D.; Oostrom, M.

    1996-10-01

    This guide describes the simulator's governing equations, constitutive functions and numerical solution algorithms of the STOMP (Subsurface Transport Over Multiple Phases) simulator, a scientific tool for analyzing multiple phase subsurface flow and transport. The STOMP simulator's fundamental purpose is to produce numerical predictions of thermal and hydrologic flow and transport phenomena in variably saturated subsurface environments, which are contaminated with volatile or nonvolatile organic compounds. Auxiliary applications include numerical predictions of solute transport processes including radioactive chain decay processes. In writing these guides for the STOMP simulator, the authors have assumed that the reader comprehends concepts and theories associated with multiple-phase hydrology, heat transfer, thermodynamics, radioactive chain decay, and nonhysteretic relative permeability, saturation-capillary pressure constitutive functions. The authors further assume that the reader is familiar with the computing environment on which they plan to compile and execute the STOMP simulator. The STOMP simulator requires an ANSI FORTRAN 77 compiler to generate an executable code. The memory requirements for executing the simulator are dependent on the complexity of physical system to be modeled and the size and dimensionality of the computational domain. Likewise execution speed depends on the problem complexity, size and dimensionality of the computational domain, and computer performance. One-dimensional problems of moderate complexity can be solved on conventional desktop computers, but multidimensional problems involving complex flow and transport phenomena typically require the power and memory capabilities of workstation or mainframe type computer systems

  14. Neural computation and the computational theory of cognition.

    Science.gov (United States)

    Piccinini, Gualtiero; Bahar, Sonya

    2013-04-01

    We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism-neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation. Copyright © 2012 Cognitive Science Society, Inc.

  15. Kodak Optical Disk and Microfilm Technologies Carve Niches in Specific Applications.

    Science.gov (United States)

    Gallenberger, John; Batterton, John

    1989-01-01

    Describes the Eastman Kodak Company's microfilm and optical disk technologies and their applications. Topics discussed include WORM technology; retrieval needs and cost effective archival storage needs; engineering applications; jukeboxes; optical storage options; systems for use with mainframes and microcomputers; and possible future…

  16. Neural Computation and the Computational Theory of Cognition

    Science.gov (United States)

    Piccinini, Gualtiero; Bahar, Sonya

    2013-01-01

    We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism--neural processes are computations in the…

  17. Jackson State University's Center for Spatial Data Research and Applications: New facilities and new paradigms

    Science.gov (United States)

    Davis, Bruce E.; Elliot, Gregory

    1989-01-01

    Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographical Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts and new demands in traditional computer science and geographic training. The Center is not merely another computer lab but is one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a VAX mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripherals are used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental data base, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.

  18. Computer-aided design and computer science technology

    Science.gov (United States)

    Fulton, R. E.; Voigt, S. J.

    1976-01-01

    A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.

  19. The Evolution of Software in High Energy Physics

    International Nuclear Information System (INIS)

    Brun, René

    2012-01-01

    The paper reviews the evolution of the software in High Energy Physics from the time of expensive mainframes to grids and clouds systems using thousands of multi-core processors. It focuses on the key parameters or events that have shaped the current software infrastructure.

  20. Rights management technologies: A good choice for securing electronic healthrecords?

    NARCIS (Netherlands)

    Petkovic, M.; Katzenbeisser, S.; Kursawe, K.; Pohlmann, N.; Reimer, H.; Schneider, W.

    2007-01-01

    Advances in healthcare IT bring new concerns with respect to privacy and security. Security-critical patient data no longer resides on mainframes physically isolated within an organization, where physical security measures can be taken to defend the data and the system. Modern solutions are heading

  1. 76 FR 6839 - ActiveCore Technologies, Inc., Battery Technologies, Inc., China Media1 Corp., Dura Products...

    Science.gov (United States)

    2011-02-08

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] ActiveCore Technologies, Inc., Battery Technologies, Inc., China Media1 Corp., Dura Products International, Inc. (n/k/a Dexx Corp.), Global Mainframe Corp., GrandeTel Technologies, Inc., Magna Entertainment Corp. (n/k/a Reorganized Magna Entertainment...

  2. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    Directory of Open Access Journals (Sweden)

    Karlheinz Schwarz

    2013-09-01

    Full Text Available Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized below. In each section a further focusing will be provided by occasionally organizing special issues on topics of high interests, collecting papers on fundamental work in the field. More applied papers should be submitted to their corresponding specialist journals. To help us achieve our goal with this journal, we have an excellent editorial board to advise us on the exciting current and future trends in computation from methodology to application. We very much look forward to hearing all about the research going on across the world. [...

  3. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr); Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala

  4. Computer architecture fundamentals and principles of computer design

    CERN Document Server

    Dumas II, Joseph D

    2005-01-01

    Introduction to Computer Architecture; What is Computer Architecture?; Architecture vs. Implementation; Brief History of Computer Systems; The First Generation; The Second Generation; The Third Generation; The Fourth Generation; Modern Computers - The Fifth Generation; Types of Computer Systems; Single Processor Systems; Parallel Processing Systems; Special Architectures; Quality of Computer Systems; Generality and Applicability; Ease of Use; Expandability; Compatibility; Reliability; Success and Failure of Computer Architectures and Implementations; Quality and the Perception of Quality; Cost Issues; Architectural Openness, Market Timi

  5. Computer surety: computer system inspection guidance. [Contains glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  6. Curriculum Development through YTS Modular Credit Accumulation.

    Science.gov (United States)

    Further Education Unit, London (England).

    This document reports the evaluation of the collaboratively developed Modular Training Framework (MainFrame), a British curriculum development project, built around a commitment to a competency-based, modular credit accumulation program. The collaborators were three local education authorities (LEAs), those of Bedfordshire, Haringey, and Sheffield,…

  7. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...
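
    To make the benchmarking notion above concrete, the following is a minimal, generic sketch (not the BONFIRE tooling itself): it times two interchangeable implementations of the same task under identical conditions and reports their relative performance. The repeat count and workload size are arbitrary illustration values.

```python
import time
import statistics

def benchmark(fn, *args, repeats=5):
    """Run fn several times and return the median wall-clock time in seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def sum_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    return sum(range(n))

if __name__ == "__main__":
    n = 1_000_000
    t_loop = benchmark(sum_loop, n)
    t_builtin = benchmark(sum_builtin, n)
    # Relative score: how much faster one "solution" is than the other
    print(f"loop: {t_loop:.4f}s  builtin: {t_builtin:.4f}s  "
          f"speedup: {t_loop / t_builtin:.2f}x")
```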

  8. A survey of computational physics introductory computational science

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2008-01-01

    Computational physics is a rapidly growing subfield of computational science, in large part because computers can solve previously intractable problems or simulate natural processes that do not have analytic solutions. The next step beyond Landau's First Course in Scientific Computing and a follow-up to Landau and Páez's Computational Physics, this text presents a broad survey of key topics in computational physics for advanced undergraduates and beginning graduate students, including new discussions of visualization tools, wavelet analysis, molecular dynamics, and computational fluid dynamics

  9. Conversion of the COBRA-IV-I code from CDC CYBER to HP 9000/700 version

    International Nuclear Information System (INIS)

    Sohn, D. S.; Yoo, Y. J.; Nahm, K. Y.; Hwang, D. H.

    1996-01-01

    COBRA-IV-I is a multichannel analysis code for the thermal-hydraulic analysis of rod bundle nuclear fuel elements and cores based on the subchannel approach. The existing COBRA-IV-I code is the Control Data Corporation (CDC) CYBER version, which has limitations on the computer core storage and gives some inconvenience to the user interface. To solve these problems, we have converted the COBRA-IV-I code from the CDC CYBER mainframe to a Hewlett Packard (HP) 9000/700-series workstation version, and have verified the converted code. As a result, we have found almost no difference between the two versions in their calculation results. Therefore we expect the HP 9000/700 version of the COBRA-IV-I code to be the basis for the future development of an improved multichannel analysis code under a more convenient user environment. (author). 3 tabs., 2 figs., 8 refs

  10. WAM-E user's manual

    International Nuclear Information System (INIS)

    Rayes, L.G.; Riley, J.E.

    1986-07-01

    The WAM-E series of mainframe computer codes has been developed to efficiently analyze the large binary models (e.g., fault trees) used to represent the logic relationships within and between the systems of a nuclear power plant or other large, multisystem entity. These codes have found wide application in reliability and safety studies of nuclear power plant systems. There are now nine codes in the WAM-E series, with six (WAMBAM/WAMTAP, WAMCUT, WAMCUT-II, WAMFM, WAMMRG, and SPASM) classified as Type A Production codes and the other three (WAMFTP, WAMTOP, and WAMCONV) classified as Research codes. This document serves as a combined User's Guide, Programmer's Manual, and Theory Reference for the codes, with emphasis on the Production codes. To that end, the manual is divided into four parts: Part I, Introduction; Part II, Theory and Numerics; Part III, WAM-E User's Guide; and Part IV, WAMMRG Programmer's Manual

  11. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). Illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics; emphasis on algorithmic advances that will allow re-application in other...

  12. Performance awareness execution performance of HEP codes on RISC platforms,issues and solutions

    CERN Document Server

    Yaari, R; Yaari, Refael; Jarp, Sverre

    1995-01-01

    The work described in this paper was started during the migration of Aleph's production jobs from the IBM mainframe/CRAY supercomputer to several RISC/Unix workstation platforms. The aim was to understand why Aleph did not obtain the performance on the RISC platforms that was "promised" after a CERN Unit comparison between these RISC platforms and the IBM mainframe. Remedies were also sought. Since the work with the Aleph jobs in turn led to the related task of understanding compilers and their options, the conditions under which the CERN benchmarks (and other benchmarks) were run, kernel routines and frequently used CERNLIB routines, the whole undertaking expanded to try to look at all the factors that influence the performance of High Energy Physics (HEP) jobs in general. Finally, key performance issues were reviewed against the programs of one of the LHC collaborations (Atlas) with the hope that the conclusions would be of long- term interest during the establishment of their simulation, reconstruction and...

  13. Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute

    Directory of Open Access Journals (Sweden)

    Dang Hung

    2017-07-01

    Full Text Available We consider privacy-preserving computation of big data using trusted computing primitives with limited private memory. Simply ensuring that the data remains encrypted outside the trusted computing environment is insufficient to preserve data privacy, for data movement observed during computation could leak information. While it is possible to thwart such leakage using generic solutions such as ORAM [42], designing efficient privacy-preserving algorithms is challenging. Besides computation efficiency, it is critical to keep trusted code bases lean, for large ones are unwieldy to vet and verify. In this paper, we advocate a simple approach wherein many basic algorithms (e.g., sorting) can be made privacy-preserving by adding a step that securely scrambles the data before feeding it to the original algorithms. We call this approach Scramble-then-Compute (StC), and give a sufficient condition whereby existing external memory algorithms can be made privacy-preserving via StC. This approach facilitates code-reuse, and its simplicity contributes to a smaller trusted code base. It is also general, allowing algorithm designers to leverage an extensive body of known efficient algorithms for better performance. Our experiments show that StC could offer up to 4.1× speedups over known, application-specific alternatives.
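
    As a toy illustration of the Scramble-then-Compute idea summarized above (and nothing more than that), the sketch below randomly permutes the input before handing it to an unmodified algorithm. The real StC construction relies on a cryptographically secure, oblivious shuffle inside trusted hardware; the plain random.shuffle used here only mimics the composition pattern.

```python
import random

def scramble_then_compute(data, compute, rng=None):
    """Toy illustration of the StC pattern: randomly permute the input before
    handing it to the unmodified algorithm. In the paper the scramble is a
    cryptographically secure, oblivious shuffle inside trusted hardware;
    here it is just random.shuffle, for demonstration only."""
    rng = rng or random.Random()
    scrambled = list(data)
    rng.shuffle(scrambled)          # the "scramble" step
    return compute(scrambled)       # the original, unmodified algorithm

if __name__ == "__main__":
    records = [42, 7, 19, 3, 88, 1]
    print(scramble_then_compute(records, sorted))   # [1, 3, 7, 19, 42, 88]
```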

  14. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....

  15. Computer scientist looks at reliability computations

    International Nuclear Information System (INIS)

    Rosenthal, A.

    1975-01-01

    Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well-known class of problems which almost certainly have no fast solution algorithms is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to be in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques which can greatly reduce the effective size of a wide variety of realistic systems are explored
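
    The decomposition techniques mentioned above ultimately reduce a system to small series/parallel blocks of independent components, for which exact reliability is cheap to compute. A minimal sketch of those two building blocks follows; the component reliabilities are made-up numbers for illustration.

```python
def series(*rel):
    """Reliability of independent components in series: all must work."""
    r = 1.0
    for p in rel:
        r *= p
    return r

def parallel(*rel):
    """Reliability of independent components in parallel: at least one works."""
    q = 1.0
    for p in rel:
        q *= (1.0 - p)
    return 1.0 - q

if __name__ == "__main__":
    # A small example system: two redundant pumps (0.9 each) feeding one valve (0.95).
    pumps = parallel(0.9, 0.9)        # 0.99
    system = series(pumps, 0.95)      # 0.9405
    print(f"pump pair: {pumps:.4f}, system: {system:.4f}")
```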

  16. MINIZOO in de Benelux : Structure and use of a database of skin irritating organisms

    NARCIS (Netherlands)

    Bronswijk, van J.E.M.H.; Reichl, E.R.

    1986-01-01

    The MINIZOO database is structured within the standard software package SIRv2 (= Scientific Information Retrieval version 2). This flexible program is installed on the university mainframe (a CYBER 180). The program dBASE II, employed on a microcomputer (MICROSOL), can be used for part of data entry and

  17. Heterotic computing: exploiting hybrid computational devices.

    Science.gov (United States)

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Mathematical and numerical models to achieve high speed with special-purpose parallel processors

    International Nuclear Information System (INIS)

    Cheng, H.S.; Wulff, W.; Mallen, A.N.

    1986-01-01

    Historically, safety analyses and plant dynamic simulations have been and still are being carried out by means of detailed FORTRAN codes on expensive mainframe computers in time-consuming batch processing mode. These codes have grown to be so expensive to execute that their utilization depends increasingly on the availability of very expensive supercomputers. Thus, advanced technology for high-speed, low-cost, and accurate plant dynamic simulations is very much needed. Ideally, a low-cost facility based on a modern minicomputer can be dedicated to the staff of a power plant, which is easy and convenient to use, and which can simulate plant transients realistically at faster-than-real-time speeds. Such a simulation capability can enhance safety and plant utilization. One such simulation facility that has been developed is the Brookhaven National Laboratory (BNL) Plant Analyzer, currently set up for boiling water reactor plant simulations at up to seven times faster than real-time process speeds. The principal hardware components of the BNL Plant Analyzer are two units of special-purpose parallel processors, the AD10 of Applied Dynamics International and a PDP-11/34 host computer

  19. Lattice gauge calculation in particle theory

    International Nuclear Information System (INIS)

    Barkai, D.; Moriarty, K.J.M.; Rebbi, C.; Brookhaven National Lab., Upton, NY

    1985-01-01

    There are many problems in particle physics which cannot be treated analytically, but are amenable to numerical solution using today's most powerful computers. Prominent among such problems are those encountered in the theory of strong interactions, where the resolution of fundamental issues such as demonstrating quark confinement or evaluating hadronic structure is rooted in a successful description of the behaviour of a very large number of dynamical variables in non-linear interaction. This paper briefly outlines the mathematical problems met in the formulation of the quantum field theory for strong interactions, the motivation for numerical methods of resolution and the algorithms which are currently being used. Such algorithms require very large amounts of memory and computation and, because of their organized structure, are ideally suited for implementation on mainframes with vectorized architecture. While the details of the actual implementation will be covered in other contributions to this conference, this paper will present an account of the most important physics results obtained up to now and will conclude with a survey of open problems in particle theory which could be solved numerically in the near future. (orig.)

  20. Lattice gauge calculation in particle theory

    International Nuclear Information System (INIS)

    Barkai, D.; Moriarty, K.J.M.; Rebbi, C.

    1985-01-01

    There are many problems in particle physics which cannot be treated analytically, but are amenable to numerical solution using today's most powerful computers. Prominent among such problems are those encountered in the theory of strong interactions, where the resolution of fundamental issues such as demonstrating quark confinement or evaluating hadronic structure is rooted in a successful description of the behavior of a very large number of dynamical variables in non-linear interaction. This paper briefly outlines the mathematical problems met in the formulation of the quantum field theory for strong interactions, the motivation for numerical methods of resolution and the algorithms which are currently being used. Such algorithms require very large amounts of memory and computation and, because of their organized structure, are ideally suited for implementation on mainframes with vectorized architecture. While the details of the actual implementation will be covered in other contributions to this conference, this paper will present an account of the most important physics results obtained up to now and will conclude with a survey of open problems in particle theory which could be solved numerically in the near future

  1. Lattice gauge calculation in particle theory

    Energy Technology Data Exchange (ETDEWEB)

    Barkai, D [Control Data Corp., Fort Collins, CO (USA)]; Moriarty, K J.M. [Dalhousie Univ., Halifax, Nova Scotia (Canada). Inst. for Computational Studies]; Rebbi, C [European Organization for Nuclear Research, Geneva (Switzerland); Brookhaven National Lab., Upton, NY (USA). Physics Dept.]

    1985-05-01

    There are many problems in particle physics which cannot be treated analytically, but are amenable to numerical solution using today's most powerful computers. Prominent among such problems are those encountered in the theory of strong interactions, where the resolution of fundamental issues such as demonstrating quark confinement or evaluating hadronic structure is rooted in a successful description of the behaviour of a very large number of dynamical variables in non-linear interaction. This paper briefly outlines the mathematical problems met in the formulation of the quantum field theory for strong interactions, the motivation for numerical methods of resolution and the algorithms which are currently being used. Such algorithms require very large amounts of memory and computation and, because of their organized structure, are ideally suited for implementation on mainframes with vectorized architecture. While the details of the actual implementation will be covered in other contributions to this conference, this paper will present an account of the most important physics results obtained up to now and will conclude with a survey of open problems in particle theory which could be solved numerically in the near future.

  2. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
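
    As a small illustration of the fine-grained "unit" testing advocated above, the sketch below tests a toy numerical kernel (a composite trapezoid rule) against an analytic result with an explicit tolerance. Production climate models are typically written in Fortran; this Python example only illustrates the testing pattern, not the authors' tooling.

```python
import math
import unittest

def trapezoid(f, a, b, n):
    """Composite trapezoid rule, standing in for a small numerical kernel."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

class TestTrapezoid(unittest.TestCase):
    def test_against_analytic_integral(self):
        # The integral of sin on [0, pi] is exactly 2; allow discretization error.
        approx = trapezoid(math.sin, 0.0, math.pi, 1000)
        self.assertAlmostEqual(approx, 2.0, places=5)

    def test_zero_width_interval(self):
        self.assertAlmostEqual(trapezoid(math.sin, 1.0, 1.0, 10), 0.0)

if __name__ == "__main__":
    unittest.main()
```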

  3. Characteristics of potential repository wastes

    International Nuclear Information System (INIS)

    Notz, K.J.

    1989-01-01

    The Office of Civilian Radioactive Waste Management (OCRWM) is responsible for the spent fuels and other wastes that will be disposed of in a geologic repository. The two major sources of these materials are commercial light-water reactor (LWR) spent fuel and immobilized high-level waste (HLW). Other wastes that may require long-term isolation include non-LWR spent fuels and miscellaneous sources such as activated metals. Detailed characterizations are required for all of these potential repository wastes. These characterizations include physical, chemical, and radiological properties. The latter must take into account decay as a function of time. This information has been extracted from primary data sources, evaluated, and assembled in a Characteristics Data Base which provides data in four formats: hard copy standard reports, menu-driven personal computer (PC) data bases, program-level PC data bases, and mainframe computer files. The Characteristics Data Base provides a standard set of self-consistent data to the various areas of responsibility including systems integration and waste stream analysis, storage, transportation, and geologic disposal. The data will be used for design studies, evaluation of alternatives, and system optimization by OCRWM and supporting contractors. 7 refs., 5 figs., 7 tabs

  4. System engineering workstations - critical tool in addressing waste storage, transportation, or disposal

    International Nuclear Information System (INIS)

    Mar, B.W.

    1987-01-01

    The ability to create, evaluate, operate, and manage waste storage, transportation, and disposal systems (WSTDSs) is greatly enhanced when automated tools are available to support the generation of the voluminous mass of documents and data associated with the system engineering of the program. A system engineering workstation is an optimized set of hardware and software that provides such automated tools to those performing system engineering functions. This paper explores the functions that need to be performed by a WSTDS system engineering workstation. While the latter stages of a major WSTDS may require a mainframe computer and specialized software systems, most of the required system engineering functions can be supported by a system engineering workstation consisting of a personal computer and commercial software. These findings suggest system engineering workstations for WSTDS applications will cost less than $5000 per unit, and the payback on the investment can be realized in a few months. In most cases the major cost element is not the capital costs of hardware or software, but the cost to train or retrain the system engineers in the use of the workstation and to ensure that the system engineering functions are properly conducted

  5. Control system architecture: The standard and non-standard models

    International Nuclear Information System (INIS)

    Thuot, M.E.; Dalesio, L.R.

    1993-01-01

    Control system architecture development has followed the advances in computer technology through mainframes to minicomputers to micros and workstations. This technology advance and increasingly challenging accelerator data acquisition and automation requirements have driven control system architecture development. In summarizing the progress of control system architecture at the last International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS) B. Kuiper asserted that the system architecture issue was resolved and presented a "standard model". The "standard model" consists of a local area network (Ethernet or FDDI) providing communication between front end microcomputers, connected to the accelerator, and workstations, providing the operator interface and computational support. Although this model represents many present designs, there are exceptions including reflected memory and hierarchical architectures driven by requirements for widely dispersed, large channel count or tightly coupled systems. This paper describes the performance characteristics and features of the "standard model" to determine if the requirements of "non-standard" architectures can be met. Several possible extensions to the "standard model" are suggested including software as well as the hardware architectural features

  6. An overview of the NASA electronic components information management system

    Science.gov (United States)

    Kramer, G.; Waterbury, S.

    1991-01-01

    The NASA Parts Project Office (NPPO) comprehensive data system to support all NASA Electric, Electronic, and Electromechanical (EEE) parts management and technical data requirements is described. A phase delivery approach is adopted, comprising four principal phases. Phases 1 and 2 support Space Station Freedom (SSF) and use a centralized architecture with all data and processing kept on a mainframe computer. Phases 3 and 4 support all NASA centers and projects and implement a distributed system architecture, in which data and processing are shared among networked database servers. The Phase 1 system, which became operational in February of 1990, implements a core set of functions. Phase 2, scheduled for release in 1991, adds functions to the Phase 1 system. Phase 3, to be prototyped beginning in 1991 and delivered in 1992, introduces a distributed system, separate from the Phase 1 and 2 system, with a refined semantic data model. Phase 4 extends the data model and functionality of the Phase 3 system to provide support for the NASA design community, including integration with Computer Aided Design (CAD) environments. Phase 4 is scheduled for prototyping in 1992 to 93 and delivery in 1994.

  7. Optical Computing

    OpenAIRE

    Woods, Damien; Naughton, Thomas J.

    2008-01-01

    We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  9. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  10. Soft computing in computer and information science

    CERN Document Server

    Fray, Imed; Pejaś, Jerzy

    2015-01-01

    This book presents a carefully selected and reviewed collection of papers presented during the 19th Advanced Computer Systems conference ACS-2014. The Advanced Computer Systems conference concentrated from its beginning on methods and algorithms of artificial intelligence. Subsequent years brought new areas of interest concerning technical informatics related to soft computing and some more technological aspects of computer science such as multimedia and computer graphics, software engineering, web systems, information security and safety or project management. These topics are represented in the present book under the categories Artificial Intelligence, Design of Information and Multimedia Systems, Information Technology Security and Software Technologies.

  11. Use of the computer program in a cloud computing

    Directory of Open Access Journals (Sweden)

    Radovanović Sanja

    2013-01-01

    Full Text Available Cloud computing represents a specific kind of networking, in which a computer program simulates the operation of one or more server computers. In terms of copyright, all technological processes that take place within cloud computing are covered by the notion of copying computer programs, and by the exclusive right of reproduction. However, this right suffers some limitations in order to allow normal use of the computer program by users. Based on the fact that cloud computing is a virtualized network, the issue of normal use of the computer program requires putting all aspects of the permitted copying into the context of a specific computing environment and specific processes within the cloud. In this sense, the paper points out that the user of a computer program in cloud computing needs to obtain the consent of the right holder for any act which he undertakes using the program. In other words, copyright in cloud computing applies at full scale, and thus the freedom of contract (in the case of this particular restriction) as well.

  12. Quantum Computing and the Limits of the Efficiently Computable

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I'll discuss how computational complexity---the study of what can and can't be feasibly computed---has been interacting with physics in interesting and unexpected ways. I'll first give a crash course about computer science's P vs. NP problem, as well as about the capabilities and limits of quantum computers. I'll then touch on speculative models of computation that would go even beyond quantum computers, using (for example) hypothetical nonlinearities in the Schrodinger equation. Finally, I'll discuss BosonSampling ---a proposal for a simple form of quantum computing, which nevertheless seems intractable to simulate using a classical computer---as well as the role of computational complexity in the black hole information puzzle.

  13. COMPARATIVE STUDY OF CLOUD COMPUTING AND MOBILE CLOUD COMPUTING

    OpenAIRE

    Nidhi Rajak*, Diwakar Shukla

    2018-01-01

    The present era is that of Information and Communication Technology (ICT), and much research is going on in Cloud Computing and Mobile Cloud Computing, on topics such as security issues, data management, load balancing and so on. Cloud computing provides services to the end user over the Internet, and the primary objectives of this computing are resource sharing and pooling among the end users. Mobile Cloud Computing is a combination of Cloud Computing and Mobile Computing. Here, data is stored in...

  14. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  15. Quantum Computing's Classical Problem, Classical Computing's Quantum Problem

    OpenAIRE

    Van Meter, Rodney

    2013-01-01

    Tasked with the challenge to build better and better computers, quantum computing and classical computing face the same conundrum: the success of classical computing systems. Small quantum computing systems have been demonstrated, and intermediate-scale systems are on the horizon, capable of calculating numeric results or simulating physical systems far beyond what humans can do by hand. However, to be commercially viable, they must surpass what our wildly successful, highly advanced classica...

  16. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  17. (SOXS) Mission Amish B. Shah , NM

    Indian Academy of Sciences (India)

    to estimate correct thresholds for flare detection. Memory check-out and read-out modes are usable for on-board diagnosis. 2.4 Data communication. This package provides a common interface between all SOXS packages and the spacecraft bus. It minimizes the chance of damage to the mainframe bus because of an anomaly in a package.

  18. How Organizations Learn: A Communication Framework.

    Science.gov (United States)

    1986-04-01

    Bodensteiner (1970) reported a sharp increase in the frequency of face-to-face and telephone media when organizations experienced stress and uncertainty from... Organizations," in Jarke, M. (ed.), Managers, Micros, and Mainframes, New York: John Wiley and Sons, 1986. Huber, G., O'Connell, M. and Cummings, L

  19. Implementation of an FIR Band Pass Filter Using a Bit-Slice Processor.

    Science.gov (United States)

    1987-06-01

    SYSTEM SOFTWARE / FIRMWARE / HARDWARE (Figure 2.1, Instruction Levels [Ref. 5]) ... are microprogrammed (firmware) to enable physical control signals to the... Controllers and ALUs, pp. 9, 30-42, 70-71, Garland STPM Press, 1981. 6. Wolfe, C.F., "Bit-slice Processors Come To Mainframe Design," Electronics

  20. Automating Finance

    Science.gov (United States)

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  1. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  2. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  3. Synthetic Computation: Chaos Computing, Logical Stochastic Resonance, and Adaptive Computing

    Science.gov (United States)

    Kia, Behnam; Murali, K.; Jahed Motlagh, Mohammad-Reza; Sinha, Sudeshna; Ditto, William L.

    Nonlinearity and chaos can illustrate numerous behaviors and patterns, and one can select different patterns from this rich library of patterns. In this paper we focus on synthetic computing, a field that engineers and synthesizes nonlinear systems to obtain computation. We explain the importance of nonlinearity, and describe how nonlinear systems can be engineered to perform computation. More specifically, we provide an overview of chaos computing, a field that manually programs chaotic systems to build different types of digital functions. Also we briefly describe logical stochastic resonance (LSR), and then extend the approach of LSR to realize combinational digital logic systems via suitable concatenation of existing logical stochastic resonance blocks. Finally we demonstrate how a chaotic system can be engineered and mated with different machine learning techniques, such as artificial neural networks, random searching, and genetic algorithm, to design different autonomous systems that can adapt and respond to environmental conditions.
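
    A toy sketch of the chaos-computing idea described above: a single logistic-map element is "programmed" into different logic gates purely by the choice of initial state and output threshold. The parameter values below are one consistent choice (with input increment 0.25) that reproduces the printed truth tables; they are illustrative and not taken from the paper itself.

```python
def logistic(x, a=4.0):
    """One iteration of the chaotic logistic map."""
    return a * x * (1.0 - x)

# gate name -> (initial state x0, output threshold xt); inputs add DELTA each
GATES = {"AND": (0.0, 0.75), "OR": (0.125, 0.6875),
         "XOR": (0.25, 0.75), "NAND": (0.375, 0.6875)}
DELTA = 0.25

def chaos_gate(gate, i1, i2):
    x0, xt = GATES[gate]
    x = x0 + DELTA * i1 + DELTA * i2      # encode the two binary inputs
    return 1 if logistic(x) > xt else 0   # iterate once, then threshold

if __name__ == "__main__":
    for gate in GATES:
        table = [chaos_gate(gate, a, b) for a in (0, 1) for b in (0, 1)]
        # AND [0,0,0,1], OR [0,1,1,1], XOR [0,1,1,0], NAND [1,1,1,0]
        print(gate, table)
```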

  4. Explorations in computing an introduction to computer science

    CERN Document Server

    Conery, John S

    2010-01-01

    Introduction: Computation; The Limits of Computation; Algorithms; A Laboratory for Computational Experiments. The Ruby Workbench: Introducing Ruby and the RubyLabs environment for computational experiments; Interactive Ruby; Numbers; Variables; Methods; RubyLabs. The Sieve of Eratosthenes: An algorithm for finding prime numbers; The Sieve Algorithm; The mod Operator; Containers; Iterators; Boolean Values and the delete_if Method; Exploring the Algorithm; The sieve Method; A Better Sieve; Experiments with the Sieve. A Journey of a Thousand Miles: Iteration as a strategy for solving computational problems; Searching and Sortin
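
    The book's own sieve examples are written in Ruby within the RubyLabs environment; the following is a minimal Python sketch of the same Sieve of Eratosthenes idea, provided only to illustrate the algorithm named in the contents above.

```python
def sieve(n):
    """Return all primes <= n using the Sieve of Eratosthenes."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:
        if is_prime[p]:
            # cross out every multiple of p, starting at p*p
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```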

  5. Computer Networking Laboratory for Undergraduate Computer Technology Program

    National Research Council Canada - National Science Library

    Naghedolfeizi, Masoud

    2000-01-01

    ...) To improve the quality of education in the existing courses related to computer networks and data communications as well as other computer science courses such as programming languages and computer...

  6. Mathematics, Physics and Computer Sciences The computation of ...

    African Journals Online (AJOL)

    Mathematics, Physics and Computer Sciences: The computation of system matrices for biquadratic square finite elements. Global Journal of Pure and Applied Sciences.

  7. Computability, complexity, and languages fundamentals of theoretical computer science

    CERN Document Server

    Davis, Martin D; Rheinboldt, Werner

    1983-01-01

    Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science provides an introduction to the various aspects of theoretical computer science. Theoretical computer science is the mathematical study of models of computation. This text is composed of five parts encompassing 17 chapters, and begins with an introduction to the use of proofs in mathematics and the development of computability theory in the context of an extremely simple abstract programming language. The succeeding parts demonstrate the performance of abstract programming language using a macro expa

  8. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling have advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans

  9. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  10. High Thermal Conductivity Materials

    CERN Document Server

    Shinde, Subhash L

    2006-01-01

    Thermal management has become a ‘hot’ field in recent years due to a need to obtain high performance levels in many devices used in such diverse areas as space science, mainframe and desktop computers, optoelectronics and even Formula One racing cars! Thermal solutions require not just taking care of very high thermal flux, but also ‘hot spots’, where the flux densities can exceed 200 W/cm2. High thermal conductivity materials play an important role in addressing thermal management issues. This volume provides readers a basic understanding of the thermal conduction mechanisms in these materials and discusses how the thermal conductivity may be related to their crystal structures as well as microstructures developed as a result of their processing history. The techniques for accurate measurement of these properties on large as well as small scales have been reviewed. Detailed information on the thermal conductivity of diverse materials including aluminum nitride (AlN), silicon carbide (SiC), diamond, a...

  11. A perspective on software quality management using microcomputers in safety-related activities

    International Nuclear Information System (INIS)

    Braudt, T.E.; Pratl, M.J.

    1992-01-01

    Software Quality Management, often referred to as Software Quality Assurance or SQA, is a belief or mindset in establishing and protecting the value of software as a corporate asset. It is often expressed in terms of a basic methodology for ensuring adequate controls to maintain the integrity of the configuration of a software system. SQA applies to all activities germane to the acquisition, installation, operation and maintenance of software systems and is key to calculational accuracy and completeness in an Engineering and/or Scientific arena. Simply, it is a vital management tool for ensuring cost-effective utilization of information management resources. The basic principles of SQA apply equally to software applications in microcomputer environments and mainframe environments alike. Regardless of the nature of the computing environment, divisions of responsibilities or logistical difficulties, quality measures must be established to ensure accuracy, completeness, reliability, and reproducibility of the results of the software application. The extent to which these measures are applied should be based upon regulation, economics and practicality

  12. Development and operation of nuclear material accounting system of JAERI

    International Nuclear Information System (INIS)

    Obata, Takashi; Numata, Kazuyoshi; Namiki, Shinji; Yamauchi, Takahiro

    2003-01-01

    A mainframe computer had long been used for the nuclear material accounting system at the Japan Atomic Energy Research Institute (JAERI). For more flexible use and easier operation, a PC-based accounting system has been under development since 1999, and operation started in October 2002. This system consists of a server with database software and client PCs with original application software. The functions of this system are the input and editing of data, the creation of inspection correspondence data, and the creation of reports to the state. Furthermore, it is also possible to create Web applications that use accounting data at the user level by means of a programming language. At present the system is specific to JAERI, but there is a plan to develop it into a system that can also be used at other institutions and organizations. In this paper, the outline and operating status of the nuclear material accounting system of JAERI are presented. (author)

  13. Piping support load data base for nuclear plants

    International Nuclear Information System (INIS)

    Childress, G.G.

    1991-01-01

    Nuclear Station Modifications are continuous through the life of a Nuclear Power Plant. The NSM often impacts an existing piping system and its supports. Prior to implementation of the NSM, the modified piping system is qualified and the qualification documented. This manual review process is tedious and an obvious bottleneck to engineering productivity. Collectively, over 100,000 piping supports exist at Duke Power Company's Nuclear Stations. Engineering support must maintain proper documentation of all data for each support. Duke Power Company has designed and developed a mainframe-based system that: directly uses Support Load Summary data generated by a piping analysis computer program; streamlines the pipe support evaluation process; easily retrieves As-Built and NSM information for any pipe support from an NSM or AS-BUILT data base; and generates documentation for easy traceability of data to the information source. This paper discusses the design considerations for development of the Support Loads Database System (SLDB) and reviews the program functionality through the user menus

  14. Effects of display resolution and size on primary diagnosis of chest images using a high-resolution electronic work station

    International Nuclear Information System (INIS)

    Fuhrman, C.R.; Cooperstein, L.A.; Herron, J.; Good, W.F.; Good, B.; Gur, D.; Maitz, G.; Tabor, E.; Hoy, R.J.

    1987-01-01

    To evaluate the acceptability of electronically displayed planar images, the authors use a high-resolution work station. This system utilizes a high-resolution film digitizer (100-micro resolution) interfaced to a mainframe computer and two high-resolution (2,048 X 2,048) display devices (Azuray). In a clinically simulated multiobserver blind study (19 cases and five observers) a predetermined series of reading sessions is stored on magnetic disk and is transferred to the displays while the preceding set of images is being reviewed. Images can be linearly processed on the fly into 2,000 X 2,000 full resolution, 1,000 X 1,000 minified display, or 1,000 X 1,000 interpolated for full-size display. Results of the study indicate that radiologists accept but do not like significant minification (more than X2), and they rate 2,000 X 2,000 images as having better diagnostic quality than 1,000 X 1,000 images
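
    A small sketch of the two reduced-resolution pathways mentioned above, treating the nominal 2,000 x 2,000 and 1,000 x 1,000 sizes as 2048 and 1024 so that clean 2x2 blocks can be used: block averaging produces the minified view, and pixel replication stands in for the unspecified interpolation used for the full-size display.

```python
import numpy as np

def minify_2x(img):
    """Reduce a 2N x 2N image to N x N by averaging 2x2 blocks
    (one simple way to produce a 'minified display')."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale_2x_nearest(img):
    """Blow an N x N image back up to 2N x 2N by pixel replication,
    standing in for the unspecified interpolation used for full-size display."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

if __name__ == "__main__":
    full = np.random.rand(2048, 2048)       # stand-in for a digitized chest film
    small = minify_2x(full)                 # 1024 x 1024 minified view
    display = upscale_2x_nearest(small)     # 2048 x 2048 interpolated view
    print(full.shape, small.shape, display.shape)
```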

  15. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When particular compiler options and math libraries were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes of IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99MHz) was defined to be one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs such as Pentiums, i486 and DEC alpha and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have been evaluated in correlation with industry benchmark programs, namely SPECmark. (author)
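
    Converting measured run times into the relative "EGS4 Unit" scale defined above is a simple ratio against the HP9000/735 reference machine; the sketch below uses made-up timings purely to show the arithmetic, not measurements from the paper.

```python
# One "EGS4 Unit" is defined as the EGS4 performance of an HP9000/735 (99 MHz).
# Given wall-clock times for the same benchmark job, the relative score of any
# other machine is the ratio of run times. All numbers below are placeholders.

REFERENCE_TIME_S = 600.0          # hypothetical HP9000/735 run time for one job

def egs4_units(run_time_s, reference_time_s=REFERENCE_TIME_S):
    """Relative performance: > 1.0 means faster than the reference machine."""
    return reference_time_s / run_time_s

if __name__ == "__main__":
    measured = {"workstation A": 400.0, "PC B": 900.0}   # hypothetical timings
    for name, t in measured.items():
        print(f"{name}: {egs4_units(t):.2f} EGS4 Units")
```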

  16. Quantum Computing

    OpenAIRE

    Scarani, Valerio

    1998-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down into three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  17. Intelligent spatial ecosystem modeling using parallel processors

    International Nuclear Information System (INIS)

    Maxwell, T.; Costanza, R.

    1993-01-01

    Spatial modeling of ecosystems is essential if one's modeling goals include developing a relatively realistic description of past behavior and predictions of the impacts of alternative management policies on future ecosystem behavior. Development of these models has been limited in the past by the large amount of input data required and the difficulty of even large mainframe serial computers in dealing with large spatial arrays. These two limitations have begun to erode with the increasing availability of remote sensing data and GIS systems to manipulate it, and the development of parallel computer systems which allow computation of large, complex, spatial arrays. Although many forms of dynamic spatial modeling are highly amenable to parallel processing, the primary focus in this project is on process-based landscape models. These models simulate spatial structure by first compartmentalizing the landscape into some geometric design and then describing flows within compartments and spatial processes between compartments according to location-specific algorithms. The authors are currently building and running parallel spatial models at the regional scale for the Patuxent River region in Maryland, the Everglades in Florida, and Barataria Basin in Louisiana. The authors are also planning a project to construct a series of spatially explicit linked ecological and economic simulation models aimed at assessing the long-term potential impacts of global climate change
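
    A minimal sketch of the process-based landscape modeling pattern described above: a grid of cells, each with its own state, local within-cell dynamics, and exchange flows with its four neighbours. The logistic growth term, diffusion coefficient and grid size are illustrative choices, not values from the Patuxent, Everglades or Barataria models.

```python
import numpy as np

# Each cell carries one state variable (call it biomass), updated by a local
# process (logistic growth) plus between-cell flows (simple diffusion over the
# 4-neighbourhood). The per-cell locality is what makes such models a natural
# fit for parallel processors. All rates and sizes are illustrative.
GROWTH, CAPACITY, DIFFUSION, DT = 0.1, 100.0, 0.05, 1.0

def step(biomass):
    # local (within-cell) process
    local = GROWTH * biomass * (1.0 - biomass / CAPACITY)
    # between-cell flows: discrete Laplacian with no-flux boundaries via edge padding
    padded = np.pad(biomass, 1, mode="edge")
    laplacian = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * biomass)
    return biomass + DT * (local + DIFFUSION * laplacian)

if __name__ == "__main__":
    grid = np.zeros((50, 50))
    grid[25, 25] = 10.0              # seed a single cell
    for _ in range(200):
        grid = step(grid)
    print(f"total biomass: {grid.sum():.1f}, occupied cells: {(grid > 1).sum()}")
```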

  18. On teaching computer ethics within a computer science department.

    Science.gov (United States)

    Quinn, Michael J

    2006-04-01

    The author has surveyed a quarter of the accredited undergraduate computer science programs in the United States. More than half of these programs offer a 'social and ethical implications of computing' course taught by a computer science faculty member, and there appears to be a trend toward teaching ethics classes within computer science departments. Although the decision to create an 'in house' computer ethics course may sometimes be a pragmatic response to pressure from the accreditation agency, this paper argues that teaching ethics within a computer science department can provide students and faculty members with numerous benefits. The paper lists topics that can be covered in a computer ethics course and offers some practical suggestions for making the course successful.

  19. Parallel quantum computing in a single ensemble quantum computer

    International Nuclear Information System (INIS)

    Long Guilu; Xiao, L.

    2004-01-01

    We propose a parallel quantum computing mode for an ensemble quantum computer. In this mode, some qubits are in pure states while other qubits are in mixed states. It enables a single ensemble quantum computer to perform 'single-instruction-multidata' type of parallel computation. Parallel quantum computing can provide additional speedup in Grover's algorithm and Shor's algorithm. In addition, it also makes fuller use of qubit resources in an ensemble quantum computer. As a result, some qubits discarded in the preparation of an effective pure state in the Schulman-Vazirani and the Cleve-DiVincenzo algorithms can be reutilized

  20. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Full Text Available Spatiotemporal computation implements a variety of different algorithms. When big data are involved, desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  1. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  2. Further computer appreciation

    CERN Document Server

    Fry, T F

    2014-01-01

    Further Computer Appreciation provides comprehensive coverage of the principles and aspects of computer appreciation. The book starts by describing the development of computers from the first to the third computer generations, the development of processors and storage systems, and the present position of computers and future trends. The text tackles the basic elements, concepts and functions of digital computers, computer arithmetic, input media and devices, and computer output. The basic central processor functions, data storage and the organization of data by classification of computer files,

  3. Cloud Computing Fundamentals

    Science.gov (United States)

    Furht, Borko

    In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.

  4. International Conference of Intelligence Computation and Evolutionary Computation ICEC 2012

    CERN Document Server

    Intelligence Computation and Evolutionary Computation

    2013-01-01

    The 2012 International Conference of Intelligence Computation and Evolutionary Computation (ICEC 2012) was held on July 7, 2012 in Wuhan, China. The conference was sponsored by the Information Technology & Industrial Engineering Research Center. ICEC 2012 is a forum for the presentation of new research results in intelligent computation and evolutionary computation. Cross-fertilization of intelligent computation, evolutionary computation, evolvable hardware and newly emerging technologies is strongly encouraged. The forum aims to bring together researchers, developers, and users from around the world, in both industry and academia, to share state-of-the-art results, to explore new areas of research and development, and to discuss emerging issues facing intelligent computation and evolutionary computation.

  5. ZIVIS: A City Computing Platform Based on Volunteer Computing

    International Nuclear Information System (INIS)

    Antoli, B.; Castejon, F.; Giner, A.; Losilla, G.; Reynolds, J. M.; Rivero, A.; Sangiao, S.; Serrano, F.; Tarancon, A.; Valles, R.; Velasco, J. L.

    2007-01-01

    Volunteer computing has emerged as a new form of distributed computing. Unlike other computing paradigms such as Grids, which tend to be based on complex architectures, volunteer computing has demonstrated a great ability to integrate dispersed, heterogeneous computing resources with ease. This article presents ZIVIS, a project which aims to deploy a city-wide computing platform in Zaragoza (Spain). ZIVIS is based on BOINC (Berkeley Open Infrastructure for Network Computing), a popular open source framework to deploy volunteer and desktop grid computing systems. A scientific code which simulates the trajectories of particles moving inside a stellarator fusion device has been chosen as the pilot application of the project. In this paper we describe the approach followed to port the code to the BOINC framework, as well as some novel techniques, based on standard Grid protocols, that we have used to access the output data present in the BOINC server from a remote visualizer. (Author)

  6. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword; Preface; Computing Paradigms; Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading; Cloud Computing Fundamentals; Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. [Figure 3: Number of events per month (data)] In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  8. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (Stream), and the new framework, OpenCL, which tries to unify the GPGPU computing models.
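
    As a companion to this record, the sketch below illustrates the data-parallel pattern that CUDA and OpenCL kernels express: the same arithmetic applied independently to every element of an array. It is a CPU-side stand-in written with NumPy (an assumption, not something used in the paper); on a GPU the per-element line would be the body of a kernel launched with one thread per element.

```python
# A minimal CPU-side sketch of the data-parallel pattern that CUDA/OpenCL
# kernels express: the same operation applied independently to every element.
# NumPy is used here only as a stand-in; it is not mentioned in the record.
import numpy as np

def saxpy(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Each element i computes a * x[i] + y[i]; on a GPU this line would be
    # the body of a kernel launched with one thread per element.
    return a * x + y

if __name__ == "__main__":
    n = 1_000_000
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    z = saxpy(2.0, x, y)
    print(z[:5])
```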

  9. COMPUTATIONAL THINKING

    Directory of Open Access Journals (Sweden)

    Evgeniy K. Khenner

    2016-01-01

    Full Text Available The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, which has been actively discussed over the last decade in the foreign scientific and educational literature, to substantiate its importance and practical utility, and to argue for its rightful place in Russian education. Methods. The research is based on an analysis of foreign studies of the phenomenon of computational thinking and of the ways it is formed in the process of education, and on comparing the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept of «computational thinking» is analyzed from the point of view of intuitive understanding and of scientific and applied aspects. It is shown how computational thinking has evolved along with the development of computer hardware and software. The practice-oriented interpretation of computational thinking that is dominant among educators is described, along with some ways of forming it. It is shown that computational thinking is a metasubject result of general education as well as a tool of that education. From the point of view of the author, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described; this process is connected with the evolution of computer and information technologies, as well as with the increase in the number of tasks for whose effective solution computational thinking is required. The author substantiates the claim that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. New metasubject result of education associated with

  10. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed, and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following is found: 1) there is only a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of the computer memory required by the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse-mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of outer iterations compared with free iteration. (author)
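
    The record mentions a table-look-up method for the exponential function used inside the transport sweeps. The sketch below shows that idea in minimal form: pre-tabulate exp(-x) on a uniform grid and interpolate linearly instead of calling exp() in the inner loop. The range, table size, and NumPy implementation are illustrative assumptions, not values or code from the paper.

```python
# A minimal sketch of a table-look-up exponential: pre-tabulate exp(-x) on a
# uniform grid and interpolate linearly.  Range and resolution are assumed.
import numpy as np

X_MAX, N_TABLE = 20.0, 4096
_grid = np.linspace(0.0, X_MAX, N_TABLE)
_table = np.exp(-_grid)
_step = X_MAX / (N_TABLE - 1)

def exp_neg_lookup(x: np.ndarray) -> np.ndarray:
    """Approximate exp(-x) for 0 <= x <= X_MAX by linear interpolation."""
    x = np.clip(x, 0.0, X_MAX)
    idx = np.minimum((x / _step).astype(int), N_TABLE - 2)
    frac = x / _step - idx
    return _table[idx] * (1.0 - frac) + _table[idx + 1] * frac

if __name__ == "__main__":
    x = np.random.uniform(0.0, 10.0, 5)
    # Maximum absolute error of the lookup against the exact function.
    print(np.abs(exp_neg_lookup(x) - np.exp(-x)).max())
```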

  11. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  12. Natural Computing in Computational Finance Volume 4

    CERN Document Server

    O’Neill, Michael; Maringer, Dietmar

    2012-01-01

    This book follows on from Natural Computing in Computational Finance  Volumes I, II and III.   As in the previous volumes of this series, the  book consists of a series of  chapters each of  which was selected following a rigorous, peer-reviewed, selection process.  The chapters illustrate the application of a range of cutting-edge natural  computing and agent-based methodologies in computational finance and economics.  The applications explored include  option model calibration, financial trend reversal detection, enhanced indexation, algorithmic trading,  corporate payout determination and agent-based modeling of liquidity costs, and trade strategy adaptation.  While describing cutting edge applications, the chapters are  written so that they are accessible to a wide audience. Hence, they should be of interest  to academics, students and practitioners in the fields of computational finance and  economics.  

  13. Computer technology and computer programming research and strategies

    CERN Document Server

    Antonakos, James L

    2011-01-01

    Covering a broad range of new topics in computer technology and programming, this volume discusses encryption techniques, SQL generation, Web 2.0 technologies, and visual sensor networks. It also examines reconfigurable computing, video streaming, animation techniques, and more. Readers will learn about an educational tool and game to help students learn computer programming. The book also explores a new medical technology paradigm centered on wireless technology and cloud computing designed to overcome the problems of increasing health technology costs.

  14. Quantum computation

    International Nuclear Information System (INIS)

    Deutsch, D.

    1992-01-01

    As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into the design of future computers in order for their components to function. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)

  15. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  16. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

    Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why the terms are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly; it is very hard even for professionals to keep updated. Computer people do not

  17. Computer software.

    Science.gov (United States)

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of types, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  18. Computer Series, 3: Computer Graphics for Chemical Education.

    Science.gov (United States)

    Soltzberg, Leonard J.

    1979-01-01

    Surveys the current scene in computer graphics from the point of view of a chemistry educator. Discusses the scope of current applications of computer graphics in chemical education, and provides information about hardware and software systems to promote communication with vendors of computer graphics equipment. (HM)

  19. Framework for Computation Offloading in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dejan Kovachev

    2012-12-01

    Full Text Available The inherently limited processing power and battery lifetime of mobile phones hinder the possible execution of computationally intensive applications like content-based video analysis or 3D modeling. Offloading of computationally intensive application parts from the mobile platform into a remote cloud infrastructure or nearby idle computers addresses this problem. This paper presents our Mobile Augmentation Cloud Services (MACS) middleware, which enables adaptive extension of Android application execution from a mobile client into the cloud. Applications are developed by using the standard Android development pattern. The middleware does the heavy lifting of adaptive application partitioning, resource monitoring and computation offloading. These elastic mobile applications can run as usual mobile applications, but they can also use remote computing resources transparently. Two prototype applications using the MACS middleware demonstrate the benefits of the approach. The evaluation shows that applications which involve costly computations can benefit from offloading, with around 95% energy savings and significant performance gains compared to local execution only.

  20. Center for computer security: Computer Security Group conference. Summary

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-06-01

    Topics covered include: computer security management; detection and prevention of computer misuse; certification and accreditation; protection of computer security, perspective from a program office; risk analysis; secure accreditation systems; data base security; implementing R and D; key notarization system; DOD computer security center; the Sandia experience; inspector general's report; and backup and contingency planning. (GHT)

  1. Pascal-SC a computer language for scientific computation

    CERN Document Server

    Bohlender, Gerd; von Gudenberg, Jürgen Wolff; Rheinboldt, Werner; Siewiorek, Daniel

    1987-01-01

    Perspectives in Computing, Vol. 17: Pascal-SC: A Computer Language for Scientific Computation focuses on the application of Pascal-SC, a programming language developed as an extension of standard Pascal, in scientific computation. The publication first elaborates on the introduction to Pascal-SC, a review of standard Pascal, and real floating-point arithmetic. Discussions focus on optimal scalar product, standard functions, real expressions, program structure, simple extensions, real floating-point arithmetic, vector and matrix arithmetic, and dynamic arrays. The text then examines functions a

  2. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  3. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease with which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, the hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  4. Computer group

    International Nuclear Information System (INIS)

    Bauer, H.; Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schati, C.; Schmidt, A.; Schwind, D.; Weber, G.

    1983-01-01

    The computer group has been reorganized to take charge of the general-purpose computers DEC10 and VAX and of the computer network (Dataswitch, DECnet, IBM connections to GSI and IPP, preparation for Datex-P). (orig.)

  5. Mobile cloud computing for computation offloading: Issues and challenges

    Directory of Open Access Journals (Sweden)

    Khadija Akherfi

    2018-01-01

    Full Text Available Despite the evolution and enhancements that mobile devices have experienced, they are still considered as limited computing devices. Today, users become more demanding and expect to execute computationally intensive applications on their smartphone devices. Therefore, Mobile Cloud Computing (MCC) integrates mobile computing and Cloud Computing (CC) in order to extend capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs) such as limited battery lifetime, limited processing capabilities, and limited storage capacity by offloading the execution and workload to other rich systems with better performance and resources. This paper presents the current offloading frameworks, computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores different important parameters based on which the frameworks are implemented, such as offloading method and level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that require further research.
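
    To make the offloading decision concrete, the toy sketch below weighs local execution time against transfer-plus-cloud execution time, which is the kind of trade-off the surveyed frameworks evaluate. The cost model, its parameters, and the function names are illustrative assumptions and are not taken from any specific framework in the record.

```python
# A toy sketch of an offloading decision rule: offload a task when the
# estimated remote time (data transfer + cloud execution) beats local
# execution.  The model and parameters are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required
    input_bytes: float   # data that must be shipped to the cloud

def should_offload(task: Task, local_hz: float, cloud_hz: float,
                   uplink_bps: float) -> bool:
    t_local = task.cycles / local_hz
    t_remote = task.input_bytes * 8 / uplink_bps + task.cycles / cloud_hz
    return t_remote < t_local

if __name__ == "__main__":
    t = Task(cycles=5e9, input_bytes=2e6)
    # Slow phone CPU, fast cloud, 10 Mbit/s uplink -> offloading wins here.
    print(should_offload(t, local_hz=1.5e9, cloud_hz=3.0e10, uplink_bps=1e7))
```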

  6. Smart(er) Research

    DEFF Research Database (Denmark)

    Pries-Heje, Jan

    2016-01-01

    This is an answer and an elaboration to Carsten Sørensens’ “The Curse of the Smart Machine?”. My answer disagrees with the postulate of a mainframe focus within the IS field. Instead I suggest that it is a struggle between old and new science. The answer then agrees with the notion that we need n...

  7. Standard high-reliability integrated circuit logic packaging. [for deep space tracking stations

    Science.gov (United States)

    Slaughter, D. W.

    1977-01-01

    A family of standard, high-reliability hardware used for packaging digital integrated circuits is described. The design transition from early prototypes to production hardware is covered and future plans are discussed. Interconnections techniques are described as well as connectors and related hardware available at both the microcircuit packaging and main-frame level. General applications information is also provided.

  8. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Full Text Available Cloud computing was, and will remain, a new way of providing Internet services and computing. This computing approach builds on many existing services, such as the Internet, grid computing and Web services. As a system, cloud computing aims to provide on-demand services at a more acceptable price and with more acceptable infrastructure. It is precisely the transition from the computer to a service offered to consumers as a product delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics it offers. It is a theoretical paper. Keywords: Cloud computing, QoS, quality of cloud computing

  9. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, testing and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centers, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  10. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  11. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  12. New computational paradigms changing conceptions of what is computable

    CERN Document Server

    Cooper, SB; Sorbi, Andrea

    2007-01-01

    This superb exposition of a complex subject examines new developments in the theory and practice of computation from a mathematical perspective. It covers topics ranging from classical computability to complexity, from biocomputing to quantum computing.

  13. Computing at Stanford.

    Science.gov (United States)

    Feigenbaum, Edward A.; Nielsen, Norman R.

    1969-01-01

    This article provides a current status report on the computing and computer science activities at Stanford University, focusing on the Computer Science Department, the Stanford Computation Center, the recently established regional computing network, and the Institute for Mathematical Studies in the Social Sciences. Also considered are such topics…

  14. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today:  the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud Computing will give the possibility for millions of users to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  15. Computational Streetscapes

    Directory of Open Access Journals (Sweden)

    Paul M. Torrens

    2016-09-01

    Full Text Available Streetscapes have presented a long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber

  16. Abstract quantum computing machines and quantum computational logics

    Science.gov (United States)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.

  17. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; NA NA NA Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  18. Illustrated computer tomography

    International Nuclear Information System (INIS)

    Takahashi, S.

    1983-01-01

    This book provides the following information: basic aspects of computed tomography; atlas of computed tomography of the normal adult; clinical application of computed tomography; and radiotherapy planning and computed tomography

  19. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  20. COMPUTER-ASSISTED ACCOUNTING

    Directory of Open Access Journals (Sweden)

    SORIN-CIPRIAN TEIUŞAN

    2009-01-01

    Full Text Available What is computer-assisted accounting? Where is the place and what is the role of the computer in the financial-accounting activity? What are the position and importance of the computer in the accountant’s activity? All these are questions that require scientific research in order to find the answers. The paper approaches the issue of the support the computer grants to the accountant in organizing and managing the accounting activity. Starting from the notions of accounting and computer, the concept of computer-assisted accounting is introduced; it has a general character and refers to accounting performed with the help of the computer, or to using the computer to automate the procedures performed by the person doing the accounting; it is a concept used to define the computer applications of the accounting activity. The arguments for using the computer to assist accounting concern the informatization of accounting, the automation of financial-accounting activities, and the endowment of contemporary accounting with modern technology.

  1. Engineering computations at the national magnetic fusion energy computer center

    International Nuclear Information System (INIS)

    Murty, S.

    1983-01-01

    The National Magnetic Fusion Energy Computer Center (NMFECC) was established by the U.S. Department of Energy's Division of Magnetic Fusion Energy (MFE). The NMFECC headquarters is located at Lawrence Livermore National Laboratory. Its purpose is to apply large-scale computational technology and computing techniques to the problems of controlled thermonuclear research. In addition to providing cost effective computing services, the NMFECC also maintains a large collection of computer codes in mathematics, physics, and engineering that is shared by the entire MFE research community. This review provides a broad perspective of the NMFECC, and a list of available codes at the NMFECC for engineering computations is given

  2. Reversible computing fundamentals, quantum computing, and applications

    CERN Document Server

    De Vos, Alexis

    2010-01-01

    Written by one of the few top internationally recognized experts in the field, this book concentrates on those topics that will remain fundamental, such as low power computing, reversible programming languages, and applications in thermodynamics. It describes reversible computing from various points of view: Boolean algebra, group theory, logic circuits, low-power electronics, communication, software, quantum computing. It is this multidisciplinary approach that makes it unique.Backed by numerous examples, this is useful for all levels of the scientific and academic community, from undergr

  3. Democratizing Computer Science

    Science.gov (United States)

    Margolis, Jane; Goode, Joanna; Ryoo, Jean J.

    2015-01-01

    Computer science programs are too often identified with a narrow stratum of the student population, often white or Asian boys who have access to computers at home. But because computers play such a huge role in our world today, all students can benefit from the study of computer science and the opportunity to build skills related to computing. The…

  4. Touchable Computing: Computing-Inspired Bio-Detection.

    Science.gov (United States)

    Chen, Yifan; Shi, Shaolong; Yao, Xin; Nakano, Tadashi

    2017-12-01

    We propose a new computing-inspired bio-detection framework called touchable computing (TouchComp). Under the rubric of TouchComp, the best solution is the cancer to be detected, the parameter space is the tissue region at high risk of malignancy, and the agents are the nanorobots loaded with contrast medium molecules for tracking purpose. Subsequently, the cancer detection procedure (CDP) can be interpreted from the computational optimization perspective: a population of externally steerable agents (i.e., nanorobots) locate the optimal solution (i.e., cancer) by moving through the parameter space (i.e., tissue under screening), whose landscape (i.e., a prescribed feature of tissue environment) may be altered by these agents but the location of the best solution remains unchanged. One can then infer the landscape by observing the movement of agents by applying the "seeing-is-sensing" principle. The term "touchable" emphasizes the framework's similarity to controlling by touching the screen with a finger, where the external field for controlling and tracking acts as the finger. Given this analogy, we aim to answer the following profound question: can we look to the fertile field of computational optimization algorithms for solutions to achieve effective cancer detection that are fast, accurate, and robust? Along this line of thought, we consider the classical particle swarm optimization (PSO) as an example and propose the PSO-inspired CDP, which differs from the standard PSO by taking into account realistic in vivo propagation and controlling of nanorobots. Finally, we present comprehensive numerical examples to demonstrate the effectiveness of the PSO-inspired CDP for different blood flow velocity profiles caused by tumor-induced angiogenesis. The proposed TouchComp bio-detection framework may be regarded as one form of natural computing that employs natural materials to compute.
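
    Since the record builds on classical particle swarm optimization (PSO), the sketch below shows a minimal standard PSO minimizing a toy function. It is not the PSO-inspired CDP itself, which additionally models in vivo propagation and external steering of nanorobots; all coefficients here are common textbook choices and NumPy is an assumed tool.

```python
# A minimal sketch of classical particle swarm optimization (PSO), the
# algorithm the record's PSO-inspired CDP builds on.  It minimizes a toy
# 2-D function and does not model the in vivo nanorobot constraints added
# in the paper.  Coefficients are standard illustrative choices.
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, f(gbest)

if __name__ == "__main__":
    sphere = lambda p: float(np.sum(p ** 2))
    print(pso(sphere))   # should converge near the origin
```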

  5. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-10-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.
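
    The contrast between the two estimation styles compared here, batch least squares versus sequential filtering, can be sketched on a toy linear problem as below. This is a deliberate simplification for illustration; it is not the GTDS or RTOD/E orbit model, and NumPy is an assumed tool.

```python
# Toy illustration of batch least squares (all measurements at once) versus a
# sequential, Kalman-style recursive least-squares update (one measurement at
# a time).  The linear model is illustrative only, not an orbit model.
import numpy as np

rng = np.random.default_rng(1)
n = 200
H = np.column_stack([np.ones(n), np.linspace(0, 10, n)])   # design matrix
x_true = np.array([2.0, -0.5])
z = H @ x_true + 0.1 * rng.standard_normal(n)              # noisy measurements

# Batch least squares: one solve using all data.
x_batch, *_ = np.linalg.lstsq(H, z, rcond=None)

# Sequential estimate: update state and covariance measurement by measurement.
x_seq = np.zeros(2)
P = np.eye(2) * 1e3          # large initial uncertainty
R = 0.1 ** 2                 # measurement noise variance
for h, zi in zip(H, z):
    K = P @ h / (h @ P @ h + R)            # gain
    x_seq = x_seq + K * (zi - h @ x_seq)   # measurement update
    P = P - np.outer(K, h) @ P
print(x_batch, x_seq)                      # both approach x_true
```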

  7. Camac interface for digitally recording infrared camera images

    International Nuclear Information System (INIS)

    Dyer, G.R.

    1986-01-01

    An instrument has been built to store the digital signals from a modified imaging infrared scanner directly in a digital memory. This procedure avoids the signal-to-noise degradation and dynamic range limitations associated with successive analog-to-digital and digital-to-analog conversions and the analog recording method normally used to store data from the scanner. This technique also allows digital data processing methods to be applied directly to recorded data and permits processing and image reconstruction to be done using either a mainframe or a microcomputer. If a suitable computer and CAMAC-based data collection system are already available, digital storage of up to 12 scanner images can be implemented for less than $1750 in materials cost. Each image is stored as a frame of 60 x 80 eight-bit pixels, with an acquisition rate of one frame every 16.7 ms. The number of frames stored is limited only by the available memory. Initially, data processing for this equipment was done on a VAX 11-780, but images may also be displayed on the screen of a microcomputer. Software for setting the displayed gray scale, generating contour plots and false-color displays, and subtracting one image from another (e.g., background suppression) has been developed for IBM-compatible personal computers
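
    The digital processing described, a stack of 60 x 80 eight-bit frames with one frame subtracted from another for background suppression, can be sketched as follows. The frame geometry comes from the record; the pixel values are synthetic and NumPy is an assumed tool.

```python
# A small sketch of the processing the record describes: a stack of 60 x 80
# eight-bit frames held in memory, with one frame subtracted from another for
# background suppression.  Frame geometry from the record; data synthetic.
import numpy as np

N_FRAMES, ROWS, COLS = 12, 60, 80
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(N_FRAMES, ROWS, COLS), dtype=np.uint8)

def subtract_background(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Signed difference of two 8-bit frames (background suppression)."""
    return frame.astype(np.int16) - background.astype(np.int16)

diff = subtract_background(frames[5], frames[0])
print(diff.shape, diff.min(), diff.max())
```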

  8. Liberty Icons: Linguistic and Multimodal Notes on the Cultural Roots of Digital Technologies

    Directory of Open Access Journals (Sweden)

    Ilaria Moschini

    2013-12-01

    Full Text Available Since the famous 1984 Apple Television Ad, personal computers and the Internet have become icons of popular culture that embody libertarian values. Indeed, they have been described as the necessary tools for the empowerment of the individual and the realization of a peer-to-peer decentralized democracy. This libertarian representation has become a frame, perceived as universal and celebrated all over the world, on a daily basis, through the creation of user-generated contents. I believe that both personal computers and the Internet are American cultural products not only because of the very peculiar historical blending out of which they originate – a mixture of Cold War industrial research culture, US counterculture and DIY ethos (Turner 2006) – but, mainly, because of the founding concept that they are associated with, i.e. freedom. Adopting a functional linguistic/multimodal perspective, my article will explore the conceptual/semantic mapping of digital discourse through the analysis of a corpus of texts that goes from 1984 Apple Ad to Hillary Clinton’s Internet Freedom Speech in order to show how the current mainframe global discourse on digital technologies is permeated with a concept of freedom that combines the US founding rhetoric of liberty together with cybernetics, gnosticism and psychedelic narrations.

  9. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences [in lieu of American National Standards Institute (ANSI) standards], and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  10. International Conference on Computer, Communication and Computational Sciences

    CERN Document Server

    Mishra, Krishn; Tiwari, Shailesh; Singh, Vivek

    2017-01-01

    The exchange of information and innovative ideas is necessary to accelerate the development of technology. With the advent of technology, intelligent and soft computing techniques came into existence with a wide scope of implementation in the engineering sciences. Keeping this ideology in preference, this book includes insights that reflect the ‘Advances in Computer and Computational Sciences’ from upcoming researchers and leading academicians across the globe. It contains high-quality peer-reviewed papers of the International Conference on Computer, Communication and Computational Sciences (ICCCCS 2016), held during 12-13 August 2016 in Ajmer, India. These papers are arranged in the form of chapters. The content of the book is divided into two volumes that cover a variety of topics such as intelligent hardware and software design, advanced communications, power and energy optimization, intelligent techniques used in the internet of things, intelligent image processing, advanced software engineering, evolutionary and ...

  11. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  12. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. Also, to introduce the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  13. 3rd International Conference on Computational Mathematics and Computational Geometry

    CERN Document Server

    Ravindran, Anton

    2016-01-01

    This volume presents original research contributed to the 3rd Annual International Conference on Computational Mathematics and Computational Geometry (CMCGS 2014), organized and administered by Global Science and Technology Forum (GSTF). Computational Mathematics and Computational Geometry are closely related subjects, but are often studied by separate communities and published in different venues. This volume is unique in its combination of these topics. After the conference, which took place in Singapore, selected contributions were chosen for this volume and peer-reviewed. The section on Computational Mathematics contains papers that are concerned with developing new and efficient numerical algorithms for mathematical sciences or scientific computing. They also cover analysis of such algorithms to assess accuracy and reliability. The parts of this project that are related to Computational Geometry aim to develop effective and efficient algorithms for geometrical applications such as representation and computati...

  14. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  15. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  17. Usage of Cloud Computing Simulators and Future Systems For Computational Research

    OpenAIRE

    Lakshminarayanan, Ramkumar; Ramalingam, Rajasekar

    2016-01-01

    Cloud Computing is Internet-based computing, whereby shared resources, software and information are provided to computers and devices on demand, like the electricity grid. Currently, IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) are used as business models for Cloud Computing. Nowadays, the adoption and deployment of Cloud Computing is increasing in various domains, forcing researchers to conduct research in the area of Cloud Computing ...

  18. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 3: Design philosophy and programming details

    Science.gov (United States)

    Torak, L.J.

    1993-01-01

    A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water-flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of
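
    The storage scheme described above can be illustrated with a short sketch (Python here purely for readability; MODFE itself is Fortran): per-element coefficients are kept as single-subscripted arrays holding only the unique entries of each symmetric element matrix, and the complete coefficient matrix is assembled just prior to solution. The element connectivity and coefficient values below are made up for illustration.

        import numpy as np

        # Hypothetical 3-node triangular elements: each row lists global node indices.
        elements = np.array([[0, 1, 2], [1, 3, 2]])
        n_nodes = 4

        # Per-element coefficients kept as single-subscripted (1-D) arrays, as the
        # abstract describes: only the unique entries of each symmetric 3x3 element
        # matrix are stored -- 6 values per element instead of 9.
        element_coeffs = [
            np.array([2.0, -1.0, 1.5, -0.5, -0.3, 1.2]),   # element 0 (illustrative values)
            np.array([1.8, -0.7, 1.1, -0.4, -0.2, 0.9]),   # element 1 (illustrative values)
        ]

        def assemble(elements, element_coeffs, n_nodes):
            """Assemble the global symmetric coefficient matrix just prior to solution."""
            A = np.zeros((n_nodes, n_nodes))
            # Map the 6 stored values back onto the lower triangle of the element matrix.
            tri = [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]
            for nodes, coeffs in zip(elements, element_coeffs):
                for value, (i, j) in zip(coeffs, tri):
                    gi, gj = nodes[i], nodes[j]
                    A[gi, gj] += value
                    if gi != gj:
                        A[gj, gi] += value      # exploit symmetry
            return A

        A = assemble(elements, element_coeffs, n_nodes)
        b = np.ones(n_nodes)                    # made-up right-hand side
        print(np.linalg.solve(A, b))            # nodal solution of the assembled system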

  19. Future Computer Requirements for Computational Aerodynamics

    Science.gov (United States)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  20. Perbandingan Kemampuan Embedded Computer dengan General Purpose Computer untuk Pengolahan Citra

    Directory of Open Access Journals (Sweden)

    Herryawan Pujiharsono

    2017-08-01

    Full Text Available Advances in computer technology have led to image processing being widely developed to help people in many fields of work. However, not every field of work can be supported by image processing, because some do not lend themselves to the use of a computer, which has encouraged the development of image processing on microcontrollers or dedicated microprocessors. Advances in microcontrollers and microprocessors now make it possible to develop image processing on an embedded computer or single board computer (SBC). This study aims to test the ability of an embedded computer to process images and to compare the results with a general purpose computer. The tests were carried out by measuring the execution time of four image processing operations applied to ten image sizes. The results of this study show that the execution-time optimization of the embedded computer is better when compared with the general purpose computer, with the average execution time of the embedded computer being 4-5 times that of the general purpose computer, and with the largest image size that does not put excessive load on the CPU being 256x256 pixels for the embedded computer and 400x300 pixels for the general purpose computer.
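
    A minimal sketch of the benchmark procedure described above, assuming nothing about the actual operations used in the study: time a few stand-in image-processing operations over several image sizes and report the execution times, which could then be compared between an embedded computer and a general purpose computer.

        import time
        import numpy as np

        def mean_filter(img):      # simple 3x3 box blur via shifted sums
            return sum(np.roll(np.roll(img, dy, 0), dx, 1)
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

        def gradient(img):         # crude edge map from finite differences
            gy, gx = np.gradient(img)
            return np.hypot(gx, gy)

        def threshold(img):        # binary threshold at the mean intensity
            return (img > img.mean()).astype(np.uint8)

        def downsample(img):       # 2x decimation as a stand-in for resizing
            return img[::2, ::2]

        OPS = [mean_filter, gradient, threshold, downsample]
        SIZES = [64, 128, 256, 400]          # pixels per side, made-up test points

        for n in SIZES:
            img = np.random.rand(n, n)
            for op in OPS:
                t0 = time.perf_counter()
                op(img)
                dt = (time.perf_counter() - t0) * 1e3
                print(f"{op.__name__:12s} {n:4d}x{n:<4d} {dt:7.2f} ms")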

  1. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  2. Quantum analogue computing.

    Science.gov (United States)

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
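
    The cost of direct, analogue-style encoding can be made concrete with a back-of-the-envelope calculation, not taken from the paper: a classical machine tracking an n-qubit state needs 2^n complex amplitudes, so each extra qubit, like an extra bit of precision in an analogue encoding, doubles the memory footprint.

        # Back-of-the-envelope illustration (not from the paper): simulating an
        # n-qubit register classically requires 2**n complex amplitudes, so each
        # additional qubit doubles the memory needed to hold the state.
        BYTES_PER_AMPLITUDE = 16          # one complex128 value

        for n_qubits in (10, 20, 30, 40):
            amplitudes = 2 ** n_qubits
            gib = amplitudes * BYTES_PER_AMPLITUDE / 2**30
            print(f"{n_qubits:2d} qubits -> {amplitudes:>16,d} amplitudes "
                  f"~ {gib:,.1f} GiB of state")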

  3. Computers for imagemaking

    CERN Document Server

    Clark, D

    1981-01-01

    Computers for Image-Making tells the computer non-expert all he needs to know about Computer Animation. In the hands of expert computer engineers, computer picture-drawing systems have, since the earliest days of computing, produced interesting and useful images. As a result of major technological developments since then, it no longer requires the expert's skill to draw pictures; anyone can do it, provided they know how to use the appropriate machinery. This collection of specially commissioned articles reflects the diversity of user applications in this expanding field

  4. Quantum computer science

    CERN Document Server

    Lanzagorta, Marco

    2009-01-01

    In this text we present a technical overview of the emerging field of quantum computation along with new research results by the authors. What distinguishes our presentation from that of others is our focus on the relationship between quantum computation and computer science. Specifically, our emphasis is on the computational model of quantum computing rather than on the engineering issues associated with its physical implementation. We adopt this approach for the same reason that a book on computer programming doesn't cover the theory and physical realization of semiconductors. Another distin

  5. Polymorphous computing fabric

    Science.gov (United States)

    Wolinski, Christophe Czeslaw [Los Alamos, NM; Gokhale, Maya B [Los Alamos, NM; McCabe, Kevin Peter [Los Alamos, NM

    2011-01-18

    Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

  6. Know Your Personal Computer Introduction to Computers

    Indian Academy of Sciences (India)

    Series Article by Siddhartha Kumar Ghoshal, Resonance – Journal of Science Education, Volume 1, Issue 1, January 1996, pp. 48-55.

  7. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  10. Parallel computers and three-dimensional computational electromagnetics

    International Nuclear Information System (INIS)

    Madsen, N.K.

    1994-01-01

    The authors have continued to enhance their ability to use new massively parallel processing computers to solve time-domain electromagnetic problems. New vectorization techniques have improved the performance of their code DSI3D by factors of 5 to 15, depending on the computer used. New radiation boundary conditions and far-field transformations now allow the computation of radar cross-section values for complex objects. A new parallel-data extraction code has been developed that allows the extraction of data subsets from large problems, which have been run on parallel computers, for subsequent post-processing on workstations with enhanced graphics capabilities. A new charged-particle-pushing version of DSI3D is under development. Finally, DSI3D has become a focal point for several new Cooperative Research and Development Agreement activities with industrial companies such as Lockheed Advanced Development Company, Varian, Hughes Electron Dynamics Division, General Atomic, and Cray

  11. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  12. Attitudes towards Computer and Computer Self-Efficacy as Predictors of Preservice Mathematics Teachers' Computer Anxiety

    Science.gov (United States)

    Awofala, Adeneye O. A.; Akinoso, Sabainah O.; Fatade, Alfred O.

    2017-01-01

    The study investigated attitudes towards computer and computer self-efficacy as predictors of computer anxiety among 310 preservice mathematics teachers from five higher institutions of learning in Lagos and Ogun States of Nigeria using the quantitative research method within the blueprint of the descriptive survey design. Data collected were…

  13. Quantum computing

    International Nuclear Information System (INIS)

    Steane, Andrew

    1998-01-01

    The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from

  14. Quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Steane, Andrew [Department of Atomic and Laser Physics, University of Oxford, Clarendon Laboratory, Oxford (United Kingdom)

    1998-02-01

    The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from

  15. Seventh Medical Image Computing and Computer Assisted Intervention Conference (MICCAI 2012)

    CERN Document Server

    Miller, Karol; Nielsen, Poul; Computational Biomechanics for Medicine : Models, Algorithms and Implementation

    2013-01-01

    One of the greatest challenges for mechanical engineers is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, biomedical sciences, and medicine. This book is an opportunity for computational biomechanics specialists to present and exchange opinions on the opportunities of applying their techniques to computer-integrated medicine. Computational Biomechanics for Medicine: Models, Algorithms and Implementation collects the papers from the Seventh Computational Biomechanics for Medicine Workshop held in Nice in conjunction with the Medical Image Computing and Computer Assisted Intervention conference. The topics covered include: medical image analysis, image-guided surgery, surgical simulation, surgical intervention planning, disease prognosis and diagnostics, injury mechanism analysis, implant and prostheses design, and medical robotics.

  16. Spatial Computation

    Science.gov (United States)

    2003-12-01

    Computation and today’s microprocessors with the approach to operating system architecture, and the controversy between microkernels and monolithic kernels... Both Spatial Computation and microkernels break away a relatively monolithic architecture into individual lightweight pieces, well specialized... for their particular functionality. Spatial Computation removes global signals and control, in the same way microkernels remove the global address

  17. Computing Nash equilibria through computational intelligence methods

    Science.gov (United States)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization and differential evolution, to compute Nash equilibria of finite strategic games, as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
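
    The formulation in the abstract, Nash equilibria as global minima of a real-valued nonnegative function, can be sketched for a 2x2 bimatrix game. The snippet below uses SciPy's stock differential evolution as a stand-in for the evolutionary methods the paper studies; the game (matching pennies) and the gap function are illustrative choices, not taken from the paper.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Payoff matrices for a 2x2 bimatrix game (matching pennies, as an example).
        A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
        B = -A                                      # column player's payoffs

        def nash_gap(x):
            """Nonnegative function whose global minima (value 0) are Nash equilibria."""
            p = np.array([x[0], 1.0 - x[0]])        # row player's mixed strategy
            q = np.array([x[1], 1.0 - x[1]])        # column player's mixed strategy
            u_row, u_col = p @ A @ q, p @ B @ q     # expected payoffs at (p, q)
            # Best gain either player can obtain by deviating to a pure strategy.
            row_dev = np.max(A @ q) - u_row
            col_dev = np.max(p @ B) - u_col
            return max(row_dev, 0.0) + max(col_dev, 0.0)

        result = differential_evolution(nash_gap, bounds=[(0, 1), (0, 1)], seed=0, tol=1e-10)
        print("strategy probabilities:", result.x, "gap:", result.fun)   # ~[0.5, 0.5], ~0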

  18. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245
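
    The offloading trade-off discussed above can be illustrated with a toy cost model, which is not the paper's framework: offload a component when the estimated remote time (round trip plus data transmission plus cloud execution) beats the estimated local execution time. All numbers below are made up.

        def should_offload(workload_cycles, data_bytes,
                           local_hz=1.5e9, cloud_hz=12e9,
                           uplink_bps=5e6, rtt_s=0.08):
            """Toy cost model (illustrative numbers, not from the paper):
            offload when transmission plus remote execution beats local execution."""
            local_time = workload_cycles / local_hz
            remote_time = rtt_s + data_bytes * 8 / uplink_bps + workload_cycles / cloud_hz
            return remote_time < local_time, local_time, remote_time

        offload, t_local, t_remote = should_offload(workload_cycles=6e9, data_bytes=200_000)
        print(f"offload={offload}  local={t_local:.2f}s  remote={t_remote:.2f}s")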

  19. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    Full Text Available The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers a lightweight solution for computational offloading in MCC.

  20. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    to understand the computer as a material like any other material we would use for design, like wood, aluminum, or plastic. That as soon as the computer forms a composition with other materials it becomes just as approachable and inspiring as other smart materials. I present a series of investigations of what...... Computational Composite, and Telltale). Through the investigations, I show how the computer can be understood as a material and how it partakes in a new strand of materials whose expressions come to be in context. I uncover some of their essential material properties and potential expressions. I develop a way...

  1. Girls and Computing: Female Participation in Computing in Schools

    Science.gov (United States)

    Zagami, Jason; Boden, Marie; Keane, Therese; Moreton, Bronwyn; Schulz, Karsten

    2015-01-01

    Computer education, with a focus on Computer Science, has become a core subject in the Australian Curriculum and the focus of national innovation initiatives. Equal participation by girls, however, remains unlikely based on their engagement with computing in recent decades. In seeking to understand why this may be the case, a Delphi consensus…

  2. The challenge of a data storage hierarchy

    Science.gov (United States)

    Ruderman, Michael

    1992-01-01

    A discussion of Mesa Archival Systems' data archiving system is presented. This data archiving system is strictly a software system that is implemented on a mainframe and manages the data into permanent file storage. Emphasis is placed on the fact that any kind of client system on the network can be connected through the Unix interface of the data archiving system.

  3. Sparx PCA Module

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-25

    Sparx, a new environment for Cryo-EM image processing; Cryo-EM, Single particle reconstruction, principal component analysis; Hardware Req.: PC, MAC, Supercomputer, Mainframe, Multiplatform, Workstation. Software Req.: operating system is Unix; Compiler C++; type of files: source code, object library, executable modules, compilation instructions; sample problem input data. Location/transmission: http://sparx-em.org; User manual & paper: http://sparx-em.org;

  4. MACSSA (Macintosh Safeguards Systems Analyzer)

    International Nuclear Information System (INIS)

    Argentesi, F.; Costantini, L.; Kohl, M.

    1986-01-01

    This paper discusses MACSSA, a fully interactive menu-driven software system for accountancy of nuclear safeguards systems written for the Apple Macintosh. Plant inventory and inventory change records can be entered interactively or can be downloaded from a mainframe database. Measurement procedures and instrument parameters can be defined. Partial or total statistics on propagated errors are computed and shown in tabular or graphic form.

  5. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  6. Computer in radiology

    International Nuclear Information System (INIS)

    Kuesters, H.

    1985-01-01

    With this publication, the author presents the requirements that user-specific software should fulfill to achieve effective practice rationalisation through computer usage, and the hardware configuration necessary as basic equipment. This should make it more difficult in the future for sales representatives to sell radiologists unusable computer systems. Furthermore, questions are answered that were asked by computer-interested radiologists during the system presentation. On the one hand there still exists a prejudice against standard-text programmes, and on the other side undefined fears that handling a computer is too difficult and that one has to learn a computer language first to be able to work with computers. Finally, it is pointed out that real competitive advantages can be obtained through computer usage. (orig.) [de

  7. Computability and unsolvability

    CERN Document Server

    Davis, Martin

    1985-01-01

    ""A clearly written, well-presented survey of an intriguing subject."" - Scientific American. Classic text considers general theory of computability, computable functions, operations on computable functions, Turing machines self-applied, unsolvable decision problems, applications of general theory, mathematical logic, Kleene hierarchy, computable functionals, classification of unsolvable decision problems and more.

  8. Unconventional Quantum Computing Devices

    OpenAIRE

    Lloyd, Seth

    2000-01-01

    This paper investigates a variety of unconventional quantum computation devices, including fermionic quantum computers and computers that exploit nonlinear quantum mechanics. It is shown that unconventional quantum computing devices can in principle compute some quantities more rapidly than 'conventional' quantum computers.

  9. 75 FR 30839 - Privacy Act of 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer...

    Science.gov (United States)

    2010-06-02

    ... 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer Match No. 1048, IRS... Services (CMS). ACTION: Notice of renewal of an existing computer matching program (CMP) that has an...'' section below for comment period. DATES: Effective Dates: CMS filed a report of the Computer Matching...

  10. Advances in unconventional computing

    CERN Document Server

    2017-01-01

    Unconventional computing is a niche for interdisciplinary science, a cross-breed of computer science, physics, mathematics, chemistry, electronic engineering, biology, material science and nanotechnology. The aims of this book are to uncover and exploit principles and mechanisms of information processing in and functional properties of physical, chemical and living systems to develop efficient algorithms, design optimal architectures and manufacture working prototypes of future and emergent computing devices. This first volume presents theoretical foundations of the future and emergent computing paradigms and architectures. The topics covered are computability, (non-)universality and complexity of computation; physics of computation, analog and quantum computing; reversible and asynchronous devices; cellular automata and other mathematical machines; P-systems and cellular computing; infinity and spatial computation; chemical and reservoir computing. The book is the encyclopedia, the first ever complete autho...

  11. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    Directory of Open Access Journals (Sweden)

    P. O. Umenne

    2012-12-01

    Full Text Available Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and when the task terminates, the Agents send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. Swarm and HYDRA computer architectures for Agents’ execution were developed at the University of Surrey, UK in the 90s. The objective of the research was to develop a software-based computer architecture on which Agents execution could be explored. The combination of Intelligent Agents and HYDRA computer architecture gave rise to a new computer concept: the NET-Computer in which the computing resources reside on the Internet. The Internet computers form the hardware and software resources, and the user is provided with a simple interface to access the Internet and run user tasks. The Agents autonomously roam the Internet (NET-Computer executing the tasks. A growing segment of the Internet is E-Commerce for online shopping for products and services. The Internet computing resources provide a marketplace for product suppliers and consumers alike. Consumers are looking for suppliers selling products and services, while suppliers are looking for buyers. Searching the vast amount of information available on the Internet causes a great deal of problems for both consumers and suppliers. Intelligent Agents executing on the NET-Computer can surf through the Internet and select specific information of interest to the user. The simulation results show that Intelligent Agents executing HYDRA computer architecture could be applied in E-Commerce.

  12. Computational intelligence synergies of fuzzy logic, neural networks and evolutionary computing

    CERN Document Server

    Siddique, Nazmul

    2013-01-01

    Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence. Computational intelligence schemes are investigated with the development of a suitable framework for fuzzy logic, neural networks and evolutionary computing, neuro-fuzzy systems, evolutionary-fuzzy systems and evolutionary neural systems. Applications to linear and non-linear systems are discussed with examples. Key features: Covers all the aspect

  13. The digital computer

    CERN Document Server

    Parton, K C

    2014-01-01

    The Digital Computer focuses on the principles, methodologies, and applications of the digital computer. The publication takes a look at the basic concepts involved in using a digital computer, simple autocode examples, and examples of working advanced design programs. Discussions focus on transformer design synthesis program, machine design analysis program, solution of standard quadratic equations, harmonic analysis, elementary wage calculation, and scientific calculations. The manuscript then examines commercial and automatic programming, how computers work, and the components of a computer

  14. Computer assisted radiology

    International Nuclear Information System (INIS)

    Lemke, H.U.; Jaffe, C.C.; Felix, R.

    1993-01-01

    The proceedings of the CAR'93 symposium present the 126 oral papers and the 58 posters contributed to the four Technical Sessions entitled: (1) Image Management, (2) Medical Workstations, (3) Digital Image Generation - DIG, and (4) Application Systems - AS. Topics discussed in Session (1) are: picture archiving and communication systems, teleradiology, hospital information systems and radiological information systems, technology assessment and implications, standards, and data bases. Session (2) deals with computer vision, computer graphics, design and application, man-computer interaction. Session (3) goes into the details of the diagnostic examination methods such as digital radiography, MRI, CT, nuclear medicine, ultrasound, digital angiography, and multimodality imaging. Session (4) is devoted to computer-assisted techniques, such as computer-assisted radiological diagnosis, knowledge-based systems, computer-assisted radiation therapy and computer-assisted surgical planning. (UWA). 266 figs [de

  15. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing makes parallel computing come into people’s lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and Map Reduce respectively. Finally, it compares the MPI and OpenMP models with Map Reduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
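
    To make the programming-model comparison concrete, here is a minimal MapReduce-flavoured word count in plain Python (an illustration of the model only; OpenMP and MPI versions would express the same computation with threads and message passing respectively).

        from collections import defaultdict
        from itertools import chain

        documents = ["cloud computing grew from parallel computing",
                     "parallel computing and distributed computing"]

        # Map phase: each document independently emits (word, 1) pairs.
        mapped = chain.from_iterable(((w, 1) for w in doc.split()) for doc in documents)

        # Shuffle + reduce phase: group by key and sum the counts.
        counts = defaultdict(int)
        for word, one in mapped:
            counts[word] += one

        print(dict(counts))   # {'cloud': 1, 'computing': 4, ...}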

  16. Pacing a data transfer operation between compute nodes on a parallel computer

    Science.gov (United States)

    Blocksome, Michael A [Rochester, MN

    2011-09-13

    Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
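
    The pacing scheme described in the claim can be sketched at a very high level in Python, with queues standing in for the networks and the DMA engines (the real mechanism is a remote-get DMA operation in hardware): the origin sends a chunk, issues a pacing request, and only sends the next chunk once the pacing response arrives.

        import queue
        import threading

        CHUNK = 4            # application-message elements per transfer (illustrative)

        def origin_node(message, to_target, from_target):
            """Send the message chunk by chunk, pacing on the target's responses."""
            for i in range(0, len(message), CHUNK):
                to_target.put(("chunk", message[i:i + CHUNK]))
                to_target.put(("pacing_request", i))       # stand-in for a remote-get DMA op
                kind, _ = from_target.get()                # block until the pacing response
                assert kind == "pacing_response"
            to_target.put(("done", None))

        def target_node(to_target, from_target, received):
            """Receive chunks; answer each pacing request when ready for more data."""
            while True:
                kind, payload = to_target.get()
                if kind == "chunk":
                    received.extend(payload)
                elif kind == "pacing_request":
                    from_target.put(("pacing_response", payload))
                else:                                      # "done"
                    return

        to_target, from_target, received = queue.Queue(), queue.Queue(), []
        t = threading.Thread(target=target_node, args=(to_target, from_target, received))
        t.start()
        origin_node(list(range(10)), to_target, from_target)
        t.join()
        print(received)      # [0, 1, ..., 9]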

  17. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.
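
    The data-parallel character of such finite-difference algorithms can be hinted at with a tiny example, assuming nothing about the actual solver in the paper: an explicit update for 1-D diffusion written as a single whole-array expression, which is the style of operation that maps naturally onto SIMD machines like the Connection Machine.

        import numpy as np

        # 1-D diffusion as a stand-in for the fluid-dynamics equations: an explicit
        # finite-difference step is one whole-array (data-parallel) expression.
        nx, nu, dx = 200, 0.1, 1.0 / 200
        dt = 0.4 * dx**2 / nu            # respect the explicit stability limit
        u = np.exp(-((np.linspace(0, 1, nx) - 0.5) ** 2) / 0.01)   # initial bump

        for _ in range(500):
            # u[i] += nu*dt/dx^2 * (u[i+1] - 2*u[i] + u[i-1]) for all interior i at once
            u[1:-1] += nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

        print(u.max())                   # the peak decays as the bump diffuses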

  18. Computer performance evaluation of FACOM 230-75 computer system, (2)

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1980-08-01

    This report describes computer performance evaluations for the FACOM 230-75 computers in JAERI. The evaluations are performed on the following items: (1) Cost/benefit analysis of timesharing terminals, (2) Analysis of the response time of timesharing terminals, (3) Analysis of throughput time for batch job processing, (4) Estimation of current potential demands for computer time, (5) Determination of the appropriate number of card readers and line printers. These evaluations are done mainly from the standpoint of cost reduction of computing facilities. The techniques adopted are very practical ones. This report will be useful for those people who are concerned with the management of a computing installation. (author)
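
    One textbook way to carry out the kind of response-time analysis listed in item (2), not necessarily the method used in the report, is an M/M/1 queueing estimate:

        def mm1_response_time(arrival_rate, service_rate):
            """Textbook M/M/1 mean response time; valid only while utilisation < 1."""
            rho = arrival_rate / service_rate
            if rho >= 1.0:
                raise ValueError("system is saturated (utilisation >= 1)")
            return 1.0 / (service_rate - arrival_rate)

        # Illustrative numbers, not from the JAERI report: 0.8 jobs/s arriving at a
        # system that completes 1.0 job/s on average.
        print(f"mean response time: {mm1_response_time(0.8, 1.0):.1f} s")   # 5.0 s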

  19. Computations and interaction

    NARCIS (Netherlands)

    Baeten, J.C.M.; Luttik, S.P.; Tilburg, van P.J.A.; Natarajan, R.; Ojo, A.

    2011-01-01

    We enhance the notion of a computation of the classical theory of computing with the notion of interaction. In this way, we enhance a Turing machine as a model of computation to a Reactive Turing Machine that is an abstract model of a computer as it is used nowadays, always interacting with the user

  20. Symbiotic Cognitive Computing

    OpenAIRE

    Farrell, Robert G.; Lenchner, Jonathan; Kephjart, Jeffrey O.; Webb, Alan M.; Muller, MIchael J.; Erikson, Thomas D.; Melville, David O.; Bellamy, Rachel K.E.; Gruen, Daniel M.; Connell, Jonathan H.; Soroker, Danny; Aaron, Andy; Trewin, Shari M.; Ashoori, Maryam; Ellis, Jason B.

    2016-01-01

    IBM Research is engaged in a research program in symbiotic cognitive computing to investigate how to embed cognitive computing in physical spaces. This article proposes 5 key principles of symbiotic cognitive computing.  We describe how these principles are applied in a particular symbiotic cognitive computing environment and in an illustrative application.  

  1. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and their performance characteristics of such a hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics using the identified objectives of computing which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  2. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  4. Theory of computation

    CERN Document Server

    Tourlakis, George

    2012-01-01

    Learn the skills and acquire the intuition to assess the theoretical limitations of computer programming Offering an accessible approach to the topic, Theory of Computation focuses on the metatheory of computing and the theoretical boundaries between what various computational models can do and not do—from the most general model, the URM (Unbounded Register Machines), to the finite automaton. A wealth of programming-like examples and easy-to-follow explanations build the general theory gradually, which guides readers through the modeling and mathematical analysis of computational pheno

  5. Data acquisition system in TPE-1RM15

    International Nuclear Information System (INIS)

    Yagi, Yasuyuki; Yahagi, Eiichi; Hirano, Yoichi; Shimada, Toshio; Hirota, Isao; Maejima, Yoshiki

    1991-01-01

    The data acquisition system for the TPE-1RM15 reversed field pinch machine has been developed and recently completed. The data to be acquired consist of many channels of time series data which come from plasma diagnostics. The newly developed data acquisition system uses a CAMAC (Computer Automated Measurement And Control) system as the front-end data acquisition system and a micro-VAX II for control, file management and analyses. Special computer programs, DAQR/D, have been developed for the data acquisition routine. Experimental setting and process controlling items are managed by a parameter database in a shared common region and every task can easily refer to it. The acquired data are stored into a mass storage system (total of 1.3GBytes plus a magnetic tape system) including an optical disk system, which can save storage space and allow quick reference. At present, the CAMAC system has 88 (1MHz sampling) and 64 (5kHz sampling) channels corresponding to 1.6 MBytes per shot. The data acquisition system can finish one routine within 5 minutes with 1.6MBytes data depending on the amount of graphic outputs. The hardware and software of the system are specified so that the system can be easily expanded. The computer is connected to the AIST Ethernet and the system can be remotely accessed and the acquired data can be transferred to the mainframes on the network. Details about specifications and performance of the system are given in this report. (author)

  6. ENPEP and the microcomputer version of WASP-III: Overview and recent experience

    International Nuclear Information System (INIS)

    Buehring, W.A.; Wolsko, T.D.

    1987-01-01

    Argonne National Laboratory (ANL) has developed a microcomputer-based energy planning package entitled ENergy and Power Evaluation Program (ENPEP). It consists of seven technical modules, four commercial software packages, and an executive system that conveniently integrates the many options associated with performing energy studies. The seven technical modules and their functions are as follows: MACRO allows the user to specify macroeconomic growth (global or sectoral) that will be the drivers of energy demand. DEMAND projects energy demand based upon the macroeconomic growth information supplied in MACRO. PLANTDATA provides a library of technical data on electric generating plants that is used by BALANCE and ELECTRIC. BALANCE computes marketplace energy supply/demand balances over the study period. LOAD computes detailed electric load forecast information for use in ELECTRIC. ELECTRIC, the microcomputer (PC) version of WASP-III, calculates a minimum cost electric supply system to meet electric demand and reliability goals. IMPACTS calculates environmental impacts and resource requirements associated with energy supply system options. ENPEP provides the potential for energy planners in developing countries to carry out important studies without access to inconvenient and/or expensive mainframe computers. The ELECTRIC module of ENPEP provides electric system planners the opportunity to use the WASP-III model for expansion planning of electrical generating systems. Extensive efforts have been made in converting WASP-III to the microcomputer to provide user-friendly data entry forms and options for operations. (author). 3 refs, 20 figs

  7. Commercial space development needs cheap launchers

    Science.gov (United States)

    Benson, James William

    1998-01-01

    SpaceDev is in the market for a deep space launch, and we are not going to pay $50 million for it. There is an ongoing debate about the elasticity of demand related to launch costs. On the one hand there are the "big iron" NASA and DoD contractors who say that there is no market for small or inexpensive launchers, that lowering launch costs will not result in significantly more launches, and that the current uncompetitive pricing scheme is appropriate. On the other hand are commercial companies which compete in the real world, and who say that there would be innumerable new launches if prices were to drop dramatically. I participated directly in the microcomputer revolution, and saw first hand what happened to the big iron computer companies who failed to see or heed the handwriting on the wall. We are at the same stage in the space access revolution that personal computers were in the late '70s and early '80s. The global economy is about to be changed in ways that are just as unpredictable as those changes wrought after the introduction of the personal computer. Companies which fail to innovate and keep producing only big iron will suffer the same fate as IBM and all the now-extinct mainframe and minicomputer companies. A few will remain, but with a small share of the market, never again to be in a position to dominate.

  8. VLSI systems energy management from a software perspective – A literature survey

    Directory of Open Access Journals (Sweden)

    Prasada Kumari K.S.

    2016-09-01

    Full Text Available The increasing demand for ultra-low power electronic systems has motivated research in device technology and hardware design techniques. Experimental studies have proved that the hardware innovations for power reduction are fully exploited only with the proper design of upper layer software. Also, the software power and energy modelling and analysis – the first step towards energy reduction is complex due to the inter and intra dependencies of processors, operating systems, application software, programming languages and compilers. The subject is too vast; the paper aims to give a consolidated view to researchers in arriving at solutions to power optimization problems from a software perspective. The review emphasizes the fact that software design and implementation is to be viewed from system energy conservation angle rather than as an isolated process. After covering a global view of end to end software based power reduction techniques for micro sensor nodes to High Performance Computing systems, specific design aspects related to battery powered Embedded computing for mobile and portable systems are addressed in detail. The findings are consolidated into 2 major categories – those related to research directions and those related to existing industry practices. The emerging concept of Green Software with specific focus on mainframe computing is also discussed in brief. Empirical results on power saving are included wherever available. The paper concludes that only with the close co-ordination between hardware architect, software architect and system architect low energy systems can be realized.

  9. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
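
    A graph-level sketch of the idea, with a toy four-node topology and a plain breadth-first search standing in for the machine's actual routing hardware: when a link in the first network is marked defective, traffic that can no longer be routed there is sent over the second, independent network.

        from collections import deque

        def shortest_path(links, src, dst):
            """Breadth-first search over an undirected link list; None if unreachable."""
            adj = {}
            for a, b in links:
                adj.setdefault(a, []).append(b)
                adj.setdefault(b, []).append(a)
            prev, frontier = {src: None}, deque([src])
            while frontier:
                node = frontier.popleft()
                if node == dst:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = prev[node]
                    return path[::-1]
                for nxt in adj.get(node, []):
                    if nxt not in prev:
                        prev[nxt] = node
                        frontier.append(nxt)
            return None

        # Two independent networks over four compute nodes (toy topology).
        network_a = [(0, 1), (1, 2), (2, 3)]
        network_b = [(0, 2), (2, 1), (1, 3)]

        defective = (1, 2)                                   # fault detected in network A
        healthy_a = [l for l in network_a if l != defective and l[::-1] != defective]

        route = shortest_path(healthy_a, 0, 3) or shortest_path(network_b, 0, 3)
        print("route for traffic from node 0 to node 3:", route)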

  10. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography

    Science.gov (United States)

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-01-01

    OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume. ... The volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method. Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418
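
    As a toy illustration of what such a volumetry system computes (not the commercial algorithm evaluated in the paper): lobar volume is the number of voxels inside a lobe mask times the voxel volume, and the emphysematous volume is the subset of those voxels below a low-attenuation threshold (around -950 HU is a commonly used value).

        import numpy as np

        def lobar_volumes(ct_hu, lobe_mask, voxel_mm3, threshold_hu=-950):
            """Toy lobar volumetry: total and low-attenuation (emphysematous) volume in mL."""
            voxels_in_lobe = lobe_mask.sum()
            emphysema_voxels = np.count_nonzero((ct_hu < threshold_hu) & lobe_mask)
            to_ml = voxel_mm3 / 1000.0
            return voxels_in_lobe * to_ml, emphysema_voxels * to_ml

        # Synthetic 3-D "CT" and a made-up lobe mask, just to exercise the function.
        rng = np.random.default_rng(0)
        ct = rng.normal(-850, 120, size=(64, 64, 64))
        mask = np.zeros_like(ct, dtype=bool)
        mask[16:48, 16:48, 16:48] = True
        total_ml, emph_ml = lobar_volumes(ct, mask, voxel_mm3=1.0)
        print(f"lobe volume: {total_ml:.0f} mL, emphysematous volume: {emph_ml:.0f} mL")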

  11. Elementary EFL Teachers' Computer Phobia and Computer Self-Efficacy in Taiwan

    Science.gov (United States)

    Chen, Kate Tzuching

    2012-01-01

    The advent and application of computer and information technology has increased the overall success of EFL teaching; however, such success is hard to assess, and teachers prone to computer avoidance face negative consequences. Two major obstacles are high computer phobia and low computer self-efficacy. However, little research has been carried out…

  12. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    Full Text Available The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  13. Computers and Computation. Readings from Scientific American.

    Science.gov (United States)

    Fenichel, Robert R.; Weizenbaum, Joseph

    A collection of articles from "Scientific American" magazine has been put together at this time because the current period in computer science is one of consolidation rather than innovation. A few years ago, computer science was moving so swiftly that even the professional journals were more archival than informative; but today it is…

  14. Review of quantum computation

    International Nuclear Information System (INIS)

    Lloyd, S.

    1992-01-01

    Digital computers are machines that can be programmed to perform logical and arithmetical operations. Contemporary digital computers are "universal," in the sense that a program that runs on one computer can, if properly compiled, run on any other computer that has access to enough memory space and time. Any one universal computer can simulate the operation of any other; and the set of tasks that any such machine can perform is common to all universal machines. Since Bennett's discovery that computation can be carried out in a non-dissipative fashion, a number of Hamiltonian quantum-mechanical systems have been proposed whose time-evolutions over discrete intervals are equivalent to those of specific universal computers. The first quantum-mechanical treatment of computers was given by Benioff, who exhibited a Hamiltonian system with a basis whose members corresponded to the logical states of a Turing machine. In order to make the Hamiltonian local, in the sense that its structure depended only on the part of the computation being performed at that time, Benioff found it necessary to make the Hamiltonian time-dependent. Feynman discovered a way to make the computational Hamiltonian both local and time-independent by incorporating the direction of computation in the initial condition. In Feynman's quantum computer, the program is a carefully prepared wave packet that propagates through different computational states. Deutsch presented a quantum computer that exploits the possibility of existing in a superposition of computational states to perform tasks that a classical computer cannot, such as generating purely random numbers, and carrying out superpositions of computations as a method of parallel processing. In this paper, we show that such computers, by virtue of their common function, possess a common form for their quantum dynamics.
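
    As a concrete illustration of the kind of computation the review surveys (not an example taken from it), the sketch below simulates Deutsch's algorithm with plain state vectors: a superposition is prepared, a one-bit oracle is queried once, and interference reveals whether the function is constant or balanced. The oracle construction and variable names are this sketch's own assumptions.

        # Hedged illustration, not taken from the review: Deutsch's algorithm,
        # simulated with state vectors, decides whether a one-bit function is
        # constant or balanced using a single oracle query.
        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
        I = np.eye(2)

        def oracle(f):
            """U_f |x>|y> = |x>|y XOR f(x)>, built as a 4x4 permutation matrix."""
            U = np.zeros((4, 4))
            for x in (0, 1):
                for y in (0, 1):
                    U[2 * x + (y ^ f(x)), 2 * x + y] = 1
            return U

        def deutsch(f):
            state = np.kron([1.0, 0.0], [0.0, 1.0])    # prepare |0>|1>
            state = np.kron(H, H) @ state              # put both qubits in superposition
            state = oracle(f) @ state                  # one oracle call
            state = np.kron(H, I) @ state              # interfere the first qubit
            prob_one = state[2] ** 2 + state[3] ** 2   # probability first qubit reads 1
            return "balanced" if prob_one > 0.5 else "constant"

        print(deutsch(lambda x: 0))   # constant function -> "constant"
        print(deutsch(lambda x: x))   # balanced function -> "balanced"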

  15. Computer Security Handbook

    CERN Document Server

    Bosworth, Seymour; Whyne, Eric

    2012-01-01

    The classic and authoritative reference in the field of computer security, now completely updated and revised. With the continued presence of large-scale computers; the proliferation of desktop, laptop, and handheld computers; and the vast international networks that interconnect them, the nature and extent of threats to computer security have grown enormously. Now in its fifth edition, Computer Security Handbook continues to provide authoritative guidance to identify and to eliminate these threats where possible, as well as to lessen any losses attributable to them. With seventy-seven chapters…

  16. Secure cloud computing

    CERN Document Server

    Jajodia, Sushil; Samarati, Pierangela; Singhal, Anoop; Swarup, Vipin; Wang, Cliff

    2014-01-01

    This book presents a range of cloud computing security challenges and promising solution paths. The first two chapters focus on practical considerations of cloud computing. In Chapter 1, Chandramouli, Iorga, and Chokani describe the evolution of cloud computing and the current state of practice, followed by the challenges of cryptographic key management in the cloud. In Chapter 2, Chen and Sion present a dollar cost model of cloud computing and explore the economic viability of cloud computing with and without security mechanisms involving cryptographic mechanisms. The next two chapters address…

  17. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2005-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

  18. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computing. Its potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing. Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularity…

  19. Computability theory

    CERN Document Server

    Weber, Rebecca

    2012-01-01

    What can we compute--even with unlimited resources? Is everything within reach? Or are computations necessarily drastically limited, not just in practice, but theoretically? These questions are at the heart of computability theory. The goal of this book is to give the reader a firm grounding in the fundamentals of computability theory and an overview of currently active areas of research, such as reverse mathematics and algorithmic randomness. Turing machines and partial recursive functions are explored in detail, and vital tools and concepts including coding, uniformity, and diagonalization are described explicitly. From there the material continues with universal machines, the halting problem, parametrization and the recursion theorem, and thence to computability for sets, enumerability, and Turing reduction and degrees. A few more advanced topics round out the book before the chapter on areas of research. The text is designed to be self-contained, with an entire chapter of preliminary material including re...

  20. Cartoon computation: quantum-like computing without quantum mechanics

    International Nuclear Information System (INIS)

    Aerts, Diederik; Czachor, Marek

    2007-01-01

    We present a computational framework based on geometric structures. No quantum mechanics is involved, and yet the algorithms perform tasks analogous to quantum computation. Tensor products and entangled states are not needed; they are replaced by sets of basic shapes. To test the formalism we solve in geometric terms the Deutsch-Jozsa problem, historically the first example that demonstrated the potential power of quantum computation. Each step of the algorithm has a clear geometric interpretation and allows for a cartoon representation. (fast track communication)

  1. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  2. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining the computers needed for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with more than 640 Kb RAM and a hard disk. 5 refs.; 9 figs
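
    To make the blackboard idea concrete, here is a minimal sketch of the pattern only, not the COMEX code: independent knowledge sources read a shared blackboard, post partial conclusions, and a recommendation emerges. All rule names and data fields are invented for illustration.

        # Minimal blackboard-pattern sketch (hypothetical, not the COMEX rules):
        # knowledge sources inspect a shared blackboard and post partial
        # conclusions until a recommendation appears.
        blackboard = {"workload": "event reconstruction", "data_volume_gb": 500}

        def needs_batch_farm(bb):
            if bb.get("data_volume_gb", 0) > 100:
                bb["batch_farm"] = True

        def pick_storage(bb):
            if bb.get("batch_farm"):
                bb["storage"] = "tape library with staging disks"

        def recommend(bb):
            if "storage" in bb:
                bb["recommendation"] = "batch farm plus " + bb["storage"]

        # A real controller would keep cycling until no source changes the board.
        for knowledge_source in (needs_batch_farm, pick_storage, recommend):
            knowledge_source(blackboard)

        print(blackboard["recommendation"])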

  3. Bioinspired computation in combinatorial optimization: algorithms and their computational complexity

    DEFF Research Database (Denmark)

    Neumann, Frank; Witt, Carsten

    2012-01-01

    Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems, and it is very important that we understand the computational complexity of these algorithms. This tutorial … problems. Classical single objective optimization is examined first. They then investigate the computational complexity of bioinspired computation applied to multiobjective variants of the considered combinatorial optimization problems, and in particular they show how multiobjective optimization can help to speed up bioinspired computation for single-objective optimization problems. The tutorial is based on a book written by the authors with the same title. Further information about the book can be found at www.bioinspiredcomputation.com

  4. Computation as Medium

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Putnam, Lance

    2017-01-01

    Artists increasingly utilize computational tools to generate art works. Computational approaches to art making open up new ways of thinking about agency in interactive art because they invite participation and allow for unpredictable outcomes. Computational art is closely linked to the participatory turn in visual art, wherein spectators physically participate in visual art works. Unlike purely physical methods of interaction, computer assisted interactivity affords artists and spectators more nuanced control of artistic outcomes. Interactive art brings together human bodies, computer code, and nonliving objects to create emergent art works. Computation is more than just a tool for artists; it is a medium for investigating new aesthetic possibilities for choreography and composition. We illustrate this potential through two artistic projects: an improvisational dance performance between a human...

  5. Community Cloud Computing

    Science.gov (United States)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  6. Calgary's new CIS provides flexibility to offer outstanding service in a competitive electricity market

    International Nuclear Information System (INIS)

    Cole, M.

    1999-01-01

    The City of Calgary, driven by the specter of Year 2000 issues as well as deregulation, decided to replace its 18-year-old mainframe with a state-of-the-art customer information system for all its utility customers. The system selected is the SCT Banner Customer Information System from Enlogix, headquartered in Toronto. This was the only software considered that was already in use by other utilities, including Westcoast Energy, Toronto Hydro, Edison Source and PG and E in California. The conversion from an S-390 mainframe system relies on Banner's Oracle database technology. The main server, an IBM RS/6000, will be stationed at an Enlogix data center located in Calgary. Desktop workstations using 350-MHz Pentium II processors will be used. The major factors in favor of the Banner system were the proven implementation methodologies of SCT and Enlogix. The implementation, expected to take about a year, will change about 100 major policies, but the utility is going forward with only nine modifications, choosing to match its processes to industry standards. The system has been 'Canadianized' to meet unique measurement, regulatory and tax requirements

  7. Computational chemistry

    Science.gov (United States)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  8. Nurses' computer literacy and attitudes towards the use of computers in health care.

    Science.gov (United States)

    Gürdaş Topkaya, Sati; Kaya, Nurten

    2015-05-01

    This descriptive and cross-sectional study was designed to address nurses' computer literacy and attitudes towards the use of computers in health care and to determine the correlation between these two variables. This study was conducted with the participation of 688 nurses who worked at two university-affiliated hospitals. These nurses were chosen using a stratified random sampling method. The data were collected using the Multicomponent Assessment of Computer Literacy and the Pretest for Attitudes Towards Computers in Healthcare Assessment Scale v. 2. The nurses, in general, had positive attitudes towards computers, and their computer literacy was good. Computer literacy in general had significant positive correlations with individual elements of computer competency and with attitudes towards computers. If the computer is to be an effective and beneficial part of the health-care system, it is necessary to help nurses improve their computer competency. © 2014 Wiley Publishing Asia Pty Ltd.

  9. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
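
    The load-prediction dynamic scheduling mentioned above can be pictured as follows: each device gets a chunk of model cells proportional to its predicted throughput, and the prediction is refreshed from the measured rate after every step. The sketch below is a hedged, simplified rendering of that idea with simulated worker speeds, not the authors' OpenMP/CUDA implementation.

        # Hedged sketch of load-prediction dynamic scheduling (not the authors'
        # OpenMP/CUDA code): chunks are sized in proportion to each worker's
        # predicted throughput, and the prediction is refreshed from the measured
        # rate after every simulated time step.
        import time

        def run_chunk(cells_per_second, n_cells):
            """Pretend to integrate n_cells heart-model cells; return wall time."""
            t0 = time.perf_counter()
            time.sleep(n_cells / cells_per_second)     # simulated work
            return time.perf_counter() - t0

        workers = {"cpu_cores": 2.0e6, "gpu": 8.0e6}   # assumed speeds, cells/second
        predicted = {name: 1.0 for name in workers}    # start with an equal split
        total_cells = 100_000

        for step in range(4):
            total_pred = sum(predicted.values())
            for name, speed in workers.items():
                chunk = int(total_cells * predicted[name] / total_pred)
                elapsed = run_chunk(speed, chunk)
                predicted[name] = chunk / max(elapsed, 1e-9)   # measured throughput
            split = {n: round(predicted[n] / sum(predicted.values()), 2) for n in workers}
            print("step", step, "next split:", split)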

  10. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.
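
    One way to picture the queue management described above is a relational table of work units that volunteer nodes lease, compute and report back. The sketch below uses SQLite purely for illustration; the table layout, column names and the stand-in "hydrologic kernel" are assumptions, and the actual platform runs in the browser with JavaScript.

        # Hedged sketch of the queue-management idea with SQLite: a table of work
        # units that volunteer nodes lease, compute and report back. Table layout,
        # column names and the stand-in kernel are assumptions; the platform in the
        # abstract runs client-side in JavaScript.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, cell TEXT, "
                   "status TEXT DEFAULT 'pending', result REAL)")
        db.executemany("INSERT INTO tasks (cell) VALUES (?)",
                       [("subbasin-%d" % i,) for i in range(6)])

        def lease_task():
            """Hand the next pending work unit to a volunteer node."""
            row = db.execute(
                "SELECT id, cell FROM tasks WHERE status = 'pending' LIMIT 1").fetchone()
            if row:
                db.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (row[0],))
            return row

        def report(task_id, value):
            db.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
                       (value, task_id))

        task = lease_task()
        while task is not None:                 # a volunteer session: lease, run, report
            report(task[0], 0.1 * task[0])      # stand-in for a hydrologic kernel result
            task = lease_task()

        print(db.execute("SELECT cell, result FROM tasks").fetchall())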

  11. Visual ergonomics and computer work--is it all about computer glasses?

    Science.gov (United States)

    Jonsson, Christina

    2012-01-01

    The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions submitted to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDAs, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply, but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question of whether the demand on the employer to provide employees with computer glasses is outdated.

  12. Computing with concepts, computing with numbers: Llull, Leibniz, and Boole

    NARCIS (Netherlands)

    Uckelman, S.L.

    2010-01-01

    We consider two ways to understand "reasoning as computation", one which focuses on the computation of concept symbols and the other on the computation of number symbols. We illustrate these two ways with Llull’s Ars Combinatoria and Leibniz’s attempts to arithmetize language, respectively. We then

  13. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted using a popular personal computer. The image-processing program was written with a C compiler. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on 8-inch flexible diskettes. Many fundamental image-processing operations were implemented, such as displaying images on the monitor, calculating CT values and drawing profile curves. The results showed that a popular personal computer has the ability to process CT images. It also appeared that the 8-inch flexible diskette was still a useful medium for transferring image data. (author)
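
    Two of the operations listed, reading a CT value at a point and drawing a profile curve along a line, reduce to simple array indexing. The sketch below shows them on a synthetic image; the array shape, Hounsfield values and function names are assumptions, and no attempt is made to reproduce the original C program or the diskette transfer.

        # Hedged sketch of two of the listed operations (CT value at a point and a
        # profile curve along one image row) on a synthetic image; array size,
        # Hounsfield values and function names are assumptions.
        import numpy as np

        image = np.full((256, 256), -1000, dtype=np.int16)   # air, Hounsfield units
        image[96:160, 96:160] = 40                           # a soft-tissue block

        def ct_value(img, row, col):
            """CT number of a single pixel."""
            return int(img[row, col])

        def profile_curve(img, row):
            """Pixel values along one horizontal line through the image."""
            return img[row, :]

        print("CT value at centre :", ct_value(image, 128, 128))
        profile = profile_curve(image, 128)
        print("profile min / max  :", int(profile.min()), int(profile.max()))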

  14. Computational biomechanics

    International Nuclear Information System (INIS)

    Ethier, C.R.

    2004-01-01

    Computational biomechanics is a fast-growing field that integrates modern biological techniques and computer modelling to solve problems of medical and biological interest. Modelling of blood flow in the large arteries is the best-known application of computational biomechanics, but there are many others. Described here is work being carried out in the laboratory on the modelling of blood flow in the coronary arteries and on the transport of viral particles in the eye. (author)

  15. Roadmap to greener computing

    CERN Document Server

    Nguemaleu, Raoul-Abelin Choumin

    2014-01-01

    A concise and accessible introduction to green computing and green IT, this book addresses how computer science and the computer infrastructure affect the environment and presents the main challenges in making computing more environmentally friendly. The authors review the methodologies, designs, frameworks, and software development tools that can be used in computer science to reduce energy consumption and still compute efficiently. They also focus on Computer Aided Design (CAD) and describe what design engineers and CAD software applications can do to support new streamlined business directions…

  16. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
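
    The reliability figures quoted above are Cronbach's alpha values. For readers who want the formula in executable form, the sketch below computes alpha = k/(k-1) * (1 - sum of item variances / variance of total scores) on a small made-up score matrix; it is not the CPQ data or scoring code.

        # Hedged sketch of the reliability statistic quoted above: Cronbach's alpha
        # computed on a small made-up respondents-by-items score matrix (not the
        # CPQ data or scoring code).
        import numpy as np

        def cronbach_alpha(scores):
            """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1)
            total_variance = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_variances.sum() / total_variance)

        toy_scores = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 3, 2], [1, 2, 1]]
        print(round(cronbach_alpha(toy_scores), 3))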

  17. Distributed multiscale computing

    NARCIS (Netherlands)

    Borgdorff, J.

    2014-01-01

    Multiscale models combine knowledge, data, and hypotheses from different scales. Simulating a multiscale model often requires extensive computation. This thesis evaluates distributing these computations, an approach termed distributed multiscale computing (DMC). First, the process of multiscale

  18. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT,J.

    2004-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security.

  19. Computer mathematics for programmers

    CERN Document Server

    Abney, Darrell H; Sibrel, Donald W

    1985-01-01

    Computer Mathematics for Programmers presents the mathematics that is essential to the computer programmer. The book comprises 10 chapters. The first chapter introduces several computer number systems. Chapter 2 shows how to perform arithmetic operations using the number systems introduced in Chapter 1. The third chapter covers the way numbers are stored in computers, how the computer performs arithmetic on real numbers and integers, and how round-off errors are generated in computer programs. Chapter 4 details the use of algorithms and flowcharting as problem-solving tools for computer programming…

  20. Ubiquitous Computing: The Universal Use of Computers on College Campuses.

    Science.gov (United States)

    Brown, David G., Ed.

    This book is a collection of vignettes from 13 universities where everyone on campus has his or her own computer. These 13 institutions have instituted "ubiquitous computing" in very different ways at very different costs. The chapters are: (1) "Introduction: The Ubiquitous Computing Movement" (David G. Brown); (2) "Dartmouth College" (Malcolm…

  1. Activity-Driven Computing Infrastructure - Pervasive Computing in Healthcare

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Olesen, Anders Konring

    In many work settings, and especially in healthcare, work is distributed among many cooperating actors, who are constantly moving around and are frequently interrupted. In line with other researchers, we use the term pervasive computing to describe a computing infrastructure that supports work...

  2. Computer Skills Training and Readiness to Work with Computers

    Directory of Open Access Journals (Sweden)

    Arnon Hershkovitz

    2016-05-01

    Full Text Available In today’s job market, computer skills are part of the prerequisites for many jobs. In this paper, we report on a study of readiness to work with computers (the dependent variable) among unemployed women (N=54) after participating in a unique, web-supported training focused on computer skills and empowerment. Overall, the level of participants’ readiness to work with computers was much higher at the end of the course than it was at its beginning. During the analysis, we explored associations between this variable and variables from four categories: log-based (describing the online activity); computer literacy and experience; job-seeking motivation and practice; and training satisfaction. Only two variables were associated with the dependent variable: knowledge post-test duration and satisfaction with content. After building a prediction model for the dependent variable, another log-based variable was highlighted: the total number of actions in the course website over the duration of the course. Overall, our analyses shed light on the predominance of log-based variables over variables from other categories. These findings might hint at the need to develop new assessment tools for learners and trainees that take human-computer interaction into consideration when measuring self-efficacy variables.

  3. Computational Science at the Argonne Leadership Computing Facility

    Science.gov (United States)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  4. Reconfigurable computing the theory and practice of FPGA-based computation

    CERN Document Server

    Hauck, Scott

    2010-01-01

    Reconfigurable Computing marks a revolutionary and hot topic that bridges the gap between the separate worlds of hardware and software design: the key feature of reconfigurable computing is its groundbreaking ability to perform computations in hardware to increase performance while retaining the flexibility of a software solution. Reconfigurable computers serve as affordable, fast, and accurate tools for developing designs ranging from single chip architectures to multi-chip and embedded systems. Scott Hauck and Andre DeHon have assembled a group of the key experts in the fields of both hardware…

  5. Computational Medicine

    DEFF Research Database (Denmark)

    Nygaard, Jens Vinge

    2017-01-01

    The Health Technology Program at Aarhus University applies computational biology to investigate the heterogeneity of tumours.

  6. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  7. Cloud Computing (1/2)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cloud computing, the recent years' buzzword for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?", identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization, and Utility Computing will be discussed and analyzed.

  8. Cloud Computing (2/2)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cloud computing, the recent years' buzzword for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?", identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization, and Utility Computing will be discussed and analyzed.

  9. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot encompass human concepts of subjective experience and intersubjective meaningful communication, which prevents it from being genuinely transdisciplinary. (3) Philosophically, it does not sufficiently accept the deep ontological differences between various paradigms such as von Foerster’s second-order…

  10. A study on the computerization of secondary side on-line chemistry monitoring system of PWR

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Kyung Lin; Lee, Eun Heui [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-12-01

    A computer system for the on-line chemistry monitoring system located on the secondary side of a PWR plant is under development. A Keithley 500A mainframe with AMM1A and AIM3A modules is used for data acquisition, and the ASYST scientific and engineering software package is used for developing the software. The contents are as follows: (1) Data acquisition and real-time display. The output signals of the chemistry monitoring sensors are stored on the PC, with real-time display of the readings as values and graphics. (2) Data management and trending graphs. The data stored on the PC can be output in various graphic modes for data management, such as simple on-screen trending graphs, time-duration plots and histograms. (3) Daily manual data input. The chemical analysis data of grab samples are entered into the PC manually as supplementary data. (4) Tabular data report preparation. Summarized daily, weekly, monthly, quarterly and yearly reports are prepared with various modes of graphic display. 6 figs, 9 tabs, 8 refs. (Author).
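
    The logging and trending functions described in items (1) and (2) amount to time-stamped sampling plus simple summary statistics. The sketch below illustrates only that bookkeeping with simulated readings; the Keithley 500A/ASYST acquisition chain is not reproduced, and the channel name and alarm limit are assumptions.

        # Hedged sketch of the logging and trending bookkeeping only; readings are
        # simulated and the Keithley 500A / ASYST acquisition chain, channel name
        # and alarm limit are assumptions.
        import datetime
        import random
        import statistics

        archive = []                                     # stand-in for the PC archive

        def acquire(channel):
            """Pretend to read one chemistry channel, e.g. cation conductivity."""
            return round(random.gauss(0.20, 0.02), 3)

        for minute in range(60):                         # one hour of 1-minute samples
            archive.append({"time": datetime.datetime(2024, 1, 1, 8, minute),
                            "conductivity": acquire("CH1")})

        values = [record["conductivity"] for record in archive]
        print("hourly mean  :", round(statistics.mean(values), 3))
        print("hourly stdev :", round(statistics.stdev(values), 3))
        print("over 0.30    :", sum(v > 0.30 for v in values))   # simple trend check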

  11. A study on the computerization of secondary side on-line chemistry monitoring system of PWR

    International Nuclear Information System (INIS)

    Yang, Kyung Lin; Lee, Eun Heui

    1994-12-01

    A computer system for the on-line chemistry monitoring system located on the secondary side of a PWR plant is under development. A Keithley 500A mainframe with AMM1A and AIM3A modules is used for data acquisition, and the ASYST scientific and engineering software package is used for developing the software. The contents are as follows: 1) Data acquisition and real-time display. The output signals of the chemistry monitoring sensors are stored on the PC, with real-time display of the readings as values and graphics. 2) Data management and trending graphs. The data stored on the PC can be output in various graphic modes for data management, such as simple on-screen trending graphs, time-duration plots and histograms. 3) Daily manual data input. The chemical analysis data of grab samples are entered into the PC manually as supplementary data. 4) Tabular data report preparation. Summarized daily, weekly, monthly, quarterly and yearly reports are prepared with various modes of graphic display. 6 figs, 9 tabs, 8 refs. (Author)

  12. CernDOC, a fertile ground for the web’s inception.

    CERN Multimedia

    Jordan Juras

    2010-01-01

    It is widely acknowledged that the World Wide Web took its first steps towards success at CERN. However, it is less well known that, earlier in the 80s, CERN teams had already developed CERNDOC, a very advanced documentation system, and one of the first to implement the client-server model. This same idea was later used in the development of the Web. [Figure: The scheme used by Tim Berners-Lee to present the web. In yellow, the CERNDOC box.] The CERNDOC initiative was pitched in the early 1980s by the DD division (now the IT division) as a solution for sharing and storing documentation produced by the physicists and engineers at CERN. At this point in time, the central computing facilities were based on IBM mainframes. Although PCs, Macintoshes and laser printers did not yet exist, network development was already active. “BITNET provided the networking capacity necessary for the IBM VM/CMS platform and it was employed by the CERNDOC project. The VM/CMS operating system provided users with private...

  13. A data management program for the Electra 800 automatic analyser.

    Science.gov (United States)

    Cambus, J P; Nguyen, F; de Graeve, J; Aragon, B; Valdiguie, P

    1994-10-01

    The Electra 800 automatic coagulation analyser rapidly performs most chronometric coagulation tests with high precision. To facilitate data handling, software, adaptable to any PC running under MS-DOS, was written to manage the analyser. Data are automatically collected via the RS232 interface or can be manually input. The software can handle 64 different analyses, all entirely 'user defined'. An 'electronic worksheet' presents the results in pages of ten patients. This enables the operator to assess the data and to perform verifications or complementary tests if necessary. All results outside a predetermined range can be flagged and results can be deleted, modified or added. A patient's previous files can be recalled as the data are archived at the end of the day. A 120 Mb disk can store approximately 130,000 patient files. A daily archive function can print the day's work in alphabetical order. A communication protocol allows connection to a mainframe computer. This program and the user's manual are available on request, free of charge, from the authors.
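
    The data-handling side described above (collecting result lines, paging them into an electronic worksheet of ten patients, flagging out-of-range values) can be sketched as below. The record format, test name and flag threshold are invented for illustration, and the serial capture itself is replaced by a canned string; this is not the published program.

        # Hedged sketch of the bookkeeping only: parse analyser result lines (the
        # line format, test name and flag threshold are invented) and group them
        # into worksheet pages of ten patients. Real lines would arrive over the
        # RS232 interface; here a canned string stands in for the captured stream.
        captured = "\n".join("PAT%03d;PT;%.1f" % (i, 11.5 + 0.1 * i) for i in range(23))

        records = []
        for line in captured.splitlines():
            patient_id, test, seconds = line.split(";")
            records.append({"patient": patient_id, "test": test, "sec": float(seconds)})

        pages = [records[i:i + 10] for i in range(0, len(records), 10)]   # worksheet pages
        flagged = [r for r in records if r["sec"] > 13.0]                 # out-of-range results

        print(len(pages), "worksheet pages,", len(flagged), "flagged results")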

  14. Progress of data processing system in JT-60 utilizing the UNIX-based workstations

    International Nuclear Information System (INIS)

    Sakata, Shinya; Kiyono, Kimihiro; Oshima, Takayuki; Sato, Minoru; Ozeki, Takahisa

    2007-07-01

    The JT-60 data processing system (DPS) has a three-level hierarchy. At the top level of the hierarchy is the JT-60 inter-shot processor (MSP-ISP), a mainframe computer, which provides communication with the JT-60 supervisory control system and supervises the internal communication inside the DPS. The middle level of the hierarchy has minicomputers and the bottom level has the individual diagnostic subsystems, which consist of CAMAC and VME modules. To meet the demand for advanced diagnostics, the DPS has progressed in stages from a three-level hierarchy, which was dependent on the processing power of the MSP-ISP, to a two-level hierarchy, which is a decentralized data processing system (New-DPS) utilizing UNIX-based workstations and network technology. This replacement has been accomplished, and the New-DPS started operation in October 2005. In this report, we describe the development and improvement of the New-DPS, whose functions were decentralized from the MSP-ISP to the UNIX-based workstations. (author)

  15. AUS98 - The 1998 version of the AUS modular neutronic code system

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, G.S.; Harrington, B.V.

    1998-07-01

    AUS is a neutronics code system which may be used for calculations of a wide range of fission reactors, fusion blankets and other neutron applications. The present version, AUS98, has a nuclear cross section library based on ENDF/B-VI and includes modules which provide for reactor lattice calculations, one-dimensional transport calculations, multi-dimensional diffusion calculations, cell and whole reactor burnup calculations, and flexible editing of results. Calculations of multi-region resonance shielding, coupled neutron and photon transport, energy deposition, fission product inventory and neutron diffusion are combined within the one code system. The major changes from the previous AUS publications are the inclusion of a cross-section library based on ENDF/B-VI, the addition of the MICBURN module for controlling whole reactor burnup calculations, and changes to the system as a consequence of moving from IBM main-frame computers to UNIX workstations. This report gives details of all system aspects of AUS and all modules except the POW3D multi-dimensional diffusion module. refs., tabs.

  16. AUS98 - The 1998 version of the AUS modular neutronic code system

    International Nuclear Information System (INIS)

    Robinson, G.S.; Harrington, B.V.

    1998-07-01

    AUS is a neutronics code system which may be used for calculations of a wide range of fission reactors, fusion blankets and other neutron applications. The present version, AUS98, has a nuclear cross section library based on ENDF/B-VI and includes modules which provide for reactor lattice calculations, one-dimensional transport calculations, multi-dimensional diffusion calculations, cell and whole reactor burnup calculations, and flexible editing of results. Calculations of multi-region resonance shielding, coupled neutron and photon transport, energy deposition, fission product inventory and neutron diffusion are combined within the one code system. The major changes from the previous AUS publications are the inclusion of a cross-section library based on ENDF/B-VI, the addition of the MICBURN module for controlling whole reactor burnup calculations, and changes to the system as a consequence of moving from IBM main-frame computers to UNIX workstations. This report gives details of all system aspects of AUS and all modules except the POW3D multi-dimensional diffusion module.

  17. Development and implementation of a new radiation exposure record system

    International Nuclear Information System (INIS)

    Lyon, M.; Berndt, V.L.; Trevino, G.W.; Oakley, B.M.

    1993-01-01

    The Hanford Radiological Records Program (HRRP) maintains all available radiation exposure records created since the 1940s for employees of and visitors to the Hanford Site. The program provides exposure status reports to the US Department of Energy Richland Operations Office (DOE-RL) and the Hanford contractors, annual exposure reports to individual employees and visitors, and exposure reports to terminated employees. Program staff respond to offsite requests for exposure data on former employees and supply data and reports for epidemiological and research projects as well as for annual reports required by DOE Orders. Historical files, documenting radiation protection and dosimetry policies, procedures, and practices, and radiological incidents, are also maintained under this program. The program is operated by Pacific Northwest Laboratory for DOE-RL. This paper describes how the record-keeping requirements supported by the HRRP's computerized radiation exposure database were analyzed so that the system could be redeveloped and implemented to (1) accommodate a change in mainframe computer units, and (2) to enhance its automated record-keeping, retrieval, and reporting capabilities in support of the HRRP

  18. Integrated Reliability and Risk Analysis System (IRRAS) Version 2.0 user's guide

    International Nuclear Information System (INIS)

    Russell, K.D.; Sattison, M.B.; Rasmuson, D.M.

    1990-06-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Also provided in the system is an integrated full-screen editor for use when interfacing with remote mainframe computer systems. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 2.0 and is the subject of this user's guide. Version 2.0 of IRRAS provides all of the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance. 9 refs., 292 figs., 4 tabs
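
    Cut set generation, one of the core steps named above, can be illustrated on a toy fault tree: OR gates union the cut sets of their children, AND gates combine one cut set from each child, and non-minimal sets are pruned afterwards. The sketch below is a hedged classroom-style rendering of that procedure, not the IRRAS algorithms; the gate and basic-event names are invented.

        # Hedged classroom-style sketch of cut set generation on a toy fault tree
        # (not the IRRAS algorithms): OR gates union their children's cut sets,
        # AND gates combine one cut set from each child, and non-minimal sets are
        # pruned at the end. Gate and basic-event names are invented.
        from itertools import product

        tree = {
            "TOP": ("AND", ["G1", "G2"]),
            "G1":  ("OR",  ["pump_A_fails", "valve_stuck"]),
            "G2":  ("OR",  ["pump_B_fails", "valve_stuck"]),
        }

        def cut_sets(node):
            if node not in tree:                        # a basic event is its own cut set
                return [frozenset([node])]
            gate, children = tree[node]
            child_sets = [cut_sets(child) for child in children]
            if gate == "OR":
                return [cs for sets in child_sets for cs in sets]
            return [frozenset().union(*combo) for combo in product(*child_sets)]

        def minimal(sets):
            return [s for s in sets if not any(other < s for other in sets)]

        for mcs in minimal(cut_sets("TOP")):
            print(sorted(mcs))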

  19. Computer Virus and Trends

    OpenAIRE

    Tutut Handayani; Soenarto Usna, Drs. MMSI

    2004-01-01

    Since its first appearance in the mid-1980s, the computer virus has invited various controversies that continue to this day. Along with the development of computer systems technology, computer viruses keep finding new ways to spread themselves through a variety of existing communications media. This paper discusses several topics related to computer viruses, namely: the definition and history of computer viruses; the basics of computer viruses; the state of computer viruses at this time; and ...

  20. Computational error and complexity in science and engineering computational error and complexity

    CERN Document Server

    Lakshmikantham, Vangipuram; Chui, Charles K

    2005-01-01

    The book "Computational Error and Complexity in Science and Engineering” pervades all the science and engineering disciplines where computation occurs. Scientific and engineering computation happens to be the interface between the mathematical model/problem and the real world application. One needs to obtain good quality numerical values for any real-world implementation. Just mathematical quantities symbols are of no use to engineers/technologists. Computational complexity of the numerical method to solve the mathematical model, also computed along with the solution, on the other hand, will tell us how much computation/computational effort has been spent to achieve that quality of result. Anyone who wants the specified physical problem to be solved has every right to know the quality of the solution as well as the resources spent for the solution. The computed error as well as the complexity provide the scientific convincing answer to these questions. Specifically some of the disciplines in which the book w...