WorldWideScience

Sample records for computing fy10-11 implementation

  1. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    Carnes, B

    2009-06-08

    was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1 Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2 Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3 Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  2. Computer Implementation Strategies and Processes.

    Science.gov (United States)

    Cyros, Kreon L.

    1984-01-01

    Implementing a computer-aided facilities management program begins with establishing priorities, determining the computer capability available, and determining the necessary budget. A space planning and management model is presented with techniques for the collection and storage of the required data. (MLF)

  3. Quantum computers: Definition and implementations

    International Nuclear Information System (INIS)

    Perez-Delgado, Carlos A.; Kok, Pieter

    2011-01-01

    The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. Therefore, the question is what are the general criteria for implementing quantum computers. To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: Any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.

  4. Implementation of an embedded computer

    OpenAIRE

    Pikl, Bojan

    2011-01-01

    The goal of this thesis is to describe the production of an embedded computer. The thesis describes the development and production of an embedded computer for the medical diode laser DL30 that is being developed at Robomed d.o.o. The first part of the thesis describes the choice of hardware devices. I mostly describe the technologies that one can buy on the market. Moreover, for every part of the computer installed and developed, there is an argument for why we selected that exact part. The second part ...

  5. Implementation of cloud computing in higher education

    Science.gov (United States)

    Asniar; Budiawan, R.

    2016-04-01

    Cloud computing research is a new trend in distributed computing, where people have developed service and SOA (Service Oriented Architecture) based applications. This technology is very useful to implement, especially in higher education. This research studies the need and feasibility of cloud computing in higher education and then proposes a model of cloud computing services for higher education in Indonesia that can be implemented to support academic activities. A literature study is used as the research methodology to arrive at a proposed model of cloud computing in higher education. Finally, SaaS and IaaS are the cloud computing services proposed for implementation in higher education in Indonesia, and a hybrid cloud is the recommended service model.

  6. Implementing and developing cloud computing applications

    CERN Document Server

    Sarna, David E Y

    2010-01-01

    From small start-ups to major corporations, companies of all sizes have embraced cloud computing for the scalability, reliability, and cost benefits it can provide. It has even been said that cloud computing may have a greater effect on our lives than the PC and dot-com revolutions combined. Filled with comparative charts and decision trees, Implementing and Developing Cloud Computing Applications explains exactly what it takes to build robust and highly scalable cloud computing applications in any organization. Covering the major commercial offerings available, it provides authoritative guidance...

  7. Computer Aided Implementation using Xilinx System Generator

    OpenAIRE

    Eriksson, Henrik

    2004-01-01

    The development in electronics increases the demand for good design methods and design tools in the field of electrical engineering. To improve their design methods, Ericsson Microwave Systems AB is interested in using computer tools to create a link between the specification and the implementation of a digital system in an FPGA. Xilinx System Generator for DSP is a tool for implementing a model of a digital signal processing algorithm in a Xilinx FPGA. To evaluate Xilinx System Generator two t...

  8. Numerical Implementation and Computer Simulation of Tracer Experiments in a Physical Aquifer Model

    African Journals Online (AJOL)

    Numerical Implementation and Computer Simulation of Tracer Experiments in a Physical Aquifer Model (African Research Review). A sensitivity analysis showed that the time required for complete source depletion was most dependent on the source definition and the hydraulic conductivity K of the porous medium.

  9. Computation and parallel implementation for early vision

    Science.gov (United States)

    Gualtieri, J. Anthony

    1990-01-01

    The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) into image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher level vision tasks including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.
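
    As a rough illustration of the correlation-matching idea mentioned in this record (not the authors' MPP code), the serial NumPy sketch below scores candidate disparities along a scanline with zero-mean normalized cross-correlation; the function names, window size, and synthetic test pair are illustrative assumptions.

```python
import numpy as np

def normalized_cross_correlation(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_disparity(left, right, row, col, window=5, max_disp=16):
    """Search along the scanline for the shift that best matches the left patch."""
    half = window // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    best = (0.0, 0)
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        best = max(best, (normalized_cross_correlation(ref, cand), d))
    return best[1]

# Synthetic pair: the right image is the left image shifted left by 3 pixels.
rng = np.random.default_rng(0)
left = rng.standard_normal((64, 64))
right = np.roll(left, -3, axis=1)
print(best_disparity(left, right, row=32, col=40))   # expected: 3
```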

  10. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    Full Text Available In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution’s physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received in a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model uses what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning

  11. Quantum computing implementations with neutral particles

    DEFF Research Database (Denmark)

    Negretti, Antonio; Treutlein, Philipp; Calarco, Tommaso

    2011-01-01

    We review quantum information processing with cold neutral particles, that is, atoms or polar molecules. First, we analyze the best suited degrees of freedom of these particles for storing quantum information, and then we discuss both single- and two-qubit gate implementations. We focus our discussion mainly on collisional quantum gates, which are best suited for atom-chip-like devices, as well as on gate proposals conceived for optical lattices. Additionally, we analyze schemes both for cold atoms confined in optical cavities and hybrid approaches to entanglement generation, and we show how optimal control theory might be a powerful tool to enhance the speed up of the gate operations as well as to achieve high fidelities required for fault tolerant quantum computation.

  12. Implementing interactive computing in an object-oriented environment

    Directory of Open Access Journals (Sweden)

    Frederic Udina

    2000-04-01

    Full Text Available Statistical computing when input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
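
    A minimal sketch of the directed-graph idea described above, so that only strictly required computations are redone: the node class, dirty flags, and toy histogram pipeline below are assumptions for illustration, not the paper's implementation.

```python
class Node:
    """One computation in the flow graph; it is recomputed only when an input
    has changed (the node is "dirty"), so only strictly required work is done."""

    def __init__(self, compute, inputs=()):
        self.compute = compute          # callable taking the input values
        self.inputs = list(inputs)      # upstream Node objects
        self.dependents = []            # downstream nodes to invalidate
        self.value = None
        self.dirty = True
        for parent in self.inputs:
            parent.dependents.append(self)

    def invalidate(self):
        if not self.dirty:
            self.dirty = True
            for child in self.dependents:
                child.invalidate()

    def get(self):
        if self.dirty:
            self.value = self.compute(*(p.get() for p in self.inputs))
            self.dirty = False
        return self.value


# Toy flow: raw data -> bin counts -> bar heights, as a GUI-driven histogram might use.
data = Node(lambda: [1.2, 1.9, 2.4, 2.6, 3.1])
counts = Node(lambda xs: {int(v): sum(1 for w in xs if int(w) == int(v)) for v in xs}, [data])
heights = Node(lambda c: {k: n / sum(c.values()) for k, n in c.items()}, [counts])
print(heights.get())   # computes all three nodes
counts.invalidate()    # e.g. the user changed the binning in the GUI
print(heights.get())   # recomputes only counts and heights, not data
```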

  13. Methodology of Implementation of Computer Forensics

    OpenAIRE

    Gelev, Saso; Golubovski, Roman; Hristov, Risto; Nikolov, Elenior

    2013-01-01

    Compared to other sciences, computer forensics (digital forensics) is a relatively young discipline. It was established in 1999 and it has been an irreplaceable tool in sanctioning cybercrime ever since. Good knowledge of computer forensics can be really helpful in uncovering a committed crime. Not adhering to the methodology of computer forensics, however, makes the obtained evidence invalid/irrelevant and as such it cannot be used in legal proceedings. This paper is to explain the methodolo...

  14. Implementation of Ontology Mapping for Computational Agents

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman

    2006-01-01

    Vol. 1, No. 1 (2006), pp. 58-63. ISSN 1991-8755. R&D Projects: GA AV ČR 1ET100300419. Institutional research plan: CEZ:AV0Z10300504. Keywords: multi-agent systems * ontology * computational intelligence. Subject RIV: IN - Informatics, Computer Science

  15. Computational procedures for implementing the optimal control ...

    African Journals Online (AJOL)

    The Extended Conjugate Gradient Method, ECGM, [1] was used to compute the control and state gradients of the unconstrained optimal control problem for higher-order nondispersive wave. Also computed are the descent directions for both the control and the state variables. These functions are the most important ...

  16. Implementing ASPEN on the CRAY computer

    International Nuclear Information System (INIS)

    Duerre, K.H.; Bumb, A.C.

    1981-01-01

    This paper describes our experience in converting the ASPEN program for use on our CRAY computers at the Los Alamos National Laboratory. The CRAY computer is two-to-five times faster than a CDC-7600 for scalar operations, is equipped with up to two million words of high-speed storage, and has vector processing capability. Thus, the CRAY is a natural candidate for programs that are the size and complexity of ASPEN. Our approach to converting ASPEN and the conversion problems are discussed, including our plans for optimizing the program. Comparisons of run times for test problems between the CRAY and IBM 370 computer versions are presented

  17. Software Defined Radio Datalink Implementation Using PC-Type Computers

    National Research Council Canada - National Science Library

    Zafeiropoulos, Georgios

    2003-01-01

    The objective of this thesis was to examine the feasibility of implementation and the performance of a Software Defined Radio datalink, using a common PC type host computer and a high level programming language...

  18. Model to Implement Virtual Computing Labs via Cloud Computing Services

    OpenAIRE

    Washington Luna Encalada; José Luis Castillo Sequera

    2017-01-01

    In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the...

  19. Implementing a modular system of computer codes

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access through user input instructions or directions from automated procedures and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications, with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability, hopefully informative and useful to anyone developing a modular code system of much sophistication. Overall, this report summarizes, in a general way, the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background against which work on HTGR reactor physics is being carried out

  20. Computers in schools: implementing for sustainability. Why the truth ...

    African Journals Online (AJOL)

    This study investigates influences on the sustainability of a computers-in-schools project during the implementation phase thereof. The Computer Assisted Learning in Schools (CALIS) Project (1992–1996) is the unit of analysis. A qualitative case study research design is used to elicit data, in the form of participant ...

  1. Implementation of DFT application on ternary optical computer

    Science.gov (United States)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Owing to its characteristics of a huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which require a lot of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of a ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
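
    For orientation only (not the TOC implementation described in this record), the sketch below exploits the property that makes the DFT easy to parallelize: every output bin is an independent sum, so bins can be computed fully in parallel (one per worker) or partially in parallel (in chunks). The worker pool, function names, and four-sample signal are illustrative assumptions.

```python
import cmath
from concurrent.futures import ProcessPoolExecutor

def dft_bin(args):
    """Compute a single DFT output X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    k, x = args
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))

def parallel_dft(x, workers=4):
    """Each bin is independent, so the map over k can be spread across workers."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(dft_bin, [(k, x) for k in range(len(x))]))

if __name__ == "__main__":
    signal = [1.0, 0.0, -1.0, 0.0]
    print(parallel_dft(signal))
```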

  2. Implementation of computer security at nuclear facilities in Germany

    International Nuclear Information System (INIS)

    Lochthofen, Andre; Sommer, Dagmar

    2013-01-01

    In recent years, electrical and I&C components in nuclear power plants (NPPs) have been replaced by software-based components. Due to the increased number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples of the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  3. Design and implementation of a local computer network

    Energy Technology Data Exchange (ETDEWEB)

    Fortune, P. J.; Lidinsky, W. P.; Zelle, B. R.

    1977-01-01

    An intralaboratory computer communications network was designed and is being implemented at Argonne National Laboratory. Parameters which were considered to be important in the network design are discussed; and the network, including its hardware and software components, is described. A discussion of the relationship between computer networks and distributed processing systems is also presented. The problems which the network is designed to solve and the consequent network structure represent considerations which are of general interest. 5 figures.

  4. Implementation of Computer Assisted Audit Techniques in Application Controls Testing

    OpenAIRE

    Dejan Jakšić

    2009-01-01

    This paper examines possibilities for implementing advanced computer-assisted audit techniques in the verification of the efficiency and effectiveness of application controls. Application controls, i.e., input, processing and output controls, should ensure the completeness and accuracy of records. The main computer-assisted audit techniques can be categorized as: test data, integrated test facility, parallel simulation and online audit monitor. There is a possibility of utilization of these tech...

  5. Faculty of Education Students' Computer Self-Efficacy Beliefs and Their Attitudes towards Computers and Implementing Computer Supported Education

    Science.gov (United States)

    Berkant, Hasan Güner

    2016-01-01

    This study investigates faculty of education students' computer self-efficacy beliefs and their attitudes towards computers and implementing computer supported education. This study is descriptive and based on a correlational survey model. The final sample consisted of 414 students studying in the faculty of education of a Turkish university. The…

  6. Learning Computer Programming: Implementing a Fractal in a Turing Machine

    Science.gov (United States)

    Pereira, Hernane B. de B.; Zebende, Gilney F.; Moret, Marcelo A.

    2010-01-01

    It is common to start a course on computer programming logic by teaching the algorithm concept from the point of view of natural languages, but in a schematic way. In this sense, we note that students have difficulties in understanding and implementing the problems proposed by the teacher. The main idea of this paper is to show that the…

  7. Abstraction to Implementation: A Two Stage Introduction to Computer Science.

    Science.gov (United States)

    Wolz, Ursula; Conjura, Edward

    A three-semester core curriculum for undergraduate computer science is proposed and described. Both functional and imperative programming styles are taught. The curriculum particularly addresses the problem of effectively presenting both abstraction and implementation. Two courses in the first semester emphasize abstraction. The next courses…

  8. VLSI circuits implementing computational models of neocortical circuits.

    Science.gov (United States)

    Wijekoon, Jayawan H B; Dudek, Piotr

    2012-09-15

    This paper overviews the design and implementation of three neuromorphic integrated circuits developed for the COLAMN ("Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex") project. The circuits are implemented in a standard 0.35 μm CMOS technology and include spiking and bursting neuron models, and synapses with short-term (facilitating/depressing) and long-term (STDP and dopamine-modulated STDP) dynamics. They enable execution of complex nonlinear models in accelerated-time, as compared with biology, and with low power consumption. The neural dynamics are implemented using analogue circuit techniques, with digital asynchronous event-based input and output. The circuits provide configurable hardware blocks that can be used to simulate a variety of neural networks. The paper presents experimental results obtained from the fabricated devices, and discusses the advantages and disadvantages of the analogue circuit approach to computational neural modelling. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. The Implementation of Computer Data Processing Software for EAST NBI

    Science.gov (United States)

    Zhang, Xiaodan; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Wu, Deyun; Cui, Qinglong

    2014-10-01

    One of the most important project missions of neutral beam injectors is the implementation of 100 s neutral beam injection (NBI) with high power energy to the plasma of the EAST superconducting tokamak. Correspondingly, it's necessary to construct a high-speed and reliable computer data processing system for processing experimental data, such as data acquisition, data compression and storage, data decompression and query, as well as data analysis. The implementation of computer data processing application software (CDPS) for EAST NBI is presented in this paper in terms of its functional structure and system realization. The set of software is programmed in C language and runs on Linux operating system based on TCP network protocol and multi-threading technology. The hardware mainly includes industrial control computer (IPC), data server, PXI DAQ cards and so on. Now this software has been applied to EAST NBI system, and experimental results show that the CDPS can serve EAST NBI very well.

  10. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  11. Computer-implemented gaze interaction method and apparatus

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of communicating via interaction with a user-interface based on a person's gaze and gestures, comprising: computing an estimate of the person's gaze comprising computing a point-of-regard on a display through which the person observes a scene in front of him; by means of a scene camera, capturing a first image of a scene in front of the person's head (and at least partially visible on the display) and computing the location of an object coinciding with the person's gaze; by means of the scene camera, capturing at least one further image of the scene in front of the person's head, and monitoring whether the gaze dwells on the recognised object; and while gaze dwells on the recognised object: firstly, displaying a user interface element, with a spatial expanse, on the display face in a region adjacent to the point-of-regard; and secondly, during movement of the display...

  12. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  13. Cluster implementation for parallel computation within MATLAB software environment

    International Nuclear Information System (INIS)

    Santana, Antonio O. de; Dantas, Carlos C.; Charamba, Luiz G. da R.; Souza Neto, Wilson F. de; Melo, Silvio B. Melo; Lima, Emerson A. de O.

    2013-01-01

    A cluster for parallel computation with MATLAB software, the COCGT (Cluster for Optimizing Computing in Gamma ray Transmission methods), is implemented. The implementation corresponds to the creation of a local network of computers, facilities and software configurations, as well as the execution of cluster tests to determine and optimize performance in data processing. The COCGT implementation was required for the computation of data from gamma transmission measurements applied to fluid dynamics and tomography reconstruction in an FCC (Fluid Catalytic Cracking) cold pilot unit, as well as of simulation data. As an initial test, the determination of the SVD (Singular Value Decomposition) of a random matrix with dimension (n, n), n=1000, using a modified Girco's law, revealed that the COCGT was faster in comparison to the cluster in the literature [1], which is similar and operates under the same conditions. Solution of a system of linear equations provided a new test of COCGT performance: processing a square matrix with n=10000 took 27 s of computing time, and a square matrix with n=12000 took 45 s. To determine the cluster behaviour with respect to 'parfor' (parallel for-loop) and 'spmd' (single program multiple data), two codes were used containing those two commands and the same problem: determination of the SVD of a square matrix with n=1000. The execution of the codes by means of the COCGT showed that: 1) for the code with 'parfor', the performance improved as the number of labs increased from 1 to 8; 2) for the code with 'spmd', just 1 lab (core) was enough to process and give results in less than 1 s. A similar situation holds, with the difference that now the SVD is determined from a square matrix with n=1500 for the code with 'parfor', and n=7000 for the code with 'spmd'. These results lead to the conclusions: 1) for the code with 'parfor', the behaviour was the same as already described above; 2) for the code with 'spmd', besides yielding a larger performance, it supports a
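
    The record describes MATLAB 'parfor'/'spmd' runs on the COCGT cluster; as a loose, generic analogue (not the COCGT code), the Python sketch below farms independent SVD jobs on random matrices out to a small process pool. The pool size, seeds, and matrix dimension are arbitrary assumptions.

```python
import numpy as np
from multiprocessing import Pool

def svd_of_random_matrix(args):
    """Worker job: build an (n x n) random matrix and return its singular values."""
    seed, n = args
    rng = np.random.default_rng(seed)
    return np.linalg.svd(rng.standard_normal((n, n)), compute_uv=False)

if __name__ == "__main__":
    n, jobs, workers = 1000, 8, 4           # sizes chosen only for illustration
    with Pool(processes=workers) as pool:   # the pool plays the role of the cluster's labs
        spectra = pool.map(svd_of_random_matrix, [(seed, n) for seed in range(jobs)])
    print([float(s[0]) for s in spectra])   # largest singular value of each matrix
```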

  14. Vertical Load Distribution for Cloud Computing via Multiple Implementation Options

    Science.gov (United States)

    Phan, Thomas; Li, Wen-Syan

    Cloud computing looks to deliver software as a provisioned service to end users, but the underlying infrastructure must be sufficiently scalable and robust. In our work, we focus on large-scale enterprise cloud systems and examine how enterprises may use a service-oriented architecture (SOA) to provide a streamlined interface to their business processes. To scale up the business processes, each SOA tier usually deploys multiple servers for load distribution and fault tolerance, a scenario which we term horizontal load distribution. One limitation of this approach is that load cannot be distributed further when all servers in the same tier are loaded. In complex multi-tiered SOA systems, a single business process may actually be implemented by multiple different computation pathways among the tiers, each with different components, in order to provide resilience and scalability. Such multiple implementation options give opportunities for vertical load distribution across tiers. In this chapter, we look at a novel request routing framework for SOA-based enterprise computing with multiple implementation options that takes into account the options of both horizontal and vertical load distribution.
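
    A toy sketch of the vertical-versus-horizontal idea described above, under invented pathway names and load numbers: each business process lists several implementation pathways through the tiers, and a request is routed to the pathway whose busiest pool is least loaded. This illustrates the concept only, not the chapter's routing framework.

```python
# Each business process can be realized by more than one pathway through the
# SOA tiers; every pathway lists the server pools it touches, tier by tier.
PATHWAYS = {
    "order_checkout": [
        ["web-A", "app-A", "db-A"],      # primary implementation
        ["web-A", "app-B", "cache-B"],   # alternative implementation option
    ],
}

# Hypothetical utilization of each pool (horizontal distribution happens inside a pool).
LOAD = {"web-A": 0.40, "app-A": 0.95, "app-B": 0.30, "db-A": 0.50, "cache-B": 0.20}

def route(process):
    """Vertical load distribution: pick the implementation pathway whose most
    heavily loaded pool is the least loaded among all candidate pathways."""
    return min(PATHWAYS[process], key=lambda path: max(LOAD[pool] for pool in path))

print(route("order_checkout"))   # the alternative pathway wins while app-A is saturated
```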

  15. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies` HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  16. A scalable implementation of RI-SCF on parallel computers

    International Nuclear Information System (INIS)

    Fruechtl, H.A.; Kendall, R.A.; Harrison, R.J.

    1996-01-01

    In order to avoid the integral bottleneck of conventional SCF calculations, the Resolution of the Identity (RI) method is used to obtain an approximate solution to the Hartree-Fock equations. In this approximation only three-center integrals are needed to build the Fock matrix. It has been implemented as part of the NWChem package of portable and scalable ab initio programs for parallel computers. Utilizing the V-approximation, both the Coulomb and exchange contributions to the Fock matrix can be calculated from a transformed set of three-center integrals which have to be precalculated and stored. A distributed in-core method as well as a disk-based implementation have been programmed. Details of the implementation as well as the parallel programming tools used are described. We also give results and timings from benchmark calculations
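
    For context (not taken from this record), the RI "V-approximation" mentioned above is commonly written as a factorization of the four-center electron-repulsion integrals into precalculable three-center quantities over an auxiliary basis {P}:

```latex
(\mu\nu|\lambda\sigma) \;\approx\; \sum_{P,Q} (\mu\nu|P)\,\bigl[\mathbf{V}^{-1}\bigr]_{PQ}\,(Q|\lambda\sigma),
\qquad V_{PQ} = (P|Q)
```

    so only the three-center integrals need to be stored, which is what makes the distributed in-core and disk-based variants described in the record practical.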

  17. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...
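
    As a loose sketch of the signature idea in this record (using random index masks as a stand-in for the Hadamard-derived masks, with invented function names and sizes), each mask produces one signature per vector, and a query/data pair becomes a candidate when any of their signatures match.

```python
import numpy as np

def make_masks(dim, n_masks, bits_per_mask, seed=0):
    """Each mask selects a subset of vector positions (a stand-in for the
    Hadamard-derived masks described in the record)."""
    rng = np.random.default_rng(seed)
    return [rng.choice(dim, size=bits_per_mask, replace=False) for _ in range(n_masks)]

def signatures(vector, masks):
    """One signature per mask: the sign pattern of the masked elements."""
    return [tuple(vector[m] > 0) for m in masks]

def candidate_pair(query, data, masks):
    """The pair is a candidate if any of its signatures match exactly."""
    return any(a == b for a, b in zip(signatures(query, masks), signatures(data, masks)))

masks = make_masks(dim=16, n_masks=8, bits_per_mask=4)
q = np.random.default_rng(1).standard_normal(16)
d = q + 0.05 * np.random.default_rng(2).standard_normal(16)   # near-duplicate of q
print(candidate_pair(q, d, masks))
```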

  18. Implementation of Scientific Computing Applications on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Guochun Shi

    2009-01-01

    Full Text Available The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.

  19. Implementation of the Facility Integrated Inventory Computer System (FICS)

    International Nuclear Information System (INIS)

    McEvers, J.A.; Krichinsky, A.M.; Layman, L.R.; Dunnigan, T.H.; Tuft, R.M.; Murray, W.P.

    1980-01-01

    This paper describes a computer system which has been developed for nuclear material accountability and implemented in an active radiochemical processing plant involving remote operations. The system possesses the following features: comprehensive, timely records of the location and quantities of special nuclear materials; automatically updated book inventory files on the plant and sub-plant levels of detail; material transfer coordination and cataloging; automatic inventory estimation; sample transaction coordination and cataloging; automatic on-line volume determination, limit checking, and alarming; extensive information retrieval capabilities; and terminal access and application software monitoring and logging

  20. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    International Nuclear Information System (INIS)

    Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.

    2009-01-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
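
    To make the three distribution strategies concrete, here is a toy Python sketch (invented data structures and affinity rule, not the x2f_mpi library): work items either stay on their home rank (PLP), are scattered uniformly at random (URAN), or are sent to a preferred rank chosen by an affinity rule (PREF).

```python
import random

def assign(particles, ranks, strategy="PLP", owner=None):
    """Toy redistribution of chemistry work items among MPI-like ranks.

    PLP  - purely local processing: every rank keeps its own particles.
    URAN - uniformly random distribution across all ranks.
    PREF - preferential distribution: send each item to a rank assumed to hold
           similar tabulated compositions (here: a caller-supplied owner rule).
    """
    buckets = {r: [] for r in range(ranks)}
    for i, p in enumerate(particles):
        if strategy == "PLP":
            buckets[p["home_rank"]].append(i)
        elif strategy == "URAN":
            buckets[random.randrange(ranks)].append(i)
        elif strategy == "PREF":
            buckets[owner(p)].append(i)
    return buckets

particles = [{"home_rank": i % 4, "zone": i % 2} for i in range(12)]
print(assign(particles, ranks=4, strategy="URAN"))
print(assign(particles, ranks=4, strategy="PREF", owner=lambda p: p["zone"]))
```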

  1. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    Science.gov (United States)

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel

  2. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms being parallel in nature can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma which has been theorized to have existed in very early stages of the evolution of the universe by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones imposes enormous computation demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  3. SOFTWARE TOOLS FOR COMPUTING EXPERIMENT AIMED AT MULTIVARIATE ANALYSIS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    A. V. Tyurin

    2015-09-01

    Full Text Available A concept for the organization and planning of a computational experiment aimed at the implementation of multivariate analysis of complex multifactor models is proposed. It is based on the generation of a calculation tree. The logical and structural schemes of the tree are given, as well as software tools for automating work with it: calculation generation, carrying out calculations, and analysis of the obtained results. Computer modeling systems, and such special-purpose systems as RACS and PRADIS, do not solve the problems connected with effectively carrying out a computational experiment, consisting of its organization, planning, execution and analysis of the results. Calculation data storage for the organization of the computational experiment is proposed in the form of an input and output data tree. Each tree node has a reference to the calculation of the model step performed earlier. The calculation tree is stored in a specially organized directory structure. A software tool is proposed for creating and modifying a design scheme that stores the structure of one branch of the calculation tree, with a view to effective planning of multivariate calculations. A set of special-purpose software tools makes it possible to quickly generate and modify the tree and to add calculations with step-by-step changes in the model factors. To perform calculations, a software environment in the form of a graphical user interface for creating and modifying calculation scripts has been developed. This environment makes it possible to traverse the calculation tree in a certain order and to perform serial and parallel initiation of computational modules. To analyze the results, a software tool has been developed that operates on the basis of the tag tree. This is a special tree that stores the input and output data of the calculations in the form of sets of changes of the appropriate model factors. The tool enables selection of the factors and responses of the model at various steps
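
    A minimal sketch of the calculation-tree idea described above, with invented names and a trivial model: each node records its factors and a reference to the parent step, and stores its results in a directory of its own.

```python
import json
import os


class CalcNode:
    """One step of a multivariate computational experiment. Every node keeps a
    reference to the earlier step it was derived from, and its data live in a
    directory of their own (a toy version of the tree-in-directories idea)."""

    def __init__(self, root, name, factors, parent=None):
        self.name, self.factors, self.parent = name, factors, parent
        base = parent.path if parent else root
        self.path = os.path.join(base, name)
        os.makedirs(self.path, exist_ok=True)

    def run(self, model):
        result = model(**self.factors)
        record = {"factors": self.factors, "result": result,
                  "parent": self.parent.path if self.parent else None}
        with open(os.path.join(self.path, "result.json"), "w") as fh:
            json.dump(record, fh, indent=2)
        return result


# Grow one branch of the tree by changing a single factor step by step.
root = "experiment_tree"
base = CalcNode(root, "base", {"pressure": 1.0, "temperature": 300.0})
hot = CalcNode(root, "hot", {"pressure": 1.0, "temperature": 350.0}, parent=base)
for node in (base, hot):
    print(node.path, node.run(lambda pressure, temperature: pressure / temperature))
```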

  4. Search systems and computer-implemented search methods

    Science.gov (United States)

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  5. Search systems and computer-implemented search methods

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2017-03-07

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  6. Implementing Computer-Based Procedures: Thinking Outside the Paper Margins

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; Bly, Aaron

    2017-06-01

    In the past year there has been increased interest from the nuclear industry in adopting the use of electronic work packages and computer-based procedures (CBPs) in the field. The goal is to incorporate the use of technology in order to meet the Nuclear Promise requirements of reducing costs, improving efficiency, and decreasing human error rates in plant operations. Researchers, together with the nuclear industry, have been investigating the benefits an electronic work package system, and specifically CBPs, would have over current paper-based procedure practices. There are several classifications of CBPs, ranging from a straight copy of the paper-based procedure in PDF format to a more intelligent, dynamic CBP. A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping and correct component verification), and dynamic step presentation. The latter means that the CBP system could display only the steps relevant to the operating mode, plant status, and the task at hand. The improvements can reduce the worker's workload and human error by allowing the worker to focus more on the task at hand. A team of human factors researchers at the Idaho National Laboratory studied and developed design concepts for CBPs for field workers between 2012 and 2016. The focus of the research was to present information in a procedure in a manner that leveraged the dynamic and computational capabilities of a handheld device, allowing the worker to focus more on the task at hand than on the administrative processes currently applied when conducting work in the plant. As a part of the research, the team identified types of work, instructions, and scenarios where the transition to a dynamic CBP system might not be as beneficial as it would be for other types of work in the plant. In most cases the decision to use a dynamic CBP system and utilize the dynamic capabilities gained will be beneficial to the worker

  7. The implementation of CP1 computer code in the Honeywell Bull computer in Brazilian Nuclear Energy Commission (CNEN)

    International Nuclear Information System (INIS)

    Couto, R.T.

    1987-01-01

    The implementation of the CP1 computer code on the Honeywell Bull computer at the Brazilian Nuclear Energy Commission is presented. CP1 is a computer code used to solve the point kinetics equations with Doppler feedback from the system temperature variation, based on the Newton cooling equation. (E.G.) [pt

  8. Three-dimensional pseudo-random number generator for implementing in hybrid computer systems

    International Nuclear Information System (INIS)

    Ivanov, M.A.; Vasil'ev, N.P.; Voronin, A.V.; Kravtsov, M.Yu.; Maksutov, A.A.; Spiridonov, A.A.; Khudyakova, V.I.; Chugunkov, I.V.

    2012-01-01

    The algorithm for generating pseudo-random numbers oriented to implementation by using hybrid computer systems is considered. The proposed solution is characterized by a high degree of parallel computing [ru

  9. Characterizing and Implementing Efficient Primitives for Privacy-Preserving Computation

    Science.gov (United States)

    2015-07-01

    Efficient Mobile Oblivious Computation (EMOC): Mobile applications increasingly require users to surrender private... In this effort, we developed Efficient Mobile Oblivious Computation (EMOC), a set of SFE protocols customized for the mobile platform. Using...

  10. Computer arithmetic and validity theory, implementation, and applications

    CERN Document Server

    Kulisch, Ulrich

    2013-01-01

    This is the revised and extended second edition of the successful basic book on computer arithmetic. It is consistent with the most recent standard developments in the field. The book shows how the arithmetic capability of the computer can be enhanced. The work is motivated by the desire and the need to improve the accuracy of numerical computing and to control the quality of the computed results (validity). The accuracy requirements for the elementary floating-point operations are extended to the customary product spaces of computations, including interval spaces. The mathematical properties

  11. Design and implementation of distributed spatial computing node based on WPS

    International Nuclear Information System (INIS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-01-01

    Currently, research work on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically researches the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification by the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is then designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed

  12. Computer Implementation of the Two-Factor DP Model for ...

    African Journals Online (AJOL)

    A computer program known as Program Simplex, which takes advantage of this sparseness, has been applied to obtain an optimal solution to the manpower planning problem presented. It has also been observed that LP models with few nonzero coefficients can easily be solved by using a computer to obtain an optimal ...

  13. Secure Cloud Computing Implementation Study For Singapore Military Operations

    Science.gov (United States)

    2016-09-01

    Benefits of cloud computing in healthcare (adapted from [13]): clinical research, electronic medical records, collaboration solutions. ...medium to send orders to tactical action units, the cloud should also contain a feature to verify that the action units have received and understood the...

  14. Projecting Grammatical Features in Nominals: Cognitive Processing Theory & Computational Implementation

    Science.gov (United States)

    2010-03-01

    ...functionality and plausibility distinguishes this research from most research in computational linguistics and computational psycholinguistics. ... Psycholinguistic Theory: There is extensive psycholinguistic evidence that human language processing is essentially incremental and interactive. ...challenges of psycholinguistic research is to explain how humans can process language effortlessly and accurately given the complexity and ambiguity that is...

  15. Implementation of QR up- and downdating on a massively parallel |computer

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Hansen, Per Christian; Madsen, Kaj

    1995-01-01

    We describe an implementation of QR up- and downdating on a massively parallel computer (the Connection Machine CM-200) and show that the algorithm maps well onto the computer. In particular, we show how the use of corrected semi-normal equations for downdating can be efficiently implemented. We also illustrate the use of our algorithms in a new LP algorithm.
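
    As background to the record (a generic serial NumPy sketch, not the CM-200 implementation), QR updating after appending a row can be done by annihilating the new row with Givens rotations; downdating, i.e. removing a row, is the numerically delicate case the authors handle with corrected semi-normal equations. Function and variable names below are illustrative.

```python
import numpy as np

def qr_add_row(R, new_row):
    """Update the triangular factor R of A when one row is appended to A.

    The appended row is annihilated column by column with Givens rotations
    against the diagonal of R (the standard "updating" step)."""
    n = R.shape[1]
    work = np.vstack([R, np.asarray(new_row, dtype=float)])
    m = work.shape[0] - 1                   # index of the appended row
    for j in range(n):
        a, b = work[j, j], work[m, j]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        top, bottom = work[j, j:].copy(), work[m, j:].copy()
        work[j, j:] = c * top + s * bottom
        work[m, j:] = -s * top + c * bottom
    return work[:n, :]

# Tiny check: append a row and compare with a fresh factorization of the
# augmented matrix (the factors agree up to signs of the rows of R).
A = np.random.default_rng(0).standard_normal((6, 4))
_, R = np.linalg.qr(A)
x = np.random.default_rng(1).standard_normal(4)
R_updated = qr_add_row(R, x)
_, R_direct = np.linalg.qr(np.vstack([A, x]))
print(np.allclose(np.abs(R_updated), np.abs(R_direct)))
```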

  16. Implementation of Computer Multimedia for Diabetes Prevention in African-American Women

    OpenAIRE

    Gerber, Ben; Davis, Kara; Wideman, Danita; Berbaum, Michael

    2005-01-01

    Two urban churches received touch-screen computers with health education software installed. The software included a multimedia application on diabetes risk factor reduction tailored for African-American women. A “Computer Promoter” was recruited at each church to encourage computer use and provide basic technical support. One year after implementation, two focus groups of congregants discussed barriers to computer use. Computer usage was related to church leadersh...

  17. Implementation of Keystroke Dynamics for Authentication in Computer Systems

    Directory of Open Access Journals (Sweden)

    S. V. Skuratov

    2010-06-01

    Full Text Available Implementation of keystroke dynamics in multifactor authentication systems is described in the article. An original access control system based on a totality of matchers is presented. Testing results and useful recommendations are also adduced.

  18. ARTS III Computer Systems Performance Measurement Prototype Implementation

    Science.gov (United States)

    1974-04-01

    Direct measurement of computer systems is of vital importance in: a) developing an intelligent grasp of the variables which affect overall performance; b) tuning the system for optimum benefit; c) determining under what conditions saturation threshold...

  19. Implementation of Cloud Computing into VoIP

    Directory of Open Access Journals (Sweden)

    Floriana GEREA

    2012-08-01

    Full Text Available This article defines Cloud Computing and highlights key concepts, the benefits of using virtualization, its weaknesses, and ways of combining it with classical VoIP technologies applied to large-scale businesses. The analysis takes into consideration management strategies and resources for better customer orientation and risk management, all for sustaining the Service Level Agreement (SLA). An important issue in cloud computing can be security, and for this reason several security solutions are presented.

  20. Implementation of the SUPERFISH computer code in the CYBER computer system of IEAv (Instituto de Estudos Avancados) in Brazil

    International Nuclear Information System (INIS)

    Silva, R. da.

    1982-10-01

    The computer code SUPERFISH has been implemented in the CYBER - IEAv computer system. This code locates electromagnetic modes in rf resonant cavities. The manipulation of the boundary conditions and of the driving point was optimized. A computer program (ARRUELA) was developed in order to make SUPERFISH analysis of the rf properties of disc-and-washer cavities easier. This version of SUPERFISH showed satisfactory performance under tests. (Author) [pt

  1. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.
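
    The shape of such a steering loop can be conveyed with a toy example. In the Python sketch below, a full-grid interpolant (RegularGridInterpolator) stands in for the paper's sparse-grid surrogate and a closed-form function stands in for the expensive simulation; the incremental extension of the surrogate, the distributed data management and the accelerator-based evaluation are omitted.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def expensive_sim(p1, p2):
    # Placeholder for the time-consuming simulation.
    return np.sin(3 * p1) * np.cos(2 * p2)

# Offline (or incremental) phase: sample the parameter space and build the surrogate.
p1_nodes = np.linspace(0.0, 1.0, 17)
p2_nodes = np.linspace(0.0, 1.0, 17)
samples = expensive_sim(*np.meshgrid(p1_nodes, p2_nodes, indexing="ij"))
surrogate = RegularGridInterpolator((p1_nodes, p2_nodes), samples)

def steer(p1, p2):
    """Interactive query: answer from the surrogate instead of re-running the simulation."""
    return float(surrogate([[p1, p2]])[0])

print(steer(0.35, 0.8), expensive_sim(0.35, 0.8))
```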

  2. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hendrickson, Bruce [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wade, Doug [National Nuclear Security Administration (NNSA), Washington, DC (United States). Office of Advanced Simulation and Computing and Institutional Research and Development; Hoang, Thuc [National Nuclear Security Administration (NNSA), Washington, DC (United States). Computational Systems and Software Environment

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  3. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  4. Naval Computer-Based Instruction: Cost, Implementation and Effectiveness Issues.

    Science.gov (United States)

    1988-03-01

    that the designer forgot to take certain critical factors into account. The Navy has had its share of implementation mistakes. Many of the mistakes are...

  5. Implementing an ROI Measurement Process at Dell Computer.

    Science.gov (United States)

    Tesoro, Ferdinand

    1998-01-01

    This return-on-investment (ROI) evaluation study determined the business impact of the sales negotiation training course to Dell Computer Corporation. A five-step ROI measurement process was used: Plan-Develop-Analyze-Communicate-Leverage. The corporate sales information database was used to compare pre- and post-training metrics for both training…

  6. Implementing Computer Mediated Communication in the College Classroom.

    Science.gov (United States)

    Clay-Warner, Jody; Marsh, Kristin

    2000-01-01

    Examines the use of computer-mediated communication (CMC) in the college classroom using survey data from 89 undergraduate sociology students. Discusses advantages and disadvantages of conferencing systems, either through local area networks or Internet-based systems; preferred uses of CMC; and results of regression analyses. (Author/LRW)

  7. Prolog as description and implementation language in computer science teaching

    DEFF Research Database (Denmark)

    Christiansen, Henning

    Prolog is a powerful pedagogical instrument for theoretical elements of computer science when used as combined description language and experimentation tool. A teaching methodology based on this principle has been developed and successfully applied in a context with a heterogeneous student...

  8. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  9. A computer implementation of a theory of human stereo vision.

    Science.gov (United States)

    Grimson, W E

    1981-05-12

    Recently, Marr & Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented, and consists of five steps. (i) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by ∇²G, the Laplacian of a Gaussian function. (ii) Zero crossings in the filtered images are found along horizontal scan lines. (iii) For each mask size, matching takes place between zero crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, it can be shown that false targets pose only a simple problem. (iv) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence. In this way, the matching process gradually moves from dealing with large disparities at a low resolution to dealing with small disparities at a high resolution. (v) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-dimensional sketch. To support the adequacy of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. Also statistical assumptions made by Marr & Poggio are supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.
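
    Steps (i) and (ii) are straightforward to reproduce in outline. The Python sketch below filters an image with a Laplacian-of-Gaussian mask of a single size and marks zero crossings along horizontal scan lines; the eccentricity-dependent mask sizes, orientation labelling, matching, vergence control and the 2 1/2-dimensional sketch (steps iii-v) are not included.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossings_log(image, sigma):
    """Filter with a Laplacian-of-Gaussian mask (one size only) and mark
    zero crossings along horizontal scan lines (steps i-ii above)."""
    filtered = gaussian_laplace(np.asarray(image, dtype=float), sigma=sigma)
    # A zero crossing lies between horizontally adjacent pixels of opposite sign.
    crossings = np.zeros(filtered.shape, dtype=bool)
    crossings[:, :-1] = np.signbit(filtered[:, :-1]) != np.signbit(filtered[:, 1:])
    return filtered, crossings

img = np.random.default_rng(0).random((64, 64))   # stand-in for one stereo half-image
_, zc = zero_crossings_log(img, sigma=2.0)
print(int(zc.sum()), "horizontal zero crossings found")
```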

  10. Selection and implementation of a laboratory computer system.

    Science.gov (United States)

    Moritz, V A; McMaster, R; Dillon, T; Mayall, B

    1995-07-01

    The process of selection of a pathology computer system has become increasingly complex as there are an increasing number of facilities that must be provided and stringent performance requirements under heavy computing loads from both human users and machine inputs. Furthermore, the continuing advances in software and hardware technology provide more options and innovative new ways of tackling problems. These factors taken together pose a difficult and complex set of decisions and choices for the system analyst and designer. The selection process followed by the Microbiology Department at Heidelberg Repatriation Hospital included examination of existing systems, development of a functional specification followed by a formal tender process. The successful tenderer was then selected using predefined evaluation criteria. The successful tenderer was a software development company that developed and supplied a system based on a distributed network using a SUN computer as the main processor. The software was written using Informix running on the UNIX operating system. This represents one of the first microbiology systems developed using a commercial relational database and fourth generation language. The advantages of this approach are discussed.

  11. Public policy and regulatory implications for the implementation of Opportunistic Cloud Computing Services for Enterprises

    DEFF Research Database (Denmark)

    Kuada, Eric; Olesen, Henning; Henten, Anders

    2012-01-01

    Opportunistic Cloud Computing Services (OCCS) is a social network approach to the provisioning and management of cloud computing services for enterprises. This paper discusses how public policy and regulations will impact on OCCS implementation. We rely on documented publicly available government...... and corporate policies on the adoption of cloud computing services and deduce the impact of these policies on their adoption of opportunistic cloud computing services. We conclude that there are regulatory challenges on data protection that raise issues for cloud computing adoption in general; and the lack...... of a single globally accepted data protection standard poses some challenges for very successful implementation of OCCS for companies. However, the direction of current public and corporate policies on cloud computing makes a good case for them to try out opportunistic cloud computing services....

  12. Implementation of Fog Computing for Reliable E-Health Applications

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Mihaylov, Mihail Rumenov

    2015-01-01

    This paper addresses the current technical challenge of an impedance mismatch between the requirements of smart connected object applications within the sensing environment and the characteristics of today’s cloud infrastructure. This research work investigates the possibility to offload cloud...... tasks, such as storage and data signal processing to the edge of the network, thus decreasing the latency associated with performing those tasks within the cloud. The research scenario is an e-Health laboratory implementation where the real-time processing is performed by the home PC, while...... the extracted metadata is sent to the cloud for further processing...

  13. Computing tools for implementing standards for single-case designs.

    Science.gov (United States)

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook-the WWC standards. These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  14. Quantum computation: algorithms and implementation in quantum dot devices

    Science.gov (United States)

    Gamble, John King

    In this thesis, we explore several aspects of both the software and hardware of quantum computation. First, we examine the computational power of multi-particle quantum random walks in terms of distinguishing mathematical graphs. We study both interacting and non-interacting multi-particle walks on strongly regular graphs, proving some limitations on distinguishing powers and presenting extensive numerical evidence indicative of interactions providing more distinguishing power. We then study the recently proposed adiabatic quantum algorithm for Google PageRank, and show that it exhibits power-law scaling for realistic WWW-like graphs. Turning to hardware, we next analyze the thermal physics of two nearby 2D electron gases (2DEGs), and show that an analogue of the Coulomb drag effect exists for heat transfer. In some distance and temperature regimes, this heat transfer is more significant than phonon dissipation channels. After that, we study the dephasing of two-electron states in a single silicon quantum dot. Specifically, we consider dephasing due to the electron-phonon coupling and charge noise, separately treating orbital and valley excitations. In an ideal system, dephasing due to charge noise is strongly suppressed due to a vanishing dipole moment. However, introduction of disorder or anharmonicity leads to large effective dipole moments, and hence possibly strong dephasing. Building on this work, we next consider more realistic systems, including structurally disordered systems. We present experiment and theory, which demonstrate energy levels that vary with quantum dot translation, implying a structurally disordered system. Finally, we turn to the issues of valley mixing and valley-orbit hybridization, which occur due to atomic-scale disorder at quantum well interfaces. We develop a new theoretical approach to study these effects, which we name the disorder-expansion technique. We demonstrate that this method successfully reproduces atomistic tight-binding techniques

  15. Understanding underspecification: A comparison of two computational implementations.

    Science.gov (United States)

    Logačev, Pavel; Vasishth, Shravan

    2016-01-01

    Swets et al. (2008. Underspecification of syntactic ambiguities: Evidence from self-paced reading. Memory and Cognition, 36(1), 201-216) presented evidence that the so-called ambiguity advantage [Traxler et al. (1998). Adjunct attachment is not a form of lexical ambiguity resolution. Journal of Memory and Language, 39(4), 558-592], which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task-demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment: when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy. It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behaviour may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.'s data; as a baseline, we also fitted three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM model provides a better fit than the PSM. We interpret the results as lack of unambiguous evidence in favour of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for NSM over PSM. More generally, our work provides a method for choosing between

  16. Implementation of a Novel Educational Modeling Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sara Ouahabi

    2014-12-01

    Full Text Available The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers' needs. Due to its advantages, Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. For education, the cloud is used to manage the large volume of educational resources produced across many universities. Keeping content interoperable in an inter-university Cloud is not always easy. Diffusion of pedagogical content on the Cloud by different E-Learning institutions leads to heterogeneous content, which influences the quality of teaching that universities offer to teachers and learners. This motivates the idea of using IMS-LD coupled with metadata in the cloud. This paper presents the implementation of our previous educational modeling by combining a J2EE application with the Reload editor to model heterogeneous content in the cloud. The new approach that we followed focuses on maintaining interoperability of Educational Cloud content for teachers and learners and facilitates the identification, reuse, sharing, and adaptation of teaching and learning resources in the Cloud.

  17. Implementing iRound: A Computer-Based Auditing Tool.

    Science.gov (United States)

    Brady, Darcie

    Many hospitals use rounding or auditing as a tool to help identify gaps and needs in quality and process performance. Some hospitals are also using rounding to help improve patient experience. It is known that purposeful rounding helps improve Hospital Consumer Assessment of Healthcare Providers and Systems scores by helping manage patient expectations, provide service recovery, and recognize quality caregivers. Rounding works when a standard method is used across the facility, where data are comparable and trustworthy. This facility had a pen-and-paper process in place that made data reporting difficult and created a silo culture between departments; in addition, most audits and rounds were completed differently on each unit. It was recognized that this facility needed to standardize the rounding and auditing process. The tool created by the Advisory Board called iRound was chosen as the tool this facility would use for patient experience rounds as well as process and quality rounding. The success of the iRound tool in this facility depended on several factors that started many months before implementation and continue through everyday usage.

  18. Short-term effects of implemented high intensity shoulder elevation during computer work

    OpenAIRE

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal; Olsen, Henrik B; Søgaard, Karen; Holtermann, Andreas

    2009-01-01

    Abstract Background Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during c...

  19. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  20. Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing.

    Science.gov (United States)

    Larger, L; Soriano, M C; Brunner, D; Appeltant, L; Gutierrez, J M; Pesquera, L; Mirasso, C R; Fischer, I

    2012-01-30

    Many information processing challenges are difficult to solve with traditional Turing or von Neumann approaches. Implementing unconventional computational methods is therefore essential and optics provides promising opportunities. Here we experimentally demonstrate optical information processing using a nonlinear optoelectronic oscillator subject to delayed feedback. We implement a neuro-inspired concept, called Reservoir Computing, proven to possess universal computational capabilities. We particularly exploit the transient response of a complex dynamical system to an input data stream. We employ spoken digit recognition and time series prediction tasks as benchmarks, achieving competitive processing figures of merit.
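
    The reservoir computing principle itself is hardware-agnostic, so its flavour can be shown with a small software reservoir. The Python sketch below drives a random recurrent network and trains only a ridge-regression readout on a toy one-step prediction task; the optoelectronic delay oscillator, the input masking and the spoken-digit and time-series benchmarks of the paper are not modelled, and all sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200                                  # illustrative sizes
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius below 1

def run_reservoir(u):
    """Drive the random reservoir with the input sequence u and collect its states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained (ridge regression), here on one-step prediction.
u = np.sin(0.2 * np.arange(500))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```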

  1. More scalability, less pain: A simple programming model and its implementation for extreme computing

    International Nuclear Information System (INIS)

    Lusk, E.L.; Pieper, S.C.; Butler, R.M.

    2010-01-01

    This is the story of a simple programming model, its implementation for extreme computing, and a breakthrough in nuclear physics. A critical issue for the future of high-performance computing is the programming model to use on next-generation architectures. Described here is a promising approach: program very large machines by combining a simplified programming model with a scalable library implementation. The presentation takes the form of a case study in nuclear physics. The chosen application addresses fundamental issues in the origins of our Universe, while the library developed to enable this application on the largest computers may have applications beyond this one.
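
    The flavour of such a simplified model (a manager fills a pool of tasks, workers pull tasks and return results) can be conveyed with a toy shared-memory example. The Python sketch below is only an illustration of the manager/worker pattern; the scalable library described above targets MPI on the largest machines and handles dynamic load balancing that this sketch ignores, and the work function is a placeholder.

```python
from multiprocessing import Pool

def work(task):
    # Placeholder for the physics kernel a worker would run on one task.
    return task, sum(i * i for i in range(task))

if __name__ == "__main__":
    total = 0
    with Pool(processes=4) as pool:
        # The "manager" hands out tasks and consumes results as they arrive.
        for task, result in pool.imap_unordered(work, range(1, 1001)):
            total += result
    print("aggregated result:", total)
```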

  2. Short-term effects of implemented high intensity shoulder elevation during computer work.

    Science.gov (United States)

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal; Olsen, Henrik B; Søgaard, Karen; Holtermann, Andreas

    2009-08-10

    Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder elevation. RPE was reported, productivity (drawings per min) measured, and bipolar surface electromyography (EMG) recorded from the dominant upper trapezius during pauses and sessions of computer work. Repeated measure ANOVA with Bonferroni corrected post-hoc tests was applied for the statistical analyses. The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius part during the subsequent pause from computer work. Since a preceding high intensity shoulder elevation did not impose a negative impact on perceived effort, productivity or upper trapezius activity during computer work, implementation of high intensity contraction during computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a pause with preceding high intensity contraction requires further investigation before high

  3. Short-term effects of implemented high intensity shoulder elevation during computer work

    Directory of Open Access Journals (Sweden)

    Madeleine Pascal

    2009-08-01

    Full Text Available Abstract Background Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. Methods 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder elevation. RPE was reported, productivity (drawings per min) measured, and bipolar surface electromyography (EMG) recorded from the dominant upper trapezius during pauses and sessions of computer work. Repeated measure ANOVA with Bonferroni corrected post-hoc tests was applied for the statistical analyses. Results The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius part during the subsequent pause from computer work. Conclusion Since a preceding high intensity shoulder elevation did not impose a negative impact on perceived effort, productivity or upper trapezius activity during computer work, implementation of high intensity contraction during computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a

  4. The Observation of Bahasa Indonesia Official Computer Terms Implementation in Scientific Publication

    Science.gov (United States)

    Gunawan, D.; Amalia, A.; Lydia, M. S.; Muthaqin, M. I.

    2018-03-01

    The government of the Republic of Indonesia had issued a regulation to substitute computer terms in foreign languages that had been used earlier with official computer terms in Bahasa Indonesia. This regulation was stipulated in Presidential Decree No. 2 of 2001 concerning the introduction of official computer terms in Bahasa Indonesia (known as Senarai Padanan Istilah/SPI). After sixteen years, the people of Indonesia, particularly academics, should have implemented the official computer terms in their official publications. This observation was conducted to discover the use of official computer terms in scientific publications written in Bahasa Indonesia. The data sources used in this observation are publications by academics, particularly in the computer science field. The method used in the observation is divided into four stages. The first stage is metadata harvesting using the Open Archive Initiative - Protocol for Metadata Harvesting (OAI-PMH). The second is converting the harvested documents (in PDF format) to plain text. The third stage is text preprocessing in preparation for string matching. The final stage is searching for the official computer terms, based on the 629 SPI terms, using the Boyer-Moore algorithm. We observed that there are 240,781 foreign computer terms in 1,156 scientific publications from six universities. This result shows that foreign computer terms are still widely used by academics.
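
    The matching stage can be sketched with a simplified member of the same algorithm family. The Python example below implements the Boyer-Moore-Horspool variant (bad-character shifts only) rather than the full Boyer-Moore algorithm used in the study, and the sample text and search term are illustrative.

```python
def horspool_search(text, pattern):
    """Return start indices of pattern in text (Boyer-Moore-Horspool variant,
    bad-character shifts only)."""
    m, n = len(pattern), len(text)
    if m == 0 or n < m:
        return []
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    hits, i = [], m - 1
    while i < n:
        j, k = m - 1, i
        while j >= 0 and text[k] == pattern[j]:
            j, k = j - 1, k - 1
        if j < 0:
            hits.append(k + 1)
        i += shift.get(text[i], m)      # shift by the character under the window's end
    return hits

# Illustrative only: look for one official term in a sample Indonesian sentence.
print(horspool_search("pengguna mengunduh berkas dari peladen", "peladen"))
```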

  5. Implementation of G-computation on a simulated data set: demonstration of a causal inference technique.

    Science.gov (United States)

    Snowden, Jonathan M; Rose, Sherri; Mortimer, Kathleen M

    2011-04-01

    The growing body of work in the epidemiology literature focused on G-computation includes theoretical explanations of the method but very few simulations or examples of application. The small number of G-computation analyses in the epidemiology literature relative to other causal inference approaches may be partially due to a lack of didactic explanations of the method targeted toward an epidemiology audience. The authors provide a step-by-step demonstration of G-computation that is intended to familiarize the reader with this procedure. The authors simulate a data set and then demonstrate both G-computation and traditional regression to draw connections and illustrate contrasts between their implementation and interpretation relative to the truth of the simulation protocol. A marginal structural model is used for effect estimation in the G-computation example. The authors conclude by answering a series of questions to emphasize the key characteristics of causal inference techniques and the G-computation procedure in particular.
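
    The procedure lends itself to a compact worked example. The Python sketch below follows the same overall steps (simulate data with confounding, fit an outcome regression, average predictions under the two exposure regimes); the data-generating model, variable names and effect size are illustrative assumptions and do not reproduce the simulation protocol or the marginal structural model of the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
w = rng.normal(size=n)                          # confounder
a = rng.binomial(1, 1.0 / (1.0 + np.exp(-w)))   # exposure depends on the confounder
y = 2.0 * a + 1.5 * w + rng.normal(size=n)      # outcome; true marginal effect is 2
df = pd.DataFrame({"y": y, "a": a, "w": w})

outcome_model = smf.ols("y ~ a + w", data=df).fit()

# G-computation step: predict every subject's outcome under both exposure
# regimes and contrast the averages.
mu1 = outcome_model.predict(df.assign(a=1)).mean()
mu0 = outcome_model.predict(df.assign(a=0)).mean()
print("G-computation estimate:", mu1 - mu0)     # approximately 2
```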

  6. Analysis and selection of optimal function implementations in massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
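
    The underlying idea, profile competing implementations across input dimensions and then dispatch to the best one, can be mimicked on a single machine. The Python sketch below benchmarks three interchangeable implementations over one input dimension (problem size) and builds a small dispatch table rather than generating selection code; it does not represent the massively parallel setting of the patent.

```python
import timeit

def sum_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    return sum(range(n))

def sum_formula(n):
    return n * (n - 1) // 2

CANDIDATES = [sum_loop, sum_builtin, sum_formula]

def build_dispatch_table(sizes, repeats=5):
    """Collect timing data for each implementation at each profiled size and
    remember the fastest one (a toy stand-in for generated selection code)."""
    table = {}
    for n in sizes:
        timings = {f: timeit.timeit(lambda: f(n), number=repeats) for f in CANDIDATES}
        table[n] = min(timings, key=timings.get)
    return table

def call_best(table, n):
    nearest = min(table, key=lambda s: abs(s - n))   # use the closest profiled size
    return table[nearest](n)

table = build_dispatch_table([10, 1_000, 100_000])
print({n: f.__name__ for n, f in table.items()})
print(call_best(table, 50_000))
```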

  7. Patent law for computer scientists steps to protect computer-implemented inventions

    CERN Document Server

    Closa, Daniel; Giemsa, Falk; Machek, Jörg

    2010-01-01

    Written from over 70 years of experience, this overview explains patent laws across Europe, the US and Japan, and teaches readers how to think from a patent examiner's perspective. Over 10 detailed case studies are presented from different computer science applications.

  8. Research in advanced formal theorem-proving techniques. [design and implementation of computer languages

    Science.gov (United States)

    Raphael, B.; Fikes, R.; Waldinger, R.

    1973-01-01

    The results of a project aimed at the design and implementation of computer languages to aid in expressing problem solving procedures in several areas of artificial intelligence, including automatic programming, theorem proving, and robot planning, are summarized. The principal results of the project were the design and implementation of two complete systems, QA4 and QLISP, and their preliminary experimental use. The various applications of both QA4 and QLISP are given.

  9. 76 FR 52353 - Assumption Buster Workshop: “Current Implementations of Cloud Computing Indicate a New Approach...

    Science.gov (United States)

    2011-08-22

    ... explored in this series is cloud computing. The workshop on this topic will be held in Gaithersburg, MD on October 21, 2011. Assertion: "Current implementations of cloud computing indicate a new approach to security." Implementations of cloud computing have provided new ways of thinking about how to secure data...

  10. Computational implementation of the multi-mechanism deformation coupled fracture model for salt

    International Nuclear Information System (INIS)

    Koteras, J.R.; Munson, D.E.

    1996-01-01

    The Multi-Mechanism Deformation (M-D) model for creep in rock salt has been used in three-dimensional computations for the Waste Isolation Pilot Plant (WIPP), a potential waste repository. These computational studies are relied upon to make key predictions about long-term behavior of the repository. Recently, the M-D model was extended to include creep-induced damage. The extended model, the Multi-Mechanism Deformation Coupled Fracture (MDCF) model, is considerably more complicated than the M-D model and required a different technology from that of the M-D model for a computational implementation

  11. Implementation of generalized measurements with minimal disturbance on a quantum computer

    International Nuclear Information System (INIS)

    Decker, T.; Grassl, M.

    2006-01-01

    We consider the problem of efficiently implementing a generalized measurement on a quantum computer. Using methods from representation theory, we exploit symmetries of the states we want to identify or, respectively, symmetries of the measurement operators. In order to allow the information to be extracted sequentially, the disturbance of the quantum state due to the measurement should be minimal. (Abstract Copyright [2006], Wiley Periodicals, Inc.)

  12. Implementation fidelity of a computer-assisted intervention for children with speech sound disorders.

    Science.gov (United States)

    McCormack, Jane; Baker, Elise; Masso, Sarah; Crowe, Kathryn; McLeod, Sharynne; Wren, Yvonne; Roulstone, Sue

    2017-06-01

    Implementation fidelity refers to the degree to which an intervention or programme adheres to its original design. This paper examines implementation fidelity in the Sound Start Study, a clustered randomised controlled trial of computer-assisted support for children with speech sound disorders (SSD). Sixty-three children with SSD in 19 early childhood centres received computer-assisted support (Phoneme Factory Sound Sorter [PFSS] - Australian version). Educators facilitated the delivery of PFSS targeting phonological error patterns identified by a speech-language pathologist. Implementation data were gathered via (1) the computer software, which recorded when and how much intervention was completed over 9 weeks; (2) educators' records of practice sessions; and (3) scoring of fidelity (intervention procedure, competence and quality of delivery) from videos of intervention sessions. Less than one-third of children received the prescribed number of days of intervention, while approximately one-half participated in the prescribed number of intervention plays. Computer data differed from educators' data for total number of days and plays in which children participated; the degree of match was lower as data became more specific. Fidelity to intervention procedures, competency and quality of delivery was high. Implementation fidelity may impact intervention outcomes and so needs to be measured in intervention research; however, the way in which it is measured may impact on data.

  13. Computational implementation of a systems prioritization methodology for the Waste Isolation Pilot Plant: A preliminary example

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States). Dept. of Mathematics]; Anderson, D.R. [Sandia National Labs., Albuquerque, NM (United States). WIPP Performance Assessments Departments]; Baker, B.L. [Technadyne Engineering Consultants, Albuquerque, NM (United States)] [and others]

    1996-04-01

    A systems prioritization methodology (SPM) is under development to provide guidance to the US DOE on experimental programs and design modifications to be supported in the development of a successful licensing application for the Waste Isolation Pilot Plant (WIPP) for the geologic disposal of transuranic (TRU) waste. The purpose of the SPM is to determine the probabilities that the implementation of different combinations of experimental programs and design modifications, referred to as activity sets, will lead to compliance. Appropriate tradeoffs between compliance probability, implementation cost and implementation time can then be made in the selection of the activity set to be supported in the development of a licensing application. Descriptions are given for the conceptual structure of the SPM and the manner in which this structure determines the computational implementation of an example SPM application. Due to the sophisticated structure of the SPM and the computational demands of many of its components, the overall computational structure must be organized carefully to provide the compliance probabilities for the large number of activity sets under consideration at an acceptable computational cost. Conceptually, the determination of each compliance probability is equivalent to a large numerical integration problem. 96 refs., 31 figs., 36 tabs.

  14. Computational implementation of a systems prioritization methodology for the Waste Isolation Pilot Plant: A preliminary example

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-04-01

    A systems prioritization methodology (SPM) is under development to provide guidance to the US DOE on experimental programs and design modifications to be supported in the development of a successful licensing application for the Waste Isolation Pilot Plant (WIPP) for the geologic disposal of transuranic (TRU) waste. The purpose of the SPM is to determine the probabilities that the implementation of different combinations of experimental programs and design modifications, referred to as activity sets, will lead to compliance. Appropriate tradeoffs between compliance probability, implementation cost and implementation time can then be made in the selection of the activity set to be supported in the development of a licensing application. Descriptions are given for the conceptual structure of the SPM and the manner in which this structure determines the computational implementation of an example SPM application. Due to the sophisticated structure of the SPM and the computational demands of many of its components, the overall computational structure must be organized carefully to provide the compliance probabilities for the large number of activity sets under consideration at an acceptable computational cost. Conceptually, the determination of each compliance probability is equivalent to a large numerical integration problem. 96 refs., 31 figs., 36 tabs

  15. How to Implement Rigorous Computer Science Education in K-12 Schools? Some Answers and Many Questions

    Science.gov (United States)

    Hubwieser, Peter; Armoni, Michal; Giannakos, Michail N.

    2015-01-01

    Aiming to collect various concepts, approaches, and strategies for improving computer science education in K-12 schools, we edited this second special issue of the "ACM TOCE" journal. Our intention was to collect a set of case studies from different countries that would describe all relevant aspects of specific implementations of…

  16. Designing reversible arithmetic, logic circuit to implement micro-operation in quantum computation

    International Nuclear Information System (INIS)

    Kalita, Gunajit; Saikia, Navajit

    2016-01-01

    Future computing is expected to be more powerful while consuming less power. That is why quantum computing has been a key area of research for quite some time and is getting more and more attention. Quantum logic being reversible, a significant number of contributions on reversible logic has been reported in recent times. Reversible circuits are essential parts of quantum computers, and hence their designs are of great importance. In this paper, designs of reversible circuits are proposed using a recently proposed reversible gate for arithmetic and logic operations to implement various micro-operations (simple add and subtract, add with carry, subtract with borrow, transfer, incrementing, decrementing, etc., and logic operations like XOR, XNOR, complementing, etc.) in a reversible computer such as a quantum computer. The two new reversible designs proposed here for half adders and full adders are also used in the presented reversible circuits to implement various micro-operations. The quantum costs of these designs are comparable. Many of the implemented micro-operations have not appeared in the previous literature. The performance of the proposed circuits is compared with existing designs wherever available. (paper)
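
    As a point of reference, a reversible full adder can be built from CNOT and Toffoli (CCNOT) gates alone. The Python truth-table sketch below demonstrates that textbook construction; it uses the standard gates rather than the new reversible gate proposed in the paper, and quantum cost is not evaluated.

```python
def cnot(control, target):
    return control, target ^ control

def toffoli(c1, c2, target):
    return c1, c2, target ^ (c1 & c2)

def reversible_full_adder(a, b, c_in, ancilla=0):
    """Map (a, b, c_in, 0) reversibly to (a, b, sum, carry) using CNOT/Toffoli gates."""
    a, b, ancilla = toffoli(a, b, ancilla)          # ancilla ^= a & b
    a, b = cnot(a, b)                               # b ^= a
    b, c_in, ancilla = toffoli(b, c_in, ancilla)    # ancilla ^= (a ^ b) & c_in
    b, c_in = cnot(b, c_in)                         # c_in ^= a ^ b  -> sum
    a, b = cnot(a, b)                               # restore b
    return a, b, c_in, ancilla                      # (a, b, sum, carry_out)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print((a, b, c), "->", reversible_full_adder(a, b, c))
```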

  17. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    Science.gov (United States)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  18. The computational implementation of the landscape model: modeling inferential processes and memory representations of text comprehension.

    Science.gov (United States)

    Tzeng, Yuhtsuen; van den Broek, Paul; Kendeou, Panayiota; Lee, Chengyuan

    2005-05-01

    The complexity of text comprehension demands a computational approach to describe the cognitive processes involved. In this article, we present the computational implementation of the landscape model of reading. This model captures both on-line comprehension processes during reading and the off-line memory representation after reading is completed, incorporating both memory-based and coherence-based mechanisms of comprehension. The overall architecture and specific parameters of the program are described, and a running example is provided. Several studies comparing computational and behavioral data indicate that the implemented model is able to account for cycle-by-cycle comprehension processes and memory for a variety of text types and reading situations.

  19. Implementing a mainframe packaged pharmacy computer system in a 190-bed hospital.

    Science.gov (United States)

    Dotson, T L

    1986-03-01

    The implementation of a pharmacy computer system in a 190-bed institution is described. A computer system was instituted in the pharmacy department as part of a hospitalwide conversion to an online information system. Planning for implementation began nine months before the actual live date (date of full computerization). Problems in the existing distribution and record-keeping systems that might be eliminated by computerization were identified, and changes in the layout of the pharmacy and department procedures were initiated to prepare for computerization. The events leading to computerization are presented in chronological order, and the advantages and shortcomings of the system are discussed. Because of careful planning, the cooperation of all pharmacy staff members, and frequent assistance from the computer vender, the nine-month conversion to a computerized system proceeded smoothly.

  20. INTEGRATION OF ECONOMIC AND COMPUTER SKILLS AT IMPLEMENTATION OF STUDENTS PROJECT «BUSINESS PLAN PRODUCING IN MICROSOFT WORD»

    OpenAIRE

    Y.B. Samchinska

    2012-01-01

    The article substantiates the expedience of a complex student project in Informatics and Computer Science for students of economic specialities, based on the creation of a business plan using modern information technologies; methodological recommendations for the implementation of this project are also presented.

  1. Verifying the error bound of numerical computation implemented in computer systems

    Science.gov (United States)

    Sawada, Jun

    2013-03-12

    A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
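
    The segment-splitting strategy can be illustrated with a toy bound check. In the Python sketch below, an error polynomial on a non-negative domain is bounded term by term on each segment and the segments that exceed a target bound are reported; the coefficients, the 2^-50 target and the crude endpoint bound are illustrative stand-ins for the tool's rigorous formula manipulation.

```python
import numpy as np

ERROR_POLY = [0.0, 1e-17, -3e-18, 2e-19]   # toy coefficients c0 + c1*x + c2*x^2 + ...
TARGET = 2.0 ** -50                        # illustrative error budget

def term_bound(c, k, lo, hi):
    # For 0 <= lo <= hi, x**k is monotone, so c * x**k is extremal at an endpoint.
    return max(c * lo ** k, c * hi ** k)

def upper_bound(coeffs, lo, hi):
    return sum(term_bound(c, k, lo, hi) for k, c in enumerate(coeffs))

def verify(coeffs, lo, hi, n_segments):
    """Split [lo, hi] into segments and report those whose bound exceeds TARGET."""
    edges = np.linspace(lo, hi, n_segments + 1)
    return [(a, b) for a, b in zip(edges[:-1], edges[1:])
            if upper_bound(coeffs, a, b) > TARGET]

print(verify(ERROR_POLY, 0.0, 1.0, n_segments=8) or "all segments within the bound")
```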

  2. Capabilities and Advantages of Cloud Computing in the Implementation of Electronic Health Record.

    Science.gov (United States)

    Ahmadi, Maryam; Aslani, Nasim

    2018-01-01

    With regard to the high cost of the Electronic Health Record (EHR), the use of new technologies, in particular cloud computing, has increased in recent years. The purpose of this study was to systematically review the studies conducted in the field of cloud computing. The present study was a systematic review conducted in 2017. Searches were performed in the Scopus, Web of Science, IEEE, PubMed and Google Scholar databases using keyword combinations. From the 431 articles initially retrieved, 27 articles were selected for review after applying the inclusion and exclusion criteria. Data were gathered with a purpose-built checklist and analyzed using the content analysis method. The findings of this study showed that cloud computing is a very widespread technology. It covers domains such as cost, security and privacy, scalability, mutual performance and interoperability, implementation platform and independence of cloud computing, search and exploration capability, error reduction and quality improvement, structure, flexibility and sharing ability, and it can be effective for the electronic health record. According to the findings of the present study, the capabilities of cloud computing are useful in implementing EHR in a variety of contexts. It also provides wide opportunities for managers, analysts and providers of health information systems. Considering the advantages and domains of cloud computing in the establishment of EHR, it is recommended to use this technology.

  3. Implementation of natural frequency analysis and optimality criterion design. [computer technique for structural analysis

    Science.gov (United States)

    Levy, R.; Chai, K.

    1978-01-01

    A description is presented of an effective optimality criterion computer design approach for member size selection to improve frequency characteristics for moderately large structure models. It is shown that the implementation of the simultaneous iteration method within a natural frequency structural design optimization provides a method which is more efficient in isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by using previously converged eigenvectors at the start of the iterations during the second and the following design cycles. Vectors with random components can be used at the first design cycle, which, in relation to the entire computer time for the design program, results in only a moderate computational penalty.
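
    The simultaneous (subspace) iteration kernel referred to above can be sketched with dense linear algebra. The Python example below extracts the lowest generalised eigenpairs of K x = λ M x for a small spring-mass chain; random starting vectors are used here, whereas the design program reuses previously converged eigenvectors from the preceding design cycle.

```python
import numpy as np
from scipy.linalg import eigh, solve

def simultaneous_iteration(K, M, n_modes, n_iter=50, seed=0):
    """Subspace (simultaneous) iteration for the lowest eigenpairs of K x = lam * M x."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((K.shape[0], n_modes))   # random start; a design code
    for _ in range(n_iter):                          # would reuse converged vectors
        X = solve(K, M @ X)                          # inverse-iteration step
        vals, Q = eigh(X.T @ K @ X, X.T @ M @ X)     # Rayleigh-Ritz on the subspace
        X = X @ Q                                    # Ritz vectors, sorted ascending
        X /= np.linalg.norm(X, axis=0)
    return vals, X

# Tiny example: 5-DOF spring-mass chain with unit masses and stiffnesses.
K = 2.0 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
M = np.eye(5)
lam, modes = simultaneous_iteration(K, M, n_modes=2)
print(np.sqrt(lam))    # two lowest natural frequencies (rad/s)
```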

  4. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  5. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  6. Implementing the UCSD PASCAL system on the MODCOMP computer. [deep space network

    Science.gov (United States)

    Wolfe, T.

    1980-01-01

    The implementation of an interactive software development system (UCSD PASCAL) on the MODCOMP computer is discussed. The development of an interpreter for the MODCOMP II and the MODCOMP IV computers, written in MODCOMP II assembly language, is described. The complete Pascal programming system was run successfully on a MODCOMP II and MODCOMP IV under both the MAX II/III and MAX IV operating systems. The source code for an 8080 microcomputer version of the interpreter was used as the design for the MODCOMP interpreter. A mapping of the functions within the 8080 interpreter into MODCOMP II assembly language was the method used to code the interpreter.

  7. Portable tongue-supported human computer interaction system design and implementation.

    Science.gov (United States)

    Quain, Rohan; Khan, Masood Mehmood

    2014-01-01

    Tongue supported human-computer interaction (TSHCI) systems can help critically ill patients interact with both computers and people. These systems can be particularly useful for patients suffering injuries above C7 on their spinal vertebrae. Despite recent successes in their application, several limitations restrict performance of existing TSHCI systems and discourage their use in real life situations. This paper proposes a low-cost, less-intrusive, portable and easy to use design for implementing a TSHCI system. Two applications of the proposed system are reported. Design considerations and performance of the proposed system are also presented.

  8. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG) algorithm. The new method replaces the arctangent with the slope computation and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs), by considerably reducing the area (thus increasing the level of parallelism), while maintaining very close classification accuracy compared to the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
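
    One way to realise arctangent-free, interpolation-free binning, in the spirit of (though not identical to) the method above, is to decide the bin with one sign test per bin edge. The Python sketch below does this for a single cell with nine unsigned-orientation bins, assigning each pixel's whole magnitude to one bin; the bin count and cell size are illustrative and this is not the authors' FPGA implementation.

```python
import numpy as np

N_BINS = 9
EDGES = np.pi / N_BINS * np.arange(1, N_BINS)        # upper edges of bins 0..7
COS_E, SIN_E = np.cos(EDGES), np.sin(EDGES)

def hog_cell_histogram(gx, gy):
    """Orientation histogram of one cell without arctangent or interpolation.

    The bin of each pixel is the number of bin edges its (unsigned) gradient
    direction has passed, found by the sign of gy*cos(edge) - gx*sin(edge).
    """
    flip = gy < 0                                    # fold directions into [0, pi)
    gx, gy = np.where(flip, -gx, gx), np.where(flip, -gy, gy)
    mag = np.hypot(gx, gy)
    passed = (gy[..., None] * COS_E - gx[..., None] * SIN_E) >= 0
    bins = passed.sum(axis=-1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=N_BINS)

rng = np.random.default_rng(2)
gx, gy = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(hog_cell_histogram(gx, gy))
```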

  9. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    Science.gov (United States)

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.

  10. Implementation of SSYST-1 on the GRS computer and first verification calculations

    International Nuclear Information System (INIS)

    Schubert, J.D.; Ullrich, R.

    1981-09-01

    The program system SSYST-1, developed in Karlsruhe, has been implemented on the AMDAHL computer together with special modules for eccentric stress and probabilistic analysis. First computations for the REBEKA-3 experiment and other test examples, carried out to verify the new implementation, showed satisfactory results, in particular good agreement with measurements for the instant of bursting, the bursting temperature and the temperature difference around the periphery. Initial difficulties arose from using the module for circumferentially varying stress and temperature analyses; this module is intended for the program SSYST-2, so its use in SSYST-1 led to interface problems, which have since been resolved. (orig.) [de

  11. Computational Fluid Dynamics Simulation of Combustion Instability in Solid Rocket Motor : Implementation of Pressure Coupled Response Function

    OpenAIRE

    S. Saha; D. Chakraborty

    2016-01-01

    Combustion instability in a solid propellant rocket motor is numerically simulated by implementing the propellant response function with a quasi-steady homogeneous one-dimensional formulation. The convolution integral of the propellant response with the pressure history is implemented through a user-defined function in commercial computational fluid dynamics software. The methodology is validated against motor tests reported in the literature and other simulation results. Computed amplitude of pressure fluctuations ...

  12. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes.

  13. Design and implementation of the one-step MSD adder of optical computer.

    Science.gov (United States)

    Song, Kai; Yan, Liping

    2012-03-01

    On the basis of the symmetric encoding algorithm for the modified signed-digit (MSD) number system, a 7×7 truth table that can be realized with optical methods was developed. Based on this truth table, the optical path structures and circuit implementations of the one-step MSD adder of a ternary optical computer (TOC) were designed. Experiments show that the scheme is correct, feasible, and efficient. © 2012 Optical Society of America

  14. IMPLEMENTING THE COMPUTER-BASED NATIONAL EXAMINATION IN INDONESIAN SCHOOLS: THE CHALLENGES AND STRATEGIES

    Directory of Open Access Journals (Sweden)

    Heri Retnawati

    2017-12-01

    Full Text Available In line with technological development, the computer-based national examination (CBNE) has become an urgent matter as its implementation faces various challenges, especially in developing countries. Strategies in implementing CBNE are thus needed to face the challenges. The aim of this research was to analyse the challenges and strategies of Indonesian schools in implementing CBNE. This research was qualitative phenomenological in nature. The data were collected through a questionnaire and a focus group discussion. The research participants were teachers who were test supervisors and technicians at junior high schools and senior high schools (i.e. Levels 1 and 2) and vocational high schools implementing CBNE in Yogyakarta, Indonesia. The data were analysed using the Bogdan and Biklen model. The results indicate that (1) in implementing CBNE, the schools should initially make efforts to provide the electronic equipment supporting it; (2) the implementation of CBNE is challenged by problems concerning the Internet and the electricity supply; (3) the test supervisors have to learn their duties by themselves; and (4) the students are not yet familiar with the beneficial use of information technology. To deal with such challenges, the schools employed strategies by making efforts to provide the standard electronic equipment through collaboration with the students’ parents and improving the curriculum content by adding information technology as a school subject.

  15. Implementation of audio computer-assisted interviewing software in HIV/AIDS research.

    Science.gov (United States)

    Pluhar, Erika; McDonnell Holstad, Marcia; Yeager, Katherine A; Denzmore-Nwagbara, Pamela; Corkran, Carol; Fielder, Bridget; McCarty, Frances; Diiorio, Colleen

    2007-01-01

    Computer-assisted interviewing (CAI) has begun to play a more prominent role in HIV/AIDS prevention research. Despite the increased popularity of CAI, particularly audio computer-assisted self-interviewing (ACASI), some research teams are still reluctant to implement ACASI technology because of lack of familiarity with the practical issues related to using these software packages. The purpose of this report is to describe the implementation of one particular ACASI software package, the Questionnaire Development System (QDS; Nova Research Company, Bethesda, MD), in several nursing and HIV/AIDS prevention research settings. The authors present acceptability and satisfaction data from two large-scale public health studies in which they have used QDS with diverse populations. They also address issues related to developing and programming a questionnaire; discuss practical strategies related to planning for and implementing ACASI in the field, including selecting equipment, training staff, and collecting and transferring data; and summarize advantages and disadvantages of computer-assisted research methods.

  16. Implementation of G-Computation on a Simulated Data Set: Demonstration of a Causal Inference Technique

    Science.gov (United States)

    Snowden, Jonathan M.; Rose, Sherri; Mortimer, Kathleen M.

    2011-01-01

    The growing body of work in the epidemiology literature focused on G-computation includes theoretical explanations of the method but very few simulations or examples of application. The small number of G-computation analyses in the epidemiology literature relative to other causal inference approaches may be partially due to a lack of didactic explanations of the method targeted toward an epidemiology audience. The authors provide a step-by-step demonstration of G-computation that is intended to familiarize the reader with this procedure. The authors simulate a data set and then demonstrate both G-computation and traditional regression to draw connections and illustrate contrasts between their implementation and interpretation relative to the truth of the simulation protocol. A marginal structural model is used for effect estimation in the G-computation example. The authors conclude by answering a series of questions to emphasize the key characteristics of causal inference techniques and the G-computation procedure in particular. PMID:21415029
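
    As a companion to the record above, here is a minimal G-computation sketch on simulated data. It is my own illustration, not the authors' protocol, and it assumes a linear data-generating mechanism: fit an outcome regression, then average predictions with the exposure set to 1 and to 0 for everyone, and contrast the two means.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 10_000

    # simulated data: W confounds the exposure A and the outcome Y (true effect of A is 2.0)
    W = rng.normal(size=n)
    A = rng.binomial(1, 1.0 / (1.0 + np.exp(-W)))
    Y = 1.0 + 2.0 * A + 1.5 * W + rng.normal(size=n)
    df = pd.DataFrame({"W": W, "A": A, "Y": Y})

    # step 1: fit the outcome regression
    q_model = smf.ols("Y ~ A + W", data=df).fit()

    # step 2: predict counterfactual outcomes with A set to 1 and to 0 for everyone
    y1 = q_model.predict(df.assign(A=1))
    y0 = q_model.predict(df.assign(A=0))

    # step 3: the marginal (population-averaged) effect is the contrast of the means
    print("G-computation estimate:", y1.mean() - y0.mean())   # close to 2.0
    print("Naive unadjusted difference:",
          df.loc[df.A == 1, "Y"].mean() - df.loc[df.A == 0, "Y"].mean())  # biased by W
    ```

    With a linear outcome model the G-computation estimate coincides with the conditional coefficient on A; the two generally differ when the outcome model is nonlinear (e.g., logistic), which is where the marginal interpretation matters.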

  17. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    Directory of Open Access Journals (Sweden)

    Ronnie Cheung

    2011-06-01

    Full Text Available We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer literacy projects. The completed assignments, projects, and self-reflection reports demonstrate that the students were able to achieve the learning outcomes of a computer literacy course in multimedia development. The students were able to assess the effectiveness of a variety of media through the development of media presentations in a web-based, social-networking environment. In the collaborative and social-networking environment, students were able to collaborate and communicate with their team members to solve problems, resolve conflicts, make decisions, and work as a team to complete tasks. Our experience has shown that social networking environments are effective for computer literacy education, and the development of the new media is emerging as the core knowledge for computer literacy education.

  18. Implementation of a solution Cloud Computing with MapReduce model

    International Nuclear Information System (INIS)

    Baya, Chalabi

    2014-01-01

    In recent years, large-scale computer systems have emerged to meet the demands of high storage, supercomputing, and applications using very large data sets. The emergence of Cloud Computing offers the potential for analysis and processing of large data sets. MapReduce is the most popular programming model used to support the development of such applications. It was initially designed by Google for building large datacenters on a large scale, to provide Web search services with rapid response and high availability. In this paper we test the K-means clustering algorithm in a Cloud Computing environment. This algorithm is implemented on MapReduce. It has been chosen for its characteristics that are representative of many iterative data analysis algorithms. Then, we modify the CloudSim framework to simulate the MapReduce execution of K-means clustering on different Cloud Computing deployments, depending on their size and the characteristics of the target platforms. The experiments show that the implementation of K-means clustering gives good results, especially for large data sets, and that the Cloud infrastructure has an influence on these results.
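
    To make the MapReduce formulation of K-means concrete, here is a small single-process sketch (an illustration only; it does not reproduce the paper's CloudSim experiments). Each map call emits (nearest-centroid, point) pairs and the reduce step averages the points assigned to each centroid; one map/reduce round corresponds to one K-means iteration.

    ```python
    import numpy as np
    from collections import defaultdict

    def map_step(points, centroids):
        """Emit (centroid_index, (point, 1)) for every input point."""
        for p in points:
            j = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
            yield j, (p, 1)

    def reduce_step(pairs, old_centroids):
        """Average the points assigned to each centroid (the reducer)."""
        acc = defaultdict(lambda: (0.0, 0))
        for j, (p, c) in pairs:
            s, n = acc[j]
            acc[j] = (s + p, n + c)
        new = old_centroids.copy()
        for j, (s, n) in acc.items():
            new[j] = s / n
        return new

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(loc, 0.3, size=(200, 2)) for loc in ((0, 0), (3, 3), (0, 3))])
    centroids = data[rng.choice(len(data), size=3, replace=False)]

    for _ in range(10):                                  # fixed number of MapReduce rounds
        centroids = reduce_step(map_step(data, centroids), centroids)
    print(centroids)
    ```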

  19. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large-scale state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.

  20. An efficient hysteresis modeling methodology and its implementation in field computation applications

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)

    2017-07-15

    Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing the ensemble of triangular sub-region hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended to 3D tetrahedral sub-volumes. Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNN) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach to discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.

  1. Implementation of a Message Passing Interface into a Cloud-Resolving Model for Massively Parallel Computing

    Science.gov (United States)

    Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve

    2004-01-01

    The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI uses the concept of maintaining similar code structure between the whole domain as well as the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). Also, it provides for minimal changes to the original code, so it is easily modified and/or managed by the model developers and users who have little knowledge of MPP. The entire model domain could be sliced into one- or two-dimensional decomposition with a halo regime, which is overlaid on partial domains. The halo regime requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for spectral transform (Fast Fourier Transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions have speedups of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computations than that of the compressible version; 3) equal or approximately equal numbers of slices between the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation for computation.
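
    The halo-exchange pattern described above (no cross-task data fetches during the computational stage, followed by an MPI update before the next stage) can be sketched with mpi4py as follows. This is a generic 1-D slab decomposition of a diffusion-like update, not the GCE model itself; the array sizes and the update rule are placeholders.

    ```python
    import numpy as np
    from mpi4py import MPI    # run with e.g.: mpiexec -n 4 python halo_demo.py

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    nx_local = 100                     # interior cells owned by this task (1-D slab decomposition)
    u = np.zeros(nx_local + 2)         # two extra halo cells, one on each side
    u[1:-1] = float(rank)              # placeholder initial data

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(10):
        # halo update via MPI before the next computational stage
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        # purely local update on the interior cells, using the freshly received halo values
        u[1:-1] += 0.1 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    ```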

  2. Design and implementation of a support platform for distributed mobile computing

    Science.gov (United States)

    Schill, A.; Kummel, S.

    1995-09-01

    With the rapid development of mobile computer systems and mobile communication infrastructures, a broad field of distributed mobile computing is enabled. The paper first discusses these developments in closer detail and summarizes the resulting requirements concerning adequate software support. An application scenario of the service engineering area illustrates specific aspects including bandwidth and location management, dynamic configuration, resource heterogeneity, disconnection, and security. Based on these considerations, a generic software support platform for distributed mobile computing is derived. It addresses several of these aspects by providing application-independent and reusable support services. In particular, it offers a framework for organizing distributed mobile applications into manageable domains, it equips mobile stations with enhanced functionality for location, resource and bandwidth management, and it uses industry standard RPC communication facilities for enhanced portability. The design, implementation and use of the support platform is illustrated based on a specific part of the application, a mobile multimedia e-mail system. Experiences and implementation aspects in this context are particularly emphasized.

  3. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.

    Science.gov (United States)

    Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka

    2015-01-01

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible.

  4. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.

    Directory of Open Access Journals (Sweden)

    Iñaki Bildosola

    Full Text Available Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible.

  5. Computer implemented empirical mode decomposition method apparatus, and article of manufacture utilizing curvature extrema

    Science.gov (United States)

    Huang, Norden Eh (Inventor); Shen, Zheng (Inventor)

    2003-01-01

    A computer-implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.

  6. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    Science.gov (United States)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
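
    For readers unfamiliar with the sifting procedure behind the Empirical Mode Decomposition described in the two patent records above, the following is a bare-bones sketch: local extrema only, cubic-spline envelopes, and a crude stopping rule. It omits the curvature-extrema refinement, end-effect handling and the standard stopping criteria of the actual method, so treat it as an illustration of the structure only.

    ```python
    import numpy as np
    from scipy.signal import argrelextrema, hilbert
    from scipy.interpolate import CubicSpline

    def sift(x, t, max_sift=50):
        """Extract one Intrinsic Mode Function from x(t) by repeated sifting."""
        h = x.copy()
        for _ in range(max_sift):
            maxima = argrelextrema(h, np.greater)[0]
            minima = argrelextrema(h, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:        # not enough extrema to spline
                break
            upper = CubicSpline(t[maxima], h[maxima])(t)  # upper envelope
            lower = CubicSpline(t[minima], h[minima])(t)  # lower envelope
            mean = 0.5 * (upper + lower)
            h = h - mean                                  # candidate IMF after this pass
            if np.mean(mean**2) < 1e-6 * np.mean(h**2):   # crude stopping rule
                break
        return h

    def emd(x, t, n_imfs=4):
        imfs, residual = [], x.copy()
        for _ in range(n_imfs):
            imf = sift(residual, t)
            imfs.append(imf)
            residual = residual - imf
        return imfs, residual

    t = np.linspace(0, 1, 2000)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    imfs, res = emd(x, t)

    # instantaneous frequency of the first IMF via the Hilbert transform (the method's second step)
    phase = np.unwrap(np.angle(hilbert(imfs[0])))
    inst_freq = np.gradient(phase, t) / (2 * np.pi)
    ```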

  7. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  8. COMPUTER EVALUATION OF SKILLS FORMATION QUALITY IN THE IMPLEMENTATION OF COMPETENCE-BASED APPROACH TO LEARNING

    Directory of Open Access Journals (Sweden)

    Vitalia A. Zhuravleva

    2014-01-01

    Full Text Available The article deals with the problem of effectively organising the formation of skills as an important part of the competence-based approach in education, implemented via the educational standards of the new generation. To address the problem, computer tools are used to assess the quality of skills and abilities formation, based on the proposed model of the problem. The paper proposes an approach to creating a model for assessing the level of skills formation in knowledge management systems based on mathematical modelling methods. Attention is paid to the evaluation strategy and the assessment technology, which is based on the rules of fuzzy mathematics. An algorithmic implementation of the proposed model for evaluating the quality of skills development is shown as well.

  9. Dual-Energy Computed Tomography: Physical Principles, Approaches to Scanning, Usage, and Implementation: Part 1.

    Science.gov (United States)

    Forghani, Reza; De Man, Bruno; Gupta, Rajiv

    2017-08-01

    There are increasing applications of dual-energy computed tomography (CT), a type of spectral CT, in neuroradiology and head and neck imaging. In this 2-part review, the fundamental principles underlying spectral CT scanning and the major considerations in implementing this type of scanning in clinical practice are reviewed. In the first part of this 2-part review, the physical principles underlying spectral CT scanning are reviewed, followed by an overview of the different approaches for spectral CT scanning, including a discussion of the strengths and challenges encountered with each approach. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    Energy Technology Data Exchange (ETDEWEB)

    Ghrayeb, S. Z. [Dept. of Mechanical and Nuclear Engineering, Pennsylvania State Univ., 230 Reber Building, Univ. Park, PA 16802 (United States); Ouisloumen, M. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States); Ougouag, A. M. [Idaho National Laboratory, MS-3860, PO Box 1625, Idaho Falls, ID 83415 (United States); Ivanov, K. N.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  11. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    International Nuclear Information System (INIS)

    Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.; Ivanov, K. N.

    2012-01-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  12. Sensory System for Implementing a Human—Computer Interface Based on Electrooculography

    Directory of Open Access Journals (Sweden)

    Sergio Ortega

    2010-12-01

    Full Text Available This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes.

  13. A Computationally Efficient and Robust Implementation of the Continuous-Discrete Extended Kalman Filter

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Thomsen, Per Grove; Madsen, Henrik

    2007-01-01

    We present a novel numerically robust and computationally efficient extended Kalman filter for state estimation in nonlinear continuous-discrete stochastic systems. The resulting differential equations for the mean-covariance evolution of the nonlinear stochastic continuous-discrete time systems … for nonlinear stochastic continuous-discrete time systems is more than two orders of magnitude faster than a conventional implementation. This is of significance in nonlinear model predictive control applications, statistical process monitoring as well as grey-box modelling of systems described by stochastic …
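
    The core of a continuous-discrete EKF as summarised above is the joint integration of the mean and covariance ODEs between sampling instants, followed by a standard discrete measurement update. Below is a naive sketch for an assumed linear-drift example (so the Jacobian is constant); it only shows the structure, not the paper's efficient and numerically robust formulation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # assumed example system: dx/dt = F x + process noise, y_k = H x_k + v_k
    F = np.array([[0.0, 1.0], [-1.0, -0.2]])   # drift matrix (also the Jacobian here)
    Q = 0.01 * np.eye(2)                        # process noise intensity
    H = np.array([[1.0, 0.0]])                  # only the first state is measured
    R = np.array([[0.05]])

    def mean_cov_ode(t, z):
        """Joint ODE for the state mean x and covariance P between measurements."""
        x, P = z[:2], z[2:].reshape(2, 2)
        dx = F @ x
        dP = F @ P + P @ F.T + Q
        return np.concatenate([dx, dP.ravel()])

    def cd_ekf_step(x, P, y, dt):
        # time update: integrate mean/covariance over one sampling interval
        z0 = np.concatenate([x, P.ravel()])
        z = solve_ivp(mean_cov_ode, (0.0, dt), z0, rtol=1e-8).y[:, -1]
        x, P = z[:2], z[2:].reshape(2, 2)
        # measurement update: ordinary discrete Kalman correction
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.array([1.0, 0.0]), np.eye(2)
    x, P = cd_ekf_step(x, P, y=np.array([0.9]), dt=0.1)
    ```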

  14. Sensory system for implementing a human-computer interface based on electrooculography.

    Science.gov (United States)

    Barea, Rafael; Boquete, Luciano; Rodriguez-Ascariz, Jose Manuel; Ortega, Sergio; López, Elena

    2011-01-01

    This paper describes a sensory system for implementing a human-computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes.
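
    The processing chain described in this and the preceding record (continuous wavelet transform followed by a neural network) can be prototyped offline in a few lines. The sketch below uses PyWavelets and scikit-learn on synthetic signals; the sampling rate, scales, and segment shapes are my own assumptions, and the embedded Linux implementation of the paper is not reproduced here.

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    FS = 250                      # assumed sampling rate (Hz)
    SCALES = np.arange(2, 32)     # CWT scales used as features

    def cwt_features(segment):
        """Mean absolute CWT coefficient per scale for one EOG segment."""
        coeffs, _ = pywt.cwt(segment, SCALES, "morl", sampling_period=1.0 / FS)
        return np.abs(coeffs).mean(axis=1)

    def synth_segment(kind, n=FS):
        t = np.linspace(0, 1, n)
        blink = np.exp(-((t - 0.5) ** 2) / 0.002)   # blink-like transient
        drift = 0.5 * t                              # saccade-like drift
        base = blink if kind == 1 else drift
        return base + 0.05 * np.random.randn(n)

    X = np.array([cwt_features(synth_segment(k)) for k in (0, 1) * 50])
    y = np.array([0, 1] * 50)

    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```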

  15. EXPERIMENTAL AND THEORETICAL FOUNDATIONS AND PRACTICAL IMPLEMENTATION OF TECHNOLOGY BRAIN-COMPUTER INTERFACE

    Directory of Open Access Journals (Sweden)

    A. Ya. Kaplan

    2013-01-01

    Full Text Available Brain-computer interface (BCI) technology allows a person to learn how to control external devices via the voluntary regulation of their own EEG, directly from the brain, without the involvement of nerves and muscles in the process. Initially, the main goal of BCI was to replace or restore motor function in people disabled by neuromuscular disorders. Currently, the scope of BCI design has broadened significantly, capturing more and more aspects of the life of a healthy person. This article discusses the theoretical, experimental and technological basis of BCI development and systematizes the critical fields for real-world implementation of these technologies.

  16. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing will be involved in this framework: multiple local distributed computing environments connected by local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters and connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable to perform the simulation of the multi-scale structural analysis.

  17. Implementation of distributed computing system for emergency response and contaminant spill monitoring

    International Nuclear Information System (INIS)

    Ojo, T.O.; Sterling, M.C.Jr.; Bonner, J.S.; Fuller, C.B.; Kelly, F.; Page, C.A.

    2003-01-01

    The availability and use of real-time environmental data greatly enhances emergency response and spill monitoring in coastal and near shore environments. The data would include surface currents, wind speed, wind direction, and temperature. Model predictions (fate and transport) or forensics can also be included. In order to achieve an integrated system suitable for application in spill or emergency response situations, a link is required because this information exists on many different computing platforms. When real-time measurements are needed to monitor a spill, the use of a wide array of sensors and ship-based post-processing methods help reduce the latency in data transfer between field sampling stations and the Incident Command Centre. The common thread linking all these modules is the Transmission Control Protocol/Internet Protocol (TCP/IP), and the result is an integrated distributed computing system (DCS). The in-situ sensors are linked to an onboard computer through the use of a ship-based local area network (LAN) using a submersible device server. The onboard computer serves as both the data post-processor and communications server. It links the field sampling station with other modules, and is responsible for transferring data to the Incident Command Centre. This link is facilitated by a wide area network (WAN) based on wireless broadband communications facilities. This paper described the implementation of the DCS. The test results for the communications link and system readiness were also included. 6 refs., 2 tabs., 3 figs
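
    The TCP/IP link that ties the ship-based LAN to the Incident Command Centre can be reduced, at its simplest, to pushing post-processed sensor records over a socket. The snippet below is a generic illustration with a made-up host, port and record format; it is not the system described in the record.

    ```python
    import json
    import socket

    # hypothetical ingest endpoint at the Incident Command Centre
    COMMAND_CENTRE = ("192.0.2.10", 5000)

    def send_observation(record: dict) -> None:
        """Push one post-processed sensor record over TCP/IP as newline-delimited JSON."""
        with socket.create_connection(COMMAND_CENTRE, timeout=5) as sock:
            sock.sendall((json.dumps(record) + "\n").encode("utf-8"))

    send_observation({"station": "ship-1", "wind_speed_ms": 7.2,
                      "wind_dir_deg": 215, "surface_current_ms": 0.4,
                      "water_temp_c": 18.6})
    ```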

  18. New Computationally Cost-Effective Implementation of Online Nesting for a Regional Model

    Science.gov (United States)

    Yoshida, R.; Yamaura, T.; Adachi, S. A.; Nishizawa, S.; Yashiro, H.; Sato, Y.; Tomita, H.

    2015-12-01

    A new cost-effective implementation of online nesting is developed to improve computational performance, which is as important as physical performance in numerical weather prediction and regional climate experiments. For a down-scaling experiment, a nesting system is an indispensable component. Online nesting has merits over offline nesting in the update interval of the boundary data and in not requiring intermediate files. However, the computational efficiency of online nesting has not been evaluated much. In the conventional implementation (CVI) of online nesting, the MPI processes are arranged as a single group, and the group manages all of the nested domains. In the new implementation (NWI), the MPI processes are divided into several groups, and each process group is assigned to one domain. Ideally, there are therefore almost no idling processes. In addition, the outer domain calculation can be overlapped with the inner domain calculation. The elapsed time of data transfer from the outer domain to the inner domain can also be hidden behind the inner domain calculation by appropriate assignment of the processes. We applied the NWI to the SCALE model (Nishizawa et al. 2015), which is a regional weather prediction model developed by RIKEN AICS. We evaluated the computational performance of the NWI in a double-nested experiment on the K computer. The grid numbers (x,y,z) were set as (120, 108, 40) for the outer domain with 7.5 km horizontal grid spacing, and (180, 162, 60) for the inner domain with 2.5 km horizontal grid spacing. For the calculation, 90 processes were used in both the CVI and the NWI. In the NWI, the MPI processes were divided into two groups assigned to the outer and the inner domains: 9 and 81 processes, respectively. The computational performance improved by a factor of 1.2 in the NWI compared to the CVI. The benefit of the NWI could become larger when domains are multiply nested.
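
    The key ingredient of the new implementation (NWI), dividing the MPI processes into per-domain groups, corresponds to splitting the world communicator. A minimal mpi4py sketch of such a split is shown below; the 9/81 split heuristic, the leader-to-leader exchange and the message content are simplified assumptions of mine, not the SCALE code.

    ```python
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank, size = world.Get_rank(), world.Get_size()

    # e.g. with 90 tasks: the first 9 handle the outer domain, the remaining 81 the inner one
    n_outer = max(1, size // 10)
    color = 0 if rank < n_outer else 1           # 0 = outer-domain group, 1 = inner-domain group
    domain_comm = world.Split(color, key=rank)   # per-domain communicator for the model solver

    # boundary data exchanged between the two group leaders through the world communicator
    if domain_comm.Get_rank() == 0:
        partner = n_outer if color == 0 else 0   # world rank of the other domain's leader
        received = world.sendrecv({"step": 0, "boundary": "..."}, dest=partner, source=partner)
    ```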

  19. Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw

    Science.gov (United States)

    Angelakis, D.; Zoumis, S.; Asvestas, P.

    2017-11-01

    The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides the capability of simultaneous recording of 14 EEG channels, as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application implemented under the Processing integrated development environment. The application acquires the data from the headset and issues suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals to a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.
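
    The last hop in the pipeline above, the personal computer invoking commands on the Arduino Uno that drives the claw servo, typically goes over a serial link. The fragment below is a hypothetical pySerial sketch: the port name, baud rate and single-character command protocol are assumptions, since the record does not specify them.

    ```python
    import serial  # pySerial

    # hypothetical port and protocol: the Arduino sketch is assumed to map
    # b'O' -> open the claw and b'C' -> close the claw
    arduino = serial.Serial("/dev/ttyACM0", baudrate=9600, timeout=1)

    def actuate_claw(mental_command: str) -> None:
        """Translate a decoded mental command from the headset into a servo action."""
        arduino.write(b"O" if mental_command == "open" else b"C")

    actuate_claw("open")
    ```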

  20. TOWARDS IMPLEMENTATION OF THE FOG COMPUTING CONCEPT INTO THE GEOSPATIAL DATA INFRASTRUCTURES

    Directory of Open Access Journals (Sweden)

    E. A. Panidi

    2016-01-01

    Full Text Available Information technologies, and Global Network technologies in particular, are developing very quickly. Accordingly, the problem of incorporating these general-purpose technologies into information systems that operate with geospatial data remains relevant. The paper discusses the implementation feasibility of a number of new approaches and concepts that solve the problems of spatial data publication and management on the Global Network. A brief review describes some contemporary concepts and technologies used for distributed data storage and management, which provide combined use of server-side and client-side resources. In particular, the concepts of Cloud Computing, Fog Computing, and the Internet of Things, as well as Java Web Start, WebRTC and WebTorrent technologies, are mentioned. The author's experience is described briefly, covering a number of projects devoted to the development of portable solutions for geospatial data and GIS software publication on the Global Network.

  1. Framework and implementation for improving physics essential skills via computer-based practice: Vector math

    Science.gov (United States)

    Mikula, Brendon D.; Heckler, Andrew F.

    2017-06-01

    We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with a careful identification of target skills and the study of specific student difficulties with these skills. It then employs computer-based instruction, immediate feedback, mastery grading, and well-researched principles from cognitive psychology such as interleaved training sequences and distributed practice. We implemented this with more than 1500 students over 2 semesters. Students completed the mastery practice for an average of about 13 min/week, for a total of about 2-3 h for the whole semester. Results reveal large (>1 SD) pretest to post-test gains in accuracy in vector skills, even compared to a control group, and these gains were retained at least 2 months after practice. We also find evidence of improved fluency, student satisfaction, and that awarding regular course credit results in higher participation and higher learning gains than awarding extra credit. In all, we find that simple computer-based mastery practice is an effective and efficient way to improve a set of basic and essential skills for introductory physics.

  2. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    Science.gov (United States)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  3. Computational implementation of a tunable multicellular memory circuit for engineered eukaryotic consortia

    Directory of Open Access Journals (Sweden)

    Josep Sardanyés

    2015-10-01

    Full Text Available Cells are complex machines capable of processing information by means of an entangled network of molecular interactions. A crucial component of these decision-making systems is the presence of memory, and this is also an especially relevant target of engineered synthetic systems. A classic example of a memory device is a 1-bit memory element known as the flip-flop. Such a system can in principle be designed using a single-cell implementation, but a direct mapping between standard circuit design and a living circuit can be cumbersome. Here we present a novel computational implementation of a 1-bit memory device using a reliable multicellular design able to behave as a set-reset flip-flop that could be implemented in yeast cells. The dynamics of the proposed synthetic circuit is investigated with a mathematical model using biologically meaningful parameters. The circuit is shown to behave as a flip-flop in a wide range of parameter values. The repression strength for the NOT logics is shown to be crucial to obtain a good flip-flop signal. Our model also shows that the circuit can be externally tuned to achieve different memory states and dynamics, such as persistent and transient memory. We have characterised the parameter domains for robust memory storage and retrieval as well as the corresponding time response dynamics.
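
    To give a flavour of how a genetic set-reset flip-flop can be modelled with ODEs, here is a deliberately simplified single-cell toggle-switch caricature with transient SET/RESET inputs. The parameter values and pulse protocol are invented for illustration and do not reproduce the multicellular yeast circuit of the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha, n_hill = 10.0, 2.0                      # maximal production and repression cooperativity

    def set_pulse(t):                              # transient SET input (external inducer)
        return 5.0 if 10.0 <= t <= 15.0 else 0.0

    def reset_pulse(t):                            # transient RESET input
        return 5.0 if 40.0 <= t <= 45.0 else 0.0

    def toggle(t, z):
        u, v = z                                   # the two mutually repressing species
        du = alpha / (1.0 + v**n_hill) + set_pulse(t) - u
        dv = alpha / (1.0 + u**n_hill) + reset_pulse(t) - v
        return [du, dv]

    sol = solve_ivp(toggle, (0.0, 80.0), [0.1, 5.0], max_step=0.1)
    stored_bit = sol.y[0] > sol.y[1]               # True while the SET state is being remembered
    print(stored_bit[sol.t < 10][-1],              # before SET: False
          stored_bit[(sol.t > 20) & (sol.t < 40)][-1],  # after SET, before RESET: True
          stored_bit[-1])                          # after RESET: False again
    ```

    The bistability of the mutual-repression pair is what stores the bit; the transient inputs only push the system from one attractor to the other, which is the set-reset behaviour the record describes.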

  4. A META-MODELLING SERVICE PARADIGM FOR CLOUD COMPUTING AND ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    F. Cheng

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Service integrators seek opportunities to align the way they manage resources in the service supply chain. Many business organisations can operate new, more flexible business processes that harness the value of a service approach from the customer’s perspective. As a relatively new concept, cloud computing and related technologies have rapidly gained momentum in the IT world. This article seeks to shed light on service supply chain issues associated with cloud computing by examining several interrelated questions: service supply chain architecture from a service perspective; the basic clouds of service supply chain; managerial insights into these clouds; and the commercial value of implementing cloud computing. In particular, to show how those services can be used, and involved in their utilisation processes, a hypothetical meta-modelling service of cloud computing is proposed. Moreover, the paper defines the managed cloud architecture for a service vendor or service integrator in the cloud computing infrastructure in the service supply chain: IT services, business services, business processes, which create atomic and composite software services that are used to perform business processes with business service choreographies.


  5. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    Energy Technology Data Exchange (ETDEWEB)

    Pieper, Andreas [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Kreutzer, Moritz [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany); Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Galgon, Martin [Bergische Universität Wuppertal (Germany); Fehske, Holger [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Hager, Georg [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany); Lang, Bruno [Bergische Universität Wuppertal (Germany); Wellein, Gerhard [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)

    2016-11-15

    We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
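
    The essence of the method in the record above, approximating the projector onto an interior spectral window [a, b] by a Chebyshev polynomial in the matrix and then performing a Rayleigh-Ritz step, can be written compactly for small test matrices. The sketch below is a serial NumPy/SciPy toy with Jackson damping and a single filtering pass; the polynomial degree, block size and test matrix are arbitrary choices of mine, and it is not the authors' high-performance implementation. Accuracy depends strongly on the degree and the search-space size.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    def cheb_filter_eig(H, a, b, degree=400, block=24, seed=0):
        """Approximate eigenpairs of the symmetric matrix H with eigenvalues in [a, b]."""
        n = H.shape[0]
        lmin = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
        lmax = eigsh(H, k=1, which="LA", return_eigenvectors=False)[0]
        c, e = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)          # map the spectrum to [-1, 1]
        Hs = (H - c * sp.identity(n)) / e
        ta, tb = np.arccos((a - c) / e), np.arccos((b - c) / e)   # window in the angle variable

        # Chebyshev coefficients of the indicator function of the window
        k = np.arange(1, degree + 1)
        coef = np.concatenate([[(ta - tb) / np.pi],
                               2.0 * (np.sin(k * ta) - np.sin(k * tb)) / (k * np.pi)])

        # Jackson damping to suppress Gibbs oscillations of the truncated expansion
        kk = np.arange(0, degree + 1)
        g = ((degree - kk + 1) * np.cos(np.pi * kk / (degree + 1))
             + np.sin(np.pi * kk / (degree + 1)) / np.tan(np.pi / (degree + 1))) / (degree + 1)
        coef = coef * g

        rng = np.random.default_rng(seed)
        V = rng.standard_normal((n, block))
        T_prev, T_cur = V, Hs @ V                                 # three-term recurrence
        W = coef[0] * T_prev + coef[1] * T_cur
        for ck in coef[2:]:
            T_prev, T_cur = T_cur, 2.0 * (Hs @ T_cur) - T_prev
            W += ck * T_cur

        Q, _ = np.linalg.qr(W)                                    # filtered search space
        theta, Y = np.linalg.eigh(Q.T @ (H @ Q))                  # Rayleigh-Ritz step
        keep = (theta >= a) & (theta <= b)
        return theta[keep], Q @ Y[:, keep]

    # toy test: interior eigenvalues of a 1-D Laplacian-like sparse matrix
    H = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(400, 400)).tocsc()
    vals, vecs = cheb_filter_eig(H, a=1.9, b=2.1)
    print(np.sort(vals))
    ```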

  6. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  7. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Kissel, L

    2009-04-01

    was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: (1) Robust Tools - Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements; (2) Prediction through Simulation - Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile; and (3) Balanced Operational Infrastructure - Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  8. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Perry, J; McCoy, M; Hopson, J

    2008-04-30

    that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2--Prediction through Simulation. Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3--Balanced Operational Infrastructure. Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  9. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    International Nuclear Information System (INIS)

    McCoy, M; Kusnezov, D; Bikkel, T; Hopson, J

    2007-01-01

    that was very successful in delivering an initial capability to one that is integrated and focused on requirements driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities

  10. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2--Prediction through Simulation. Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  11. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    from one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: (1) Robust Tools - Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements; (2) Prediction through Simulation - Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile; and (3) Balanced Operational Infrastructure - Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  12. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    from one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1 - Robust Tools. Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2 - Prediction through Simulation. Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3 - Balanced Operational Infrastructure. Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  13. Introductory Molecular Orbital Theory: An Honors General Chemistry Computational Lab as Implemented Using Three-Dimensional Modeling Software

    Science.gov (United States)

    Ruddick, Kristie R.; Parrill, Abby L.; Petersen, Richard L.

    2012-01-01

    In this study, a computational molecular orbital theory experiment was implemented in a first-semester honors general chemistry course. Students used the GAMESS (General Atomic and Molecular Electronic Structure System) quantum mechanical software (as implemented in ChemBio3D) to optimize the geometry for various small molecules. Extended Huckel…

  14. INTEGRATION OF ECONOMIC AND COMPUTER SKILLS AT IMPLEMENTATION OF STUDENTS PROJECT «BUSINESS PLAN PRODUCING IN MICROSOFT WORD»

    Directory of Open Access Journals (Sweden)

    Y.B. Samchinska

    2012-07-01

    Full Text Available The article substantiates the expedience of having students of economic specialities carry out a complex project in Informatics and Computer Science devoted to creating a business plan with modern information technologies, and presents methodical recommendations for the implementation of this project.

  15. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    International Nuclear Information System (INIS)

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-01-01

    Smoothed particle hydrodynamics (SPH), a class of meshfree particle methods (MPMs), has a wide range of applications from micro-scale to macro-scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation. Particle systems require a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Compared to the corresponding CPU implementation, our experimental results show that the new approach achieves significant speedups of fluid simulation by handling the huge amount of computation in parallel on graphics hardware.

  16. Implementation of Constrained DFT for Computing Charge Transfer Rates within the Projector Augmented Wave Method.

    Science.gov (United States)

    Melander, Marko; Jónsson, Elvar Ö; Mortensen, Jens J; Vegge, Tejs; García Lastra, Juan Maria

    2016-11-08

    Combining constrained density functional theory (cDFT) with Marcus theory is an efficient and promising way to address charge transfer reactions. Here, we present a general and robust implementation of cDFT within the projector augmented wave (PAW) framework. PAW pseudopotentials offer a reliable frozen-core electron description across the whole periodic table, with good transferability, as well as facilitate the extraction of all-electron quantities. The present implementation is applicable to two different wave function representations, atom-centered basis sets (LCAO) and the finite-difference (FD) approximation utilizing real-space grids. LCAO can be used for large systems, molecular dynamics, or quick initialization, while more accurate calculations are achieved with the FD basis. Furthermore, the calculations can be performed with flexible boundary conditions, ranging from isolated molecules to periodic systems in one, two, or three dimensions. As such, this implementation is relevant for a wide variety of applications. We also present how to extract the electronic coupling element and reorganization energy from the resulting diabatic cDFT-PAW wave functions for the parametrization of Marcus theory. Here, the combined method is applied to important test cases where practical implementations of DFT fail due to the self-interaction error, such as the dissociation of the helium dimer cation, and it is compared to other established cDFT codes. Moreover, for charge localization in a diamine cation, where it was recently shown that the commonly used generalized gradient and hybrid functionals of DFT failed to produce the localized state, cDFT produces qualitatively and quantitatively accurate results when benchmarked against self-interaction corrected DFT and high-level CCSD(T) calculations at a fraction of the computational cost.
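
    For orientation, the two quantities extracted from the diabatic cDFT-PAW states, the electronic coupling element and the reorganization energy, enter the non-adiabatic, high-temperature Marcus rate in its textbook form (this is the standard expression, not an equation quoted from the paper):

    \[
    k_{\mathrm{ET}} = \frac{2\pi}{\hbar}\,\lvert H_{AB}\rvert^{2}\,\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]
    \]

    Here \(H_{AB}\) is the electronic coupling between the diabatic states, \(\lambda\) the reorganization energy, and \(\Delta G^{\circ}\) the reaction free energy; cDFT supplies \(H_{AB}\) and \(\lambda\), while the prefactor and exponent follow from Marcus theory.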

  17. An implementation of a tree code on a SIMD, parallel computer

    Science.gov (United States)

    Olson, Kevin M.; Dorband, John E.

    1994-01-01

    We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,636 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) type computers can be used for these simulations. The cost/performance ratio for SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) type parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.
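
    As an illustration of the balanced, sort-and-bisect tree build described above, the following is a minimal serial Python sketch. It is an assumption-laden toy: the Node class and build_tree function are hypothetical names, only the monopole (total mass and center of mass) of each cell is stored, and neither the MasPar SIMD parallelism nor the force evaluation is reproduced.

```python
import numpy as np

class Node:
    """One cell of the balanced tree: stores the monopole (mass, center of mass)."""
    def __init__(self, mass, com, left=None, right=None, particle=None):
        self.mass, self.com = mass, com
        self.left, self.right = left, right
        self.particle = particle  # set only for leaves (local index; illustrative)

def build_tree(pos, mass, axis=0):
    """Recursively bisect the particle list along cycling x, y, z axes.

    Splitting at the median of the sorted coordinate keeps the tree completely
    balanced, mirroring the sorted-list bisection described in the abstract
    (serial version only).
    """
    n = len(mass)
    if n == 1:
        return Node(mass[0], pos[0], particle=0)
    order = np.argsort(pos[:, axis])          # sort along the current axis
    pos, mass = pos[order], mass[order]
    half = n // 2
    nxt = (axis + 1) % 3
    left = build_tree(pos[:half], mass[:half], nxt)
    right = build_tree(pos[half:], mass[half:], nxt)
    m = left.mass + right.mass
    com = (left.mass * left.com + right.mass * right.com) / m   # monopole of the cell
    return Node(m, com, left, right)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((1024, 3))
    root = build_tree(pts, np.ones(1024))
    print(root.mass, root.com)   # total mass and global center of mass
```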

  18. Design And Implementation Of A Parallel Computer For Expert System Applications

    Science.gov (United States)

    Butler, Philip L.; Allen, John D.; Bouldin, Donald W.

    1988-03-01

    A parallel computer for high-speed execution of expert system programs has been designed and implemented at the Oak Ridge National Laboratory. Programs written in the popular OPS5 language for serial machines need not be modified by the programmer, since the compiler on this special-purpose machine automatically employs the parallelism inherent in the language. Tasks are automatically distributed to parallel rule processors which can evaluate OPS5 rules in parallel. Performance improvements of a factor of 10 over serial machines have already been demonstrated. Enhancements are under way to attain a performance improvement of 100 or more over serial machines for artificial intelligence applications requiring the evaluation of thousands of rules each recognize-act cycle. The initial hardware implementation of the parallel architecture consists of a host computer that broadcasts to 64 parallel rule processors over a transmit-only bus. The communication time is kept to a minimum by using direct-memory access and a memory-mapped addressing scheme that permit each of the parallel rule processors to receive the appropriate information simultaneously. A wired-OR completion flag signals the host whenever all of the parallel rule processors have finished their recognition tasks. The host then extracts information from those rule processors whose rules have been satisfied and, based on a global criterion, selects one of these rules. The host then carries out the actions dictated by this rule and broadcasts new information to the rule processors to begin another recognize-act cycle. Statistics detailing the activities of the host and all of the rule processors are collected and displayed in real time. Thus, the performance of the various aspects of the architecture can be readily analyzed. Also, the execution of the expert system program itself can be studied to detect situations that may be altered to permit additional speedup.

  19. Towards high performance computing for molecular structure prediction using IBM Cell Broadband Engine - an implementation perspective

    Science.gov (United States)

    2010-01-01

    Background The RNA structure prediction problem is a computationally complex task, especially with pseudo-knots. The problem is well-studied in existing literature and predominantly uses highly coupled Dynamic Programming (DP) solutions. The problem scale and complexity become enormous to handle as sequence size increases. This makes the case for parallelization. Parallelization can be achieved by way of networked platforms (clusters, grids, etc.) as well as using modern-day multi-core chips. Methods In this paper, we exploit the parallelism capabilities of the IBM Cell Broadband Engine to parallelize an existing Dynamic Programming (DP) algorithm for RNA secondary structure prediction. We design three different implementation strategies that exploit the inherent data, code and/or hybrid parallelism, referred to as C-Par, D-Par and H-Par, and analyze their performances. Our approach attempts to introduce parallelism in critical sections of the algorithm. We ran our experiments on SONY Play Station 3 (PS3), which is based on the IBM Cell chip. Results Our results suggest that introducing parallelism in the DP algorithm allows it to easily handle longer sequences which otherwise would consume a large amount of time in single-core computers. The results further demonstrate the speed-up gain achieved in exploiting the inherent parallelism in the problem and also highlight the advantages of using multi-core platforms towards designing more sophisticated methodologies for handling a fairly long sequence of RNA. Conclusion The speed-up performance reported here is promising, especially when sequence length is long. To the best of our literature survey, the work reported in this paper is probably the first-of-its-kind to utilize the IBM Cell Broadband Engine (a heterogeneous multi-core chip) to implement a DP. The results also encourage using multi-core platforms towards designing more sophisticated methodologies for handling a fairly long sequence of RNA to predict
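
    The specific DP recursion and Cell BE kernels are not reproduced in this record, so as a stand-in for the kind of O(n^3) coupled dynamic program involved, here is a minimal serial Nussinov-style base-pair-maximization sketch in Python. It is illustrative only: it ignores pseudoknots and thermodynamic parameters, and the function name is hypothetical; the coupled dp[i][k] + dp[k+1][j] accesses in the inner loop are the kind of dependencies that make such recursions candidates for parallelization.

```python
def nussinov(seq, min_loop=3):
    """Nussinov-style DP: maximum number of nested base pairs in `seq`.

    dp[i][j] holds the best score for the subsequence i..j; the O(n^3) triple
    loop and the bifurcation term dp[i][k] + dp[k+1][j] are the coupled
    accesses that dominate the computation.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):           # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                    # j left unpaired
            for k in range(i, j - min_loop):       # j paired with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))  # small example sequence
```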

  20. Implementation of a Curriculum-Integrated Computer Game for Introducing Scientific Argumentation

    Science.gov (United States)

    Wallon, Robert C.; Jasti, Chandana; Lauren, Hillary Z. G.; Hug, Barbara

    2017-11-01

    Argumentation has been emphasized in recent US science education reform efforts (NGSS Lead States 2013; NRC 2012), and while existing studies have investigated approaches to introducing and supporting argumentation (e.g., McNeill and Krajcik in Journal of Research in Science Teaching, 45(1), 53-78, 2008; Kang et al. in Science Education, 98(4), 674-704, 2014), few studies have investigated how game-based approaches may be used to introduce argumentation to students. In this paper, we report findings from a design-based study of a teacher's use of a computer game intended to introduce the claim, evidence, reasoning (CER) framework (McNeill and Krajcik 2012) for scientific argumentation. We studied the implementation of the game over two iterations of development in a high school biology teacher's classes. The results of this study include aspects of enactment of the activities and student argument scores. We found the teacher used the game in aspects of explicit instruction of argumentation during both iterations, although the ways in which the game was used differed. Also, students' scores in the second iteration were significantly higher than the first iteration. These findings support the notion that students can learn argumentation through a game, especially when used in conjunction with explicit instruction and support in student materials. These findings also highlight the importance of analyzing classroom implementation in studies of game-based learning.

  1. Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology.

    Science.gov (United States)

    Limkin, E J; Sun, R; Dercle, L; Zacharaki, E I; Robert, C; Reuzé, S; Schernberg, A; Paragios, N; Deutsch, E; Ferté, C

    2017-06-01

    Medical image processing and analysis (also known as Radiomics) is a rapidly growing discipline that maps digital medical images into quantitative data, with the end goal of generating imaging biomarkers as decision support tools for clinical practice. The use of imaging data from routine clinical work-up has tremendous potential in improving cancer care by heightening understanding of tumor biology and aiding in the implementation of precision medicine. As a noninvasive method of assessing the tumor and its microenvironment in their entirety, radiomics allows the evaluation and monitoring of tumor characteristics such as temporal and spatial heterogeneity. One can observe a rapid increase in the number of computational medical imaging publications, milestones that have highlighted the utility of imaging biomarkers in oncology. Nevertheless, the use of radiomics as clinical biomarkers still necessitates amelioration and standardization in order to achieve routine clinical adoption. This Review addresses the critical issues to ensure the proper development of radiomics as a biomarker and facilitate its implementation in clinical practice. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  2. THE CONCEPT OF THE EDUCATIONAL COMPUTER MATHEMATICS SYSTEM AND EXAMPLES OF ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    M. Lvov

    2014-11-01

    Full Text Available The article deals with the educational computer mathematics system developed at Kherson State University, which has resulted in more than 8 software tools created to orders of the Ministry of Education, Science, Youth and Sports of Ukraine. The exact and natural sciences are notable among all disciplines both in secondary schools and universities: they form fundamental scientific knowledge based on precise mathematical models and methods. The educational process for these courses should include not only lectures and seminars, but active forms of studying as well: practical classes, laboratory work, practical training, etc. The enumerated peculiarities determine specific intellectual and architectural properties of the information technologies developed for use in the educational process of these disciplines. In terms of the technologies used to implement their functionality, these tools are in fact educational computer algebra systems. The algebraic programming system APS, developed at the Institute of Cybernetics of the National Academy of Sciences of Ukraine under Academician O.A. Letychevskyi in the 1980s, is therefore especially important for their development.

  3. A Comparison of PETSC Library and HPF Implementations of an Archetypal PDE Computation

    Science.gov (United States)

    Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush

    1997-01-01

    Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation: a nonlinear, structured-grid partial differential equation boundary value problem, using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSC, and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSC library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.

  4. Lessons Learned in Designing and Implementing a Computer-Adaptive Test for English

    Directory of Open Access Journals (Sweden)

    Jack Burston

    2014-09-01

    Full Text Available This paper describes the lessons learned in designing and implementing a computer-adaptive test (CAT) for English. The early identification of students with weak L2 English proficiency is of critical importance in university settings that have compulsory English language course graduation requirements. The most efficient means of diagnosing the L2 English ability of incoming students is by means of a computer-based test since such evaluation can be administered quickly, automatically corrected, and the outcome known as soon as the test is completed. While the option of using a commercial CAT is available to institutions with the ability to pay substantial annual fees, or the means of passing these expenses on to their students, language instructors without these resources can only avail themselves of the advantages of CAT evaluation by creating their own tests. As is demonstrated by the E-CAT project described in this paper, this is a viable alternative even for those lacking any computer programming expertise. However, language teaching experience and testing expertise are critical to such an undertaking, which requires considerable effort and, above all, collaborative teamwork to succeed. A number of practical skills are also required. Firstly, the operation of a CAT authoring programme must be learned. Once this is done, test makers must master the art of creating a question database and assigning difficulty levels to test items. Lastly, if multimedia resources are to be exploited in a CAT, test creators need to be able to locate suitable copyright-free resources and re-edit them as needed.

  5. Developing a computer delivered, theory based intervention for guideline implementation in general practice

    Directory of Open Access Journals (Sweden)

    Ashworth Mark

    2010-11-01

    Full Text Available Abstract Background Non-adherence to clinical guidelines has been identified as a consistent finding in general practice. The purpose of this study was to develop theory-informed, computer-delivered interventions to promote the implementation of guidelines in general practice. Specifically, our aim was to develop computer-delivered prompts to promote guideline adherence for antibiotic prescribing in respiratory tract infections (RTIs), and adherence to recommendations for secondary stroke prevention. Methods A qualitative design was used involving 33 face-to-face interviews with general practitioners (GPs). The prompts used in the interventions were initially developed using aspects of social cognitive theory, drawing on nationally recommended standards for clinical content. The prompts were then presented to GPs during interviews, and iteratively modified and refined based on interview feedback. Inductive thematic analysis was employed to identify responses to the prompts and factors involved in the decision to use them. Results GPs reported being more likely to use the prompts if they were perceived as offering support and choice, but less likely to use them if they were perceived as being a method of enforcement. Attitudes towards using the prompts were also related to anticipated patient outcomes, individual prescriber differences, accessibility and presentation of prompts and acceptability of guidelines. Comments on the prompts were largely positive after modifying them based on participant feedback. Conclusions Acceptability and satisfaction with computer-delivered prompts to follow guidelines may be increased by working with practitioners to ensure that the prompts will be perceived as valuable tools that can support GPs' practice.

  6. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.
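
    For reference, the algorithm being accelerated is conceptually simple; below is a minimal CPU-side Python/NumPy sketch of full-search block matching with an SAD criterion on an integer grid. This is a hedged illustration only: the function name is hypothetical, it is not the authors' CUDA kernel, and it omits sub-pixel interpolation and multi-GPU data splitting.

```python
import numpy as np

def full_search_sad(ref, cur, block=16, radius=8):
    """Full-search block matching with a summed-absolute-difference criterion.

    For every `block` x `block` tile of `cur`, exhaustively test all integer
    displacements within +/- `radius` in `ref` and keep the one with minimal SAD.
    Returns an array of (dy, dx) motion vectors, one per block.
    """
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tile = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_v = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(tile - cand).sum()
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors

# tiny synthetic example: the current frame is the reference circularly shifted by (2, 3)
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(full_search_sad(ref, cur)[1, 1])   # expect the vector (-2, -3)
```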

  7. Examining Behavioral Consultation plus Computer-Based Implementation Planning on Teachers' Intervention Implementation in an Alternative School

    Science.gov (United States)

    Long, Anna C. J.; Sanetti, Lisa M. Hagermoser; Lark, Catherine R.; Connolly, Jennifer J. G.

    2018-01-01

    Students who demonstrate the most challenging behaviors are at risk of school failure and are often placed in alternative schools, in which a primary goal is remediating behavioral and academic concerns to facilitate students' return to their community school. Consistently implemented evidence-based classroom management is necessary toward this…

  8. Socio-Technical Implementation: Socio-technical Systems in the Context of Ubiquitous Computing, Ambient Intelligence, Embodied Virtuality, and the Internet of Things

    NARCIS (Netherlands)

    Nijholt, Antinus; Whitworth, B.; de Moor, A.

    2009-01-01

    In which computer science world do we design and implement our socio-technical systems? About every five or ten years new computer and interaction paradigms are introduced. We had the mainframe computers, the various generations of computers, including the Japanese fifth generation computers, the

  9. Development of tight-binding based GW algorithm and its computational implementation for graphene

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Muhammad Aziz [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore); Naradipa, Muhammad Avicenna, E-mail: muhammad.avicenna11@ui.ac.id; Phan, Wileam Yonatan; Syahroni, Ahmad [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); Rusydi, Andrivo [NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore)

    2016-04-19

    Graphene has been a hot subject of research in the last decade as it holds a promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as the optical properties may change upon several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate the correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because eventually they play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We do this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is to be constructed through a tight-binding based or similar models. This study includes theoretical formulation of the Green’s function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self-energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.
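
    For orientation, the quantities named above are connected by the standard RPA/GW relations (textbook forms, not equations taken from the authors' tight-binding implementation):

    \[
    W = \left(1 - v\,\chi^{0}\right)^{-1} v, \qquad \Sigma(1,2) = \mathrm{i}\,G(1,2)\,W(1^{+},2),
    \]

    where \(\chi^{0}\) is the non-interacting (RPA) polarizability built from \(G\), \(v\) is the bare Coulomb interaction, and \(\Sigma\) is the self-energy assembled from the corresponding Feynman diagram; the single-particle and two-particle spectra then follow from the dressed Green's function.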

  10. An Evaluation of Interactive Computer Training to Teach Instructors to Implement Discrete Trials with Children with Autism

    Science.gov (United States)

    Pollard, Joy S.; Higbee, Thomas S.; Akers, Jessica S.; Brodhead, Matthew T.

    2014-01-01

    Discrete-trial instruction (DTI) is a teaching strategy that is often incorporated into early intensive behavioral interventions for children with autism. Researchers have investigated time- and cost-effective methods to train staff to implement DTI, including self-instruction manuals, video modeling, and interactive computer training (ICT). ICT…

  11. Implementation of Service Learning and Civic Engagement for Computer Information Systems Students through a Course Project at the Hashemite University

    Science.gov (United States)

    Al-Khasawneh, Ahmad; Hammad, Bashar K.

    2013-01-01

    Service learning methodologies provide information systems students with the opportunity to create and implement systems in real-world, public service-oriented social contexts. This paper presents a case study of integrating a service learning project into an undergraduate Computer Information Systems course titled "Information Systems"…

  12. Staff Perspectives on the Use of a Computer-Based Concept for Lifestyle Intervention Implemented in Primary Health Care

    Science.gov (United States)

    Carlfjord, Siw; Johansson, Kjell; Bendtsen, Preben; Nilsen, Per; Andersson, Agneta

    2010-01-01

    Objective: The aim of this study was to evaluate staff experiences of the use of a computer-based concept for lifestyle testing and tailored advice implemented in routine primary health care (PHC). Design: The design of the study was a cross-sectional, retrospective survey. Setting: The study population consisted of staff at nine PHC units in the…

  13. Implementing the Flipped Classroom Methodology to the Subject "Applied Computing" of Two Engineering Degrees at the University of Barcelona

    Science.gov (United States)

    Iborra Urios, Montserrat; Ramírez Rangel, Eliana; Badia Córcoles, Jordi Hug; Bringué Tomàs, Roger; Tejero Salvador, Javier

    2017-01-01

    This work is focused on the implementation, development, documentation, analysis, and assessment of the flipped classroom methodology, by means of the just-in-time teaching strategy, for a pilot group (1 out of 6) in the subject "Applied Computing" of both the Chemical and Materials Engineering Undergraduate Degrees of the University of…

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  15. Implementation of depression screening in antenatal clinics through tablet computers: results of a feasibility study.

    Science.gov (United States)

    Marcano-Belisario, José S; Gupta, Ajay K; O'Donoghue, John; Ramchandani, Paul; Morrison, Cecily; Car, Josip

    2017-05-10

    Mobile devices may facilitate depression screening in the waiting area of antenatal clinics. This can present implementation challenges, of which we focused on survey layout and technology deployment. We assessed the feasibility of using tablet computers to administer a socio-demographic survey, the Whooley questions and the Edinburgh Postnatal Depression Scale (EPDS) to 530 pregnant women attending National Health Service (NHS) antenatal clinics across England. We randomised participants to one of two layout versions of these surveys: (i) a scrolling layout where each survey was presented on a single screen; or (ii) a paging layout where only one question appeared on the screen at any given time. Overall, 85.10% of eligible pregnant women agreed to take part. Of these, 90.95% completed the study procedures. Approximately 23% of participants answered Yes to at least one Whooley question, and approximately 13% of them scored 10 points or more on the EPDS. We observed no association between survey layout and the responses given to the Whooley questions, the median EPDS scores, the number of participants at increased risk of self-harm, and the number of participants asking for technical assistance. However, we observed a difference in the number of participants at each EPDS scoring interval (p = 0.008), which provides an indication of a woman's risk of depression. A scrolling layout resulted in faster completion times (median = 4 min 46 s) than a paging layout (median = 5 min 33 s) (p = 0.024). However, the clinical significance of this difference (47.5 s) is yet to be determined. Tablet computers can be used for depression screening in the waiting area of antenatal clinics. This requires the careful consideration of clinical workflows, and technology-related issues such as connectivity and security. An association between survey layout and EPDS scoring intervals needs to be explored further to determine if it corresponds to a survey layout effect.

  16. Implementations of the CC'01 Human-Computer Interaction Guidelines Using Bloom's Taxonomy

    Science.gov (United States)

    Manaris, Bill; Wainer, Michael; Kirkpatrick, Arthur E.; Stalvey, RoxAnn H.; Shannon, Christine; Leventhal, Laura; Barnes, Julie; Wright, John; Schafer, J. Ben; Sanders, Dean

    2007-01-01

    In today's technology-laden society human-computer interaction (HCI) is an important knowledge area for computer scientists and software engineers. This paper surveys existing approaches to incorporate HCI into computer science (CS) and such related issues as the perceived gap between the interests of the HCI community and the needs of CS…

  17. Computer Games in Pre-School Settings: Didactical Challenges when Commercial Educational Computer Games Are Implemented in Kindergartens

    Science.gov (United States)

    Vangsnes, Vigdis; Gram Okland, Nils Tore; Krumsvik, Rune

    2012-01-01

    This article focuses on the didactical implications when commercial educational computer games are used in Norwegian kindergartens by analysing the dramaturgy and the didactics of one particular game and the game in use in a pedagogical context. Our justification for analysing the game by using dramaturgic theory is that we consider the game to be…

  18. Short-term effects of implemented high intensity shoulder elevation during computer work

    DEFF Research Database (Denmark)

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal

    2009-01-01

    contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. METHODS: 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of the shoulder. RESULTS: The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius...

  19. Multiple implementation of a reactor protection code in PHI2, PASCAL, and IFTRAN on the SIEMENS-330 computer

    International Nuclear Information System (INIS)

    Gmeiner, L.; Lemperle, W.; Voges, U.

    1978-01-01

    In safety-related computer applications, as in the case of the reactor protection system considered here, multi-computer systems are usually necessary for reasons of reliability and availability. The hardware structure of the protection system and the software requirements derived from it are explained. In order to study the effects of diversified programming of the three computers, the protection codes were implemented in the languages IFTRAN, PASCAL, and PHI2. According to the experience gained, diversified programming seems to be a proper means to prevent identical programming errors in all three computers on the one hand and to detect ambiguities of the specification on the other. Throughout the experiment, the errors that occurred were recorded in detail and are currently being evaluated. (orig./WB) [de

  20. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    Science.gov (United States)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology, the CPU code in principle only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors and less dependency on the underlying architecture and future evolution of the GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm in CPU, using OpenACC and two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We find that although OpenACC cannot match the performance of an optimized CUDA implementation (×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with far simpler code and less implementation effort.
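
    For readers unfamiliar with the kernel being ported, below is a minimal serial Python sketch of D8 flow direction plus flow accumulation. It is a simplified illustration of the O'Callaghan and Mark step, not the authors' CUDA or OpenACC code: the function name is hypothetical, cells are resolved in order of decreasing elevation, and depressions and flat areas are ignored.

```python
import numpy as np

# D8 neighbour offsets: N, NE, E, SE, S, SW, W, NW
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def d8_flow_accumulation(dem):
    """Flow accumulation on a DEM using the D8 rule (steepest downslope neighbour).

    Each cell starts with an accumulation of 1 (itself) and passes its total to
    its single receiver. Processing cells from highest to lowest elevation
    guarantees every donor is handled before its receiver.
    """
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=np.int64)
    receiver = {}
    for r in range(rows):
        for c in range(cols):
            best_drop, best_rc = 0.0, None
            for dr, dc in OFFSETS:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    drop = (dem[r, c] - dem[nr, nc]) / np.hypot(dr, dc)  # slope to neighbour
                    if drop > best_drop:
                        best_drop, best_rc = drop, (nr, nc)
            if best_rc is not None:
                receiver[(r, c)] = best_rc
    # visit cells from highest to lowest so upstream totals arrive first
    order = sorted(((dem[r, c], r, c) for r in range(rows) for c in range(cols)), reverse=True)
    for _, r, c in order:
        if (r, c) in receiver:
            nr, nc = receiver[(r, c)]
            acc[nr, nc] += acc[r, c]
    return acc

dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 1.]])
print(d8_flow_accumulation(dem))   # the outlet (bottom-right cell) collects all 9 cells
```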

  1. Implementation of Water Quality Management by Fish School Detection Based on Computer Vision Technology

    OpenAIRE

    Yan Hou

    2015-01-01

    To address the detection of abnormal water quality, this study proposed a biological water-abnormity detection method based on computer vision technology combined with a Support Vector Machine (SVM). First, computer vision is used to acquire the fish school motion features that can reflect the water quality, and these parameters are then preprocessed. Next, the sample set is established and the water quality abnormity monitoring model based on computer vision technology combined with...

  2. Development and implementation of a low cost micro computer system for LANDSAT analysis and geographic data base applications

    Science.gov (United States)

    Faust, N.; Jordon, L.

    1981-01-01

    Since the implementation of the GRID and IMGRID computer programs for multivariate spatial analysis in the early 1970's, geographic data analysis has moved from large computers to minicomputers and now to microcomputers, with a radical reduction in the costs associated with planning analyses. Programs designed to process LANDSAT data to be used as one element in a geographic data base were used once NIMGRID (new IMGRID), a raster-oriented geographic information system, was implemented on the microcomputer. Programs for training field selection, supervised and unsupervised classification, and image enhancement were added. Enhancements to the color graphics capabilities of the microsystem allow display of three channels of LANDSAT data in color infrared format. The basic microcomputer hardware needed to perform NIMGRID and most LANDSAT analyses is listed as well as the software available for LANDSAT processing.

  3. From Archi Torture to Architecture: Undergraduate Students Design and Implement Computers Using the Multimedia Logic Emulator

    Science.gov (United States)

    Stanley, Timothy D.; Wong, Lap Kei; Prigmore, Daniel; Benson, Justin; Fishler, Nathan; Fife, Leslie; Colton, Don

    2007-01-01

    Students learn better when they both hear and do. In computer architecture courses "doing" can be difficult in small schools without hardware laboratories hosted by computer engineering, electrical engineering, or similar departments. Software solutions exist. Our success with George Mills' Multimedia Logic (MML) is the focus of this paper. MML…

  4. Evaluating the Implementation of International Computing Curricular in African Universities: A Design-Reality Gap Approach

    Science.gov (United States)

    Dasuki, Salihu Ibrahim; Ogedebe, Peter; Kanya, Rislana Abdulazeez; Ndume, Hauwa; Makinde, Julius

    2015-01-01

    Efforts are being made by universities in developing countries to ensure that their graduates are not left behind in the competitive global information society; thus they have adopted international computing curricula for their computing degree programs. However, adopting these international curricula seems to be very challenging for developing countries…

  5. Implementation Proposal of Computer-Based Office Automation for Republic of Korea Army Intelligence Corps (ROKAIC).

    Science.gov (United States)

    1987-03-01

    Indexed text consists of fragments of the thesis organization chart and list of figures: Planning & Support, Research & Administration, Development; Figure 4.1 ROKAIC Organizational Chart; Figure 4.2 The organization chart of computer center (connects with 15 CRT terminals); Figure 4.3 The configuration of computer equipments; Figure 4.4 Work...network.

  6. Successful Implementation of a Computer-Supported Collaborative Learning System in Teaching E-Commerce

    Science.gov (United States)

    Ngai, E. W. T.; Lam, S. S.; Poon, J. K. L.

    2013-01-01

    This paper describes the successful application of a computer-supported collaborative learning system in teaching e-commerce. The authors created a teaching and learning environment for 39 local secondary schools to introduce e-commerce using a computer-supported collaborative learning system. This system is designed to equip students with…

  7. Incremental cost of department-wide implementation of a picture archiving and communication system and computed radiography.

    Science.gov (United States)

    Pratt, H M; Langlotz, C P; Feingold, E R; Schwartz, J S; Kundel, H L

    1998-01-01

    To determine the incremental cash flows associated with department-wide implementation of a picture archiving and communication system (PACS) and computed radiography (CR) at a large academic medical center. The authors determined all capital and operational costs associated with PACS implementation during an 8-year time horizon. Economic effects were identified, adjusted for time value, and used to calculate net present values (NPVs) for each section of the department of radiology and for the department as a whole. The chest-bone section used the most resources. Changes in cost assumptions for the chest-bone section had a dominant effect on the department-wide NPV. The base-case NPV (i.e., that determined by using the initial assumptions) was negative, indicating that additional net costs are incurred by the radiology department from PACS implementation. PACS and CR provide cost savings only when a 12-year hardware life span is assumed, when CR equipment is removed from the analysis, or when digitized long-term archives are compressed at a rate of 10:1. Full PACS-CR implementation would not provide cost savings for a large, subspecialized department. However, institutions that are committed to CR implementation (for whom CR implementation would represent a sunk cost) or institutions that are able to archive images by using image compression will experience cost savings from PACS.
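
    The economic core of such an analysis is an ordinary discounted cash-flow calculation. The snippet below is a generic net-present-value sketch with hypothetical figures and a hypothetical discount rate; it does not reproduce the study's actual cash flows or its eight-year cost model.

```python
def npv(cash_flows, rate):
    """Net present value of a series of yearly cash flows.

    cash_flows[0] is the year-0 flow (typically the negative capital outlay);
    later entries are yearly net savings (positive) or net costs (negative).
    """
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# hypothetical example: 2.0 M capital cost up front, then 0.25 M net yearly savings over 8 years
flows = [-2.0] + [0.25] * 8
print(round(npv(flows, 0.07), 3))   # negative NPV => net cost, as in the study's base case
```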

  8. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
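
    The decomposition described in the claim maps onto the familiar transpose-based multidimensional FFT: perform 1-D FFTs along the locally contiguous dimension, redistribute so the next dimension becomes local, and repeat. The single-process NumPy sketch below shows only that structure; the local transpose stands in for the network all-to-all, and none of the random-ordering or torus-network details of the invention are modeled.

```python
import numpy as np

def fft2_by_rounds(a):
    """2-D FFT computed as two rounds of 1-D FFTs separated by a 'redistribution'.

    Round 1: 1-D FFTs along the rows (the dimension assumed local to a node).
    Transpose: stands in for the all-to-all exchange that makes the other
    dimension local. Round 2: 1-D FFTs along the new rows, then transpose back.
    """
    stage1 = np.fft.fft(a, axis=1)            # first one-dimensional FFT
    exchanged = stage1.T.copy()               # "all-to-all" redistribution (a local transpose here)
    stage2 = np.fft.fft(exchanged, axis=1)    # second one-dimensional FFT
    return stage2.T

x = np.random.default_rng(2).random((8, 8))
assert np.allclose(fft2_by_rounds(x), np.fft.fft2(x))   # matches the library 2-D FFT
print("transpose-based 2-D FFT matches np.fft.fft2")
```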

  9. The Needs of Virtual Machines Implementation in Private Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Edy Kristianto

    2015-12-01

    Full Text Available The Internet of Things (IoT) has become a focus of the development of information and communication technology. Cloud computing has a very important role in supporting the IoT, because cloud computing allows services to be provided in the form of infrastructure (IaaS), platform (PaaS), and software (SaaS) to its users. One of the fundamental services is infrastructure as a service (IaaS). This study analyzed the requirements, based on the NIST framework, that must be met to realize infrastructure as a service in the form of virtual machines to be built in a cloud computing environment.

  10. Tensor Arithmetic, Geometric and Mathematic Principles of Fluid Mechanics in Implementation of Direct Computational Experiments

    Directory of Open Access Journals (Sweden)

    Bogdanov Alexander

    2016-01-01

    Full Text Available The architecture of a digital computing system determines the technical foundation of a unified mathematical language for exact arithmetic-logical description of phenomena and laws of continuum mechanics for applications in fluid mechanics and theoretical physics. The deep parallelization of the computing processes results in functional programming at a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for the true reproduction of the fundamental laws of physics and continuum mechanics.

  11. The Pitzer-Lee-Kesler-Teja (PLKT) Strategy and Its Implementation by Meta-Computing Software

    Czech Academy of Sciences Publication Activity Database

    Smith, W. R.; Lísal, Martin; Missen, R. W.

    2001-01-01

    Roč. 35, č. 1 (2001), s. 68-73 ISSN 0009-2479 Institutional research plan: CEZ:AV0Z4072921 Keywords : The Pitzer-Lee-Kesler-Teja (PLKT) strategy * implementation Subject RIV: CF - Physical ; Theoretical Chemistry

  12. Implementation of computer-based patient records in primary care: the societal health economic effects.

    OpenAIRE

    Arias-Vimárlund, V.; Ljunggren, M.; Timpka, T.

    1996-01-01

    OBJECTIVE: Exploration of the societal health economic effects occurring during the first year after implementation of Computerised Patient Records (CPRs) at Primary Health Care (PHC) centres. DESIGN: Comparative case studies of practice processes and their consequences one year after CPR implementation, using the constant comparison method. Application of transaction-cost analyses at a societal level on the results. SETTING: Two urban PHC centres under a managed care contract in Ostergötland...

  13. TOWARDS IMPLEMENTATION OF THE FOG COMPUTING CONCEPT INTO THE GEOSPATIAL DATA INFRASTRUCTURES

    OpenAIRE

    E. A. Panidi

    2016-01-01

    The information technologies, and Global Network technologies in particular, are developing very quickly. Accordingly, the problem of incorporating general-purpose technologies into information systems that operate with geospatial data remains relevant. The paper discusses the implementation feasibility of a number of new approaches and concepts that address the problems of publishing and managing spatial data on the Global Network. A brief review describes...

  14. Computer simulation of processes and work implementation zones at Ukryttya object

    International Nuclear Information System (INIS)

    Klyuchnikov, A.A.; Rud'ko, V.M.; Batij, V.G.; Pavlovskij, L.I.; Podbereznyj, S.S.

    2004-01-01

    The need for wide application of computer graphics during the conversion of the Ukryttya object into an ecologically safe system is substantiated, and some examples are given of its use during the design of the project for stabilization of the Ukryttya object building structures.

  15. Two-Language, Two-Paradigm Introductory Computing Curriculum Model and its Implementation

    OpenAIRE

    Zanev, Vladimir; Radenski, Atanas

    2011-01-01

    This paper analyzes difficulties with the introduction of object-oriented concepts in introductory computing education and then proposes a two-language, two-paradigm curriculum model that alleviates such difficulties. Our two-language, two-paradigm curriculum model begins with teaching imperative programming using Python programming language, continues with teaching object-oriented computing using Java, and concludes with teaching object-oriented data structures with Java.

  16. Implementation proposal of computer-based office automation for Republic of Korea Army Intelligence Corps. (ROKAIC)

    OpenAIRE

    Joo, Dae Joon

    1987-01-01

    Approved for public release; distribution is unlimited. The availability of computer technology and its continually declining costs have led to its application in the office environment. The use of computers and microelectronics in the office for the support of secretarial and managerial staff has been given a number of titles, the most common term being "Office Automation" (OA). OA is a working environment that brings together a useful combination of flexible and conveniently...

  17. On the implementation of the Ford | Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Full Text Available Algorithms for optimization in networks and directed graphs find broad application in solving practical tasks. However, along with the large-scale introduction of information technologies into human activity, the requirements on input data volumes and solution retrieval rates keep growing. In spite of the fact that a large number of algorithms for various models of computers and computing systems have by now been studied and implemented, the solution of key optimization problems at realistic task dimensions remains difficult. In this regard, the search for new and more efficient computing structures, as well as updates of known algorithms, are of great current interest. The work considers an implementation of an algorithm for finding the maximum flow on a directed graph for the Multiple Instruction, Single Data (MISD) computer system developed at BMSTU. A key feature of this architecture is deep hardware support for operations over sets and data structures. Functions of storage and access to them are realized on the specialized processor of structures processing (SP), which is capable of performing at the hardware level such operations as add, delete, search, intersect, complete, merge, and others. An advantage of such a system is the possibility of parallel execution of the parts of computing tasks that access sets and data structures simultaneously with the arithmetic and logical processing of information. The previous works present the general principles of organizing the computing process and features of programs implemented in the MISD system, describe the structure and principles of functioning of the processor of structures processing, show the general principles of solving graph tasks in such a system, and experimentally study the efficiency of the resulting algorithms. The work gives the command formats of the SP processor, offers a technique to update the algorithms realized in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
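
    For readers unfamiliar with the underlying algorithm, the following is a compact, conventional CPU-only Ford-Fulkerson sketch in Python using breadth-first search for augmenting paths (the Edmonds-Karp variant). It illustrates the repeated set and graph traversals that the structures processor is meant to accelerate; it is not the authors' MISD implementation, and the function and variable names are illustrative.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).

    `capacity` is a dict-of-dicts: capacity[u][v] = remaining capacity of edge u->v.
    The dict is modified in place as residual capacities are updated.
    Returns the value of the maximum flow from `source` to `sink`.
    """
    # make sure every edge has a reverse residual entry
    for u in list(capacity):
        for v in list(capacity[u]):
            capacity.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:            # BFS for a shortest augmenting path
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                                 # no augmenting path left
        # find the bottleneck along the path, then push that much flow
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v])
            v = u
        v = sink
        while parent[v] is not None:
            u = parent[v]
            capacity[u][v] -= bottleneck
            capacity[v][u] += bottleneck
            v = u
        flow += bottleneck

graph = {"s": {"a": 10, "b": 5}, "a": {"b": 15, "t": 10}, "b": {"t": 10}}
print(max_flow(graph, "s", "t"))   # 15
```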

  18. The Implementation of Computer Platform for Foundries Cooperating in a Supply Chain

    Directory of Open Access Journals (Sweden)

    Wilk-Kołodziejczyk D.

    2014-08-01

    Full Text Available This article presents a practical solution in the form of implementation of agent-based platform for the management of contracts in a network of foundries. The described implementation is a continuation of earlier scientific work in the field of design and theoretical system specification for cooperating companies [1]. The implementation addresses key design assumptions - the system is implemented using multi-agent technology, which offers the possibility of decentralisation and distributed processing of specified contracts and tenders. The implemented system enables the joint management of orders for a network of small and medium-sized metallurgical plants, while providing them with greater competitiveness and the ability to carry out large procurements. The article presents the functional aspects of the system - the user interface and the principle of operation of individual agents that represent businesses seeking potential suppliers or recipients of services and products. Additionally, the system is equipped with a bi-directional agent translating standards based on ontologies, which aims to automate the decision-making process during tender specifications as a response to the request.

  19. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    Science.gov (United States)

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculations components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in speedup of several folds. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential, and electric field distributions are calculated for the bovine mitochondrial supercomplex illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.

  20. The Implementation of Blended Learning Using Android-Based Tutorial Video in Computer Programming Course II

    Science.gov (United States)

    Huda, C.; Hudha, M. N.; Ain, N.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

    The computer programming course is largely theoretical. Sufficient practice is necessary to facilitate conceptual understanding and to encourage creativity in designing computer programs and animations. The development of a tutorial video for Android-based blended learning is needed to guide students. Using Android-based instructional material, students can learn independently anywhere and anytime. The tutorial video can facilitate students' understanding of the concepts, materials, and procedures of program and animation making in detail. This study employed a Research and Development method adapting Thiagarajan's 4D model. The developed Android-based instructional material and tutorial video were validated by experts in instructional media and experts in physics education. The expert validation results showed that the Android-based material was comprehensive and very feasible. The tutorial video was deemed feasible as it received an average score of 92.9%. It was also revealed that students' conceptual understanding, skills, and creativity in designing computer programs and animations improved significantly.

  1. The design, marketing, and implementation of online continuing education about computers and nursing informatics.

    Science.gov (United States)

    Sweeney, Nancy M; Saarmann, Lembi; Seidman, Robert; Flagg, Joan

    2006-01-01

    Asynchronous online tutorials using PowerPoint slides with accompanying audio to teach practicing nurses about computers and nursing informatics were designed for this project, which awarded free continuing education units to completers. Participants had control over the advancement of slides, with the ability to repeat when desired. Graphics were kept to a minimum; thus, the program ran smoothly on computers using dial-up modems. The tutorials were marketed in live meetings and through e-mail messages on nursing listservs. Findings include that the enrollment process must be automated and instantaneous, the program must work from every type of computer and Internet connection, marketing should be live and electronic, and workshops should be offered to familiarize nurses with the online learning system.

  2. The Geospatial Data Cloud: An Implementation of Applying Cloud Computing in Geosciences

    Directory of Open Access Journals (Sweden)

    Xuezhi Wang

    2014-11-01

    Full Text Available The rapid growth in the volume of remote sensing data and its increasing computational requirements bring huge challenges for researchers, as traditional systems cannot adequately satisfy the demand for service. Cloud computing offers high scalability and reliability, which can provide firm technical support for such platforms. This paper proposes a highly scalable geospatial cloud platform named the Geospatial Data Cloud, which is constructed based on cloud computing. The architecture of the platform is first introduced, and then two subsystems, the cloud-based data management platform and the cloud-based data processing platform, are described.  ––– This paper was presented at the First Scientific Data Conference on Scientific Research, Big Data, and Data Science, organized by CODATA-China and held in Beijing on 24-25 February, 2014.

  3. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  4. The European Patent Office and its handling of Computer Implemented Inventions

    CERN Multimedia

    CERN. Geneva; Weber, Georg

    2014-01-01

    Georg Weber joined the EPO in 1988 and has been a director for more than 10 years. He started his career in the office as a patent examiner and worked in different technical areas of chemistry and mechanics. Birger Koblitz is a patent examiner at the EPO in Munich in the technical field of computer security. Before joining the office in 2009, he earned a PhD in Experimental Particle Physics from the University of Hamburg and worked at CERN in the IT department, supporting the experiments in their Grid Computing activities...

  5. Implementation of active electrodes on a brain-computer interface and its application as P300 speller

    International Nuclear Information System (INIS)

    Aguero Rojas, Eliecer

    2013-01-01

    A brain-computer interface was implemented using open hardware called ModularEEG, created by the OpenEEG Project and distributed by the company Olimex Ltd. The hardware was modified to use active electrodes, instead of passive electrodes, for acquiring electroencephalographic signals. The application given to the interface was a P300 speller, for which the open BCI2000 software, which provides the necessary configuration, was used. The P300 speller followed the same protocol in each session so that the method could be standardized across users. Evaluating the results with three neuropsychological tests was among the objectives; however, this was not achieved because of the limited project time. A brain-computer interface with passive electrodes, implemented in the same way as the active-electrode BCI, was also used and performed better than the active-electrode interface. One of the major advantages observed for passive electrodes over active ones was their size: the passive electrodes are smaller and therefore easier to place while avoiding the user's hair, which otherwise increases the noise in the signal. (author)

  6. Implementing a low-latency parallel graphic equalizer with heterogeneous computing

    NARCIS (Netherlands)

    Norilo, Vesa; Verstraelen, Martinus Johannes Wilhelmina; Valimaki, Vesa; Svensson, Peter; Kristiansen, Ulf

    2015-01-01

    This paper describes the implementation of a recently introduced parallel graphic equalizer (PGE) in a heterogeneous way. The control and audio signal processing parts of the PGE are distributed to a PC and to a signal processor, of WaveCore architecture, respectively. This arrangement is

  7. Implementation of Constrained DFT for Computing Charge Transfer Rates within the Projector Augmented Wave Method

    DEFF Research Database (Denmark)

    Melander, Marko; Jónsson, Elvar Örn; Mortensen, Jens Jørgen

    2016-01-01

    ... molecules to periodic systems in one, two, or three dimensions. As such, this implementation is relevant for a wide variety of applications. We also present how to extract the electronic coupling element and reorganization energy from the resulting diabatic cDFT-PAW wave functions for the parametrization...

  8. Design Considerations for Implementing a Shipboard Computer Supported Command Management System

    Science.gov (United States)

    1976-06-01

    considerations that must also be taken into account when selecting a system. Reference 1 provides a comprehensive checklist for utilization in system... In "Implementing a Data Processing System On a Minicomputer," Master's Thesis, Wharton School of Finance and Commerce, 1974. 16. Sperry Univac, Use of

  9. Toward Implementing Computer-Assisted Foreign Language Assessment in the Official Spanish University Entrance Examination

    Science.gov (United States)

    Sanz, Ana Gimeno; Pavón, Ana Sevilla

    2015-01-01

    In 2008 the Spanish Government announced the inclusion of an oral section in the foreign language exam of the National University Entrance Examination during the year 2012 (Royal Decree 1892/2008, of 14 November 2008, Ministerio de Educación, Gobierno de España, 2008). Still awaiting the implementation of these changes, and in an attempt to offer…

  10. Design, Implementation, and Characterization of a Dedicated Breast Computed Mammotomography System for Enhanced Lesion Imaging

    National Research Council Canada - National Science Library

    McKinley, Randolph L

    2006-01-01

    .... Half cone-beam orbits have been implemented and investigated and have indicated they are feasible for a wide range of breast sizes. Future studies will focus on characterizing the system in terms of dose efficiency, contrast sensitivity, and evaluation for a range of breast sizes and compositions. Patient bed optimization will also be investigated.

  11. Computational efficiency of numerical approximations of tangent moduli for finite element implementation of a fiber-reinforced hyperelastic material model.

    Science.gov (United States)

    Liu, Haofei; Sun, Wei

    2016-01-01

    In this study, we evaluated computational efficiency of finite element (FE) simulations when a numerical approximation method was used to obtain the tangent moduli. A fiber-reinforced hyperelastic material model for nearly incompressible soft tissues was implemented for 3D solid elements using both the approximation method and the closed-form analytical method, and validated by comparing the components of the tangent modulus tensor (also referred to as the material Jacobian) between the two methods. The computational efficiency of the approximation method was evaluated with different perturbation parameters and approximation schemes, and quantified by the number of iteration steps and CPU time required to complete these simulations. From the simulation results, it can be seen that the overall accuracy of the approximation method is improved by adopting the central difference approximation scheme compared to the forward Euler approximation scheme. For small-scale simulations with about 10,000 DOFs, the approximation schemes could reduce the CPU time substantially compared to the closed-form solution, due to the fact that fewer calculation steps are needed at each integration point. However, for a large-scale simulation with about 300,000 DOFs, the advantages of the approximation schemes diminish because the factorization of the stiffness matrix will dominate the solution time. Overall, as it is material model independent, the approximation method simplifies the FE implementation of a complex constitutive model with comparable accuracy and computational efficiency to the closed-form solution, which makes it attractive in FE simulations with complex material models.
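
    As a purely illustrative sketch of the approximation idea (using a toy componentwise constitutive law, not the paper's fiber-reinforced hyperelastic model), the snippet below builds the material Jacobian by forward and central finite differences of a stress function:

```python
import numpy as np

def stress(strain):
    """Toy nonlinear constitutive law (placeholder for an expensive model)."""
    return strain + 0.1 * strain ** 3

def tangent_forward(strain, h=1e-6):
    """Forward-difference approximation of C_ij = d(stress_i)/d(strain_j)."""
    n = strain.size
    C = np.empty((n, n))
    s0 = stress(strain)
    for j in range(n):
        sp = strain.copy(); sp[j] += h
        C[:, j] = (stress(sp) - s0) / h
    return C

def tangent_central(strain, h=1e-6):
    """Central-difference approximation: one extra stress evaluation per
    component, but second-order accurate in the perturbation h."""
    n = strain.size
    C = np.empty((n, n))
    for j in range(n):
        sp = strain.copy(); sp[j] += h
        sm = strain.copy(); sm[j] -= h
        C[:, j] = (stress(sp) - stress(sm)) / (2 * h)
    return C

eps = np.array([0.02, -0.01, 0.03])
print(np.max(np.abs(tangent_forward(eps) - tangent_central(eps))))
```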

  12. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
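
    The following is a minimal, self-contained illustration of the general idea (a cheap surrogate plus bootstrap confidence intervals for a sensitivity measure); it uses a linear-regression surrogate and standardized regression coefficients on a made-up three-input model, and is not the nonparametric meta-modeling procedure developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(X):
    """Stand-in for a computationally demanding simulator."""
    return 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.1, len(X))

# a small design: the only runs of the expensive model we can afford
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = expensive_model(X)

def sensitivity(X, y):
    """Standardized regression coefficients from a linear surrogate fit."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(Xs)), Xs])
    beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return beta[1:]

# bootstrap confidence intervals that reflect the surrogate-fitting error
boot = np.array([sensitivity(X[idx], y[idx])
                 for idx in (rng.integers(0, len(X), len(X)) for _ in range(1000))])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("point estimates:", np.round(sensitivity(X, y), 3))
print("95% bootstrap CI lower:", np.round(ci_lo, 3), "upper:", np.round(ci_hi, 3))
```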

  13. A Framework and Implementation of User Interface and Human-Computer Interaction Instruction

    Science.gov (United States)

    Peslak, Alan

    2005-01-01

    Researchers have suggested that up to 50 % of the effort in development of information systems is devoted to user interface development (Douglas, Tremaine, Leventhal, Wills, & Manaris, 2002; Myers & Rosson, 1992). Yet little study has been performed on the inclusion of important interface and human-computer interaction topics into a current…

  14. Implementation of computer codes for performance assessment of the Republic repository of LLW/ILW Mochovce

    International Nuclear Information System (INIS)

    Hanusik, V.; Kopcani, I.; Gedeon, M.

    2000-01-01

    This paper describes selection and adaptation of computer codes required to assess the effects of radionuclide release from Mochovce Radioactive Waste Disposal Facility. The paper also demonstrates how these codes can be integrated into performance assessment methodology. The considered codes include DUST-MS for source term release, MODFLOW for ground-water flow and BS for transport through biosphere and dose assessment. (author)

  15. 76 FR 36986 - Export Controls for High Performance Computers: Wassenaar Arrangement Agreement Implementation...

    Science.gov (United States)

    2011-06-24

    ... and Technologies (Wassenaar List) maintained and agreed to by governments participating in the... Control Classification Number (ECCN) 4A003. These changes agreed to at the Plenary pertain to raising the Adjusted Peak Performance (APP) for digital computers in ECCN 4A003. In accordance with the National...

  16. Design and implementation of an integrated computer working environment for doing mathematics and science

    NARCIS (Netherlands)

    Heck, A.; Kedzierska, E.; Ellermeijer, T.

    2009-01-01

    In this paper we report on the sustained research and development work at the AMSTEL Institute of the University of Amsterdam to improve mathematics and science education at primary and secondary school level, which has led, amongst other things, to the development of the integrated computer working

  17. Research and realization implementation of monitor technology on illegal external link of classified computer

    Science.gov (United States)

    Zhang, Hong

    2017-06-01

    In recent years, with the continuous development and application of network technology, network security has gradually come into public view. Illegal external connections from hosts on an internal network are an important source of network security threats. At present, most organizations pay a certain degree of attention to network security and have adopted many measures to prevent security problems, such as physically isolating the internal network and installing firewalls at the network exit. However, these measures are often undermined by user behavior that does not comply with the security rules. For example, a host connected to the internal network may also access the Internet over a wireless link or a second network card, inadvertently forming a two-way connection between the external network and the computer [1]. As a result, important and confidential documents can leak even without the user being aware of it at all. Monitoring technology for illegal external connections of classified computers can largely prevent such violations by monitoring the behavior of the offending connection. In this paper, we mainly research and discuss this monitoring technology for classified computers.

  18. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    Science.gov (United States)

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  19. Design and implementation of the modified signed digit multiplication routine on a ternary optical computer.

    Science.gov (United States)

    Xu, Qun; Wang, Xianchao; Xu, Chao

    2017-06-01

    Multiplication on traditional electronic computers suffers from limited calculation accuracy and long computation delays. To overcome these problems, the modified signed digit (MSD) multiplication routine is established based on the MSD number system and a carry-free adder. Its parallel algorithm and optimization techniques are also studied in detail. Exploiting the characteristics of a ternary optical computer, a structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results fully match expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and shorter calculation delays.
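
    For readers unfamiliar with signed-digit arithmetic, here is a small, purely software sketch of a binary signed-digit (MSD-like) representation with digits in {-1, 0, 1} and of multiplication by signed, shifted partial products; it uses the non-adjacent form for concreteness and makes no attempt to model the carry-free optical adder or the ternary optical processor of the paper.

```python
def to_signed_digits(n):
    """Non-adjacent form: a binary signed-digit representation with digits in
    {-1, 0, 1}, least significant digit first (one of many MSD encodings)."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)      # choose +1 or -1 so the next digit becomes 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def value(digits):
    """Recover the integer value of a signed-digit string."""
    return sum(d << i for i, d in enumerate(digits))

def signed_digit_multiply(a, b):
    """Multiply by accumulating signed, shifted partial products of b."""
    return sum((d * b) << i for i, d in enumerate(to_signed_digits(a)) if d)

digits = to_signed_digits(7)
print(digits, value(digits))          # [-1, 0, 0, 1] 7
print(signed_digit_multiply(7, 13))   # 91
```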

  20. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Phillips, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wampler, Cheryl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meisner, Robert [National Nuclear Security Administration (NNSA), Washington, DC (United States)

    2010-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality, and scientific details); to quantify critical margins and uncertainties; and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  1. Implementation of process surety improvements at the ICPP through the Process Monitoring Computer System

    International Nuclear Information System (INIS)

    Dahl, C.A.

    1989-01-01

    The Process Monitoring Computer System (PMCS) at the Idaho Chemical Processing Plant (ICPP) is a system of data acquisition devices which acquire and transmit process data to a computer system for processing and storage. These signals are in the form of analog (continuous) and digital (discrete) data from the existing process instrumentation and specialty sensors installed on the plant equipment. This system, initially an experiment in the remote Safeguards analysis of an operating facility, was a retrofit installation to the plant which was constructed in the 1950's. The PMCS monitors the ICPP Fuel Process Operations which consist of various headend dissolutions, three solvent extraction cycles, and a fluidized bed denitration process. While the interactive analysis of the process data is an important and demonstrably useful feature of the system, several important operating concerns are addressed through the use of advisory programs which act on the process data to provide information to the process operators. These programs have all been designed to increase the operational surety of the ICPP and to take full advantage of the power of a modern digital computer system for the placement of maximum process information in the hands of the process operator. The use of process computer technology at the ICPP has shown that when such information becomes routinely available, it is possible to construct meaningful, useful systems on the computers to alleviate operating concerns such as inadvertent transfers, offer valid process operating advice, and aid in attempts to eliminate unneeded process shutdowns due to lack of feedstocks and misinterpretation of the process data

  2. Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-09-11

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC’s business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive

  3. Improving the accessibility at home: implementation of a domotic application using a p300-based brain computer interface system

    Directory of Open Access Journals (Sweden)

    Rebeca Corralejo Palacios

    2012-05-01

    Full Text Available The aim of this study was to develop a Brain Computer Interface (BCI) application to control domotic devices usually present at home. Previous studies have shown that people with severe disabilities, both physical and cognitive, do not achieve high accuracy results using motor imagery-based BCIs. To overcome this limitation, we propose the implementation of a BCI application using P300 evoked potentials, because neither extensive training nor an extremely high concentration level is required for this kind of BCI. The implemented BCI application allows the user to control several devices such as a TV, DVD player, mini Hi-Fi system, multimedia hard drive, telephone, heater, fan and lights. Our aim is that potential users, i.e. people with severe disabilities, are able to achieve high accuracy. Therefore, this domotic BCI application is useful to increase

  4. Design and Implementation of 32-Bit Controller For Interfacing with reconfigurable computing systems.

    OpenAIRE

    Ashutosh Gupta; Kota Solomon Raju

    2009-01-01

    Partial reconfiguration allows time-sharing of physical resources for the execution of multiple functional modules by swapping them in or out at run-time without incurring any system downtime. This results in a dramatic increase in the speed and functionality of FPGA-based systems. This paper presents the design of an interface controller, accessed through a UART, for the execution and implementation of reconfigurable modules (RM) on Xilinx Virtex-4 (XC4VFX12), (XC4VFX20) and (XC4VFX60) devices. To verify partial reconfigura...

  5. Research and implementation of PC data synchronous backup based on cloud computing

    Directory of Open Access Journals (Sweden)

    JIANG Lan

    2013-02-01

    Full Text Available A kind of anti-saturated digital PI regulator is designed and implemented on a DSP. This PI regulator was applied to the design of a voltage and current double-loop control system for a buck converter, and related experiments were carried out on a 5.5 kW prototype. The experimental results show that the converter has good static and dynamic performance and verify the validity of the PI regulator design.
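
    As a minimal software sketch of the kind of regulator the abstract describes (a discrete PI loop with anti-windup by conditional integration; the gains, limits and sample time below are invented, and nothing here models the authors' DSP or converter):

```python
def make_pi(kp, ki, dt, out_min, out_max):
    """Discrete PI regulator with conditional-integration anti-windup:
    the integrator is frozen whenever the output would saturate."""
    state = {"integral": 0.0}

    def step(setpoint, measurement):
        error = setpoint - measurement
        unsat = kp * error + ki * (state["integral"] + error * dt)
        out = min(max(unsat, out_min), out_max)
        if unsat == out:                  # only integrate while unsaturated
            state["integral"] += error * dt
        return out

    return step

# hypothetical duty-cycle controller for a voltage loop
pi = make_pi(kp=0.8, ki=50.0, dt=1e-4, out_min=0.0, out_max=1.0)
print(pi(setpoint=12.0, measurement=11.5))
```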

  6. A feasibility study of implementing a bring-your-own-computing-device policy

    OpenAIRE

    Carideo, Jeffrey W.; Walker, Timothy P.; Williams, Jason C.

    2013-01-01

    Approved for public release; distribution is unlimited. Our team conducted an information technology study on the feasibility of reducing hardware and software procurement expenditures at the Naval Postgraduate School, Graduate School of Business and Public Policy (GSBPP). The objectives were to calculate the total cost of the GSBPP's current expenditures, develop alternative hardware and software procurement plans, and measure these costs against the alternative plan of implementing a ...

  7. IMPLEMENTATION STRATEGY OF FREE SOFTWARE IN THE PROCESS OF PREPARATION OF TEACHERS OF MATHEMATICS, PHYSICS AND COMPUTER SCIENCE

    Directory of Open Access Journals (Sweden)

    Vladyslav Ye. Velychko

    2016-01-01

    Full Text Available Information processes in society encourage a revision of the forms and methods of learning and involve the use of the didactic capabilities of information and communication technologies in teaching. No less important in this context is the problem of training professionals who are able to use the modern possibilities of computer technology. Training highly qualified teachers is only possible using advanced technologies that cover the entire range of existing opportunities. An analysis of the software used in teacher preparation has shown insufficient use of a whole class of software, namely free software, in the educational process. To overcome this problem, an implementation strategy for free software in the preparation of teachers of mathematics, physics and computer science is proposed.

  8. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hendrickson, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  9. Implementation of a pressurized water reactor simulator for teaching on a mini-computer

    International Nuclear Information System (INIS)

    Tallec, Michele.

    1982-06-01

    This paper presents the design of a pressurized water reactor power plant simulator running on a mini-computer. The simulator is oriented towards teaching: it runs simulations in real time, and many parameters can be changed by the student during execution of the digital code. First, a state-variable model of the dynamic behaviour of the plant is derived from the physical laws. The second part presents the problems associated with using a mini-computer to solve a large differential system, notably memory-space availability, execution time and numerical integration. Finally, it contains a description of the control desk layout used to interact with the digital code, and of the conditions that can be changed during an execution.

  10. Object-Oriented Implementation of the Finite-Difference Time-Domain Method in Parallel Computing Environment

    Science.gov (United States)

    Chun, Kyungwon; Kim, Huioon; Hong, Hyunpyo; Chung, Youngjoo

    GMES, which stands for GIST Maxwell's Equations Solver, is a Python package for Finite-Difference Time-Domain (FDTD) simulation. The FDTD method, widely used for electromagnetic simulations, is an algorithm for solving Maxwell's equations. GMES follows the Object-Oriented Programming (OOP) paradigm for good maintainability and usability. With several optimization techniques and a parallel computing environment, we were able to make the implementation fast and interactive. Execution speed has been tested on a single host and on a Beowulf-class cluster. GMES is open source and available on the web (http://www.sf.net/projects/gmes).
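
    To give a flavour of what an FDTD update loop looks like (a bare 1-D Yee scheme in normalized units with a Gaussian soft source; this is an illustration only and not code from the GMES package):

```python
import numpy as np

# minimal 1-D FDTD (Yee) update loop in normalized units
nx, nt = 200, 400
ez = np.zeros(nx)          # electric field on integer grid points
hy = np.zeros(nx - 1)      # magnetic field, staggered by half a cell
c = 0.5                    # Courant number (stable for 1-D when <= 1)

for t in range(nt):
    hy += c * np.diff(ez)                            # update H from curl of E
    ez[1:-1] += c * np.diff(hy)                      # update E from curl of H
    ez[nx // 2] += np.exp(-((t - 30) / 10) ** 2)     # soft Gaussian source

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```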

  11. Implementation and display of Computer Aided Design (CAD) models in Monte Carlo radiation transport and shielding applications

    Energy Technology Data Exchange (ETDEWEB)

    Burns, T.J.

    1994-03-01

    An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.

  12. Implementation and display of Computer Aided Design (CAD) models in Monte Carlo radiation transport and shielding applications

    International Nuclear Information System (INIS)

    Burns, T.J.

    1994-01-01

    An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed

  13. Implementation of the EM Algorithm in the Estimation of Item Parameters: The BILOG Computer Program.

    Science.gov (United States)

    Mislevy, Robert J.; Bock, R. Darrell

    This paper reviews the basic elements of the EM approach to estimating item parameters and illustrates its use with one simulated and one real data set. In order to illustrate the use of the BILOG computer program, runs for 1-, 2-, and 3-parameter models are presented for the two sets of data. First is a set of responses from 1,000 persons to five…

  14. On a concept of computer game implementation based on a temporal logic

    Science.gov (United States)

    Szymańska, Emilia; Adamek, Marek J.; Mulawka, Jan J.

    2017-08-01

    Time is a concept that underlies all of contemporary civilization. It was therefore necessary to create mathematical tools that allow complex time dependencies to be described precisely. One such tool is temporal logic. Its definition, description and characteristics are presented in this publication. The authors then discuss the usefulness of this tool in the context of creating storylines in computer games, such as those of the RPG genre.
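
    As a hedged illustration of the kind of property temporal logic can express for a storyline (the propositions questAccepted, questCompleted, hasKey and doorOpen are hypothetical, not taken from the paper), two linear temporal logic formulas might read:

$$\mathbf{G}\left(\mathit{questAccepted} \rightarrow \mathbf{F}\,\mathit{questCompleted}\right), \qquad \neg\,\mathit{doorOpen}\;\mathbf{U}\;\mathit{hasKey}$$

    The first states that whenever a quest is accepted it is eventually completed; the second states that the door cannot be open until the key has been obtained.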

  15. Information Security: Federal Guidance Needed to Address Control Issues With Implementing Cloud Computing

    Science.gov (United States)

    2010-05-01

    ... software as a service. The deployment models relate to how the cloud service is provided. They include a private cloud, operated solely for an organization; a community cloud, shared by several organizations; and a public cloud, available to any paying customer. Cloud computing can both increase and decrease the security of information systems in federal agencies. Potential information security benefits include those related to the use of virtualization, such as faster deployment of patches, and from economies of scale, such

  16. Review of the Experimental Background and Implementation of Computational Models of the Ocular Lens Microcirculation.

    Science.gov (United States)

    Wu, Ho-Ting D; Donaldson, Paul J; Vaghefi, Ehsan

    2016-01-01

    Our sense of vision is critically dependent on the clarity of the crystalline lens. The most common cause of transparency loss in the lens is age-related nuclear cataract, which is due to accumulated oxidative damage to this tissue. Since the ocular lens is an avascular tissue, it has to maintain its physiological homeostasis and antioxidant levels using a system of water microcirculation. This system has been experimentally imaged in animal lenses using different modalities. Based on these data, computational models have been developed to predict the properties of this system in human lenses and its changes due to aging. Although successful in predicting many aspects of lens fluid dynamics, at least in animal models, these in silico models still need further improvement to become more accurate and representative of the human ocular lens. We have been working on gathering experimental data and simultaneously developing computational models of lens microcirculation for the past decade. This review chronologically looks at the development of the data-driven computational foundations of the lens microcirculation model, its current state, and directions for future advancement. A comprehensive model of lens fluid dynamics is essential to understand the physiological optics of this tissue and ultimately the underlying mechanisms of cataract onset and progression.

  17. Implementation of Private Cloud Computing Using Integration of JavaScript and Python

    Directory of Open Access Journals (Sweden)

    2010-09-01

    Full Text Available

    This paper deals with the design and deployment of a novel library class in Python that enables the use of JavaScript functionality in application programming and leverages this library for development with third-generation technologies such as private cloud computing. The integration of these two prevalent languages provides a new level of compatibility, which helps in bridging web programming and application programming. An inter-browser functionality wrapper has been developed that enables users to have a JavaScript experience directly in Python interfaces, without having to depend on external programs. The value of this concept lies in the fact that applications written in JavaScript and accessed in the browser now have the capability of interacting with each other on a common platform with the help of a Python wrapper. The idea is demonstrated by integrating it with the now-ubiquitous cloud computing concept. With the help of examples, we showcase this and explain how the library XOCOM can be a stepping stone to a flexible cloud computing environment.

  18. A Methodology for Decision Support for Implementation of Cloud Computing IT Services

    Directory of Open Access Journals (Sweden)

    Adela Tušanová

    2014-07-01

    Full Text Available The paper deals with the decision of small and medium-sized software companies on the transition to the SaaS model. The goal of the research is to design a comprehensive methodology to support decision making based on actual data from the company itself. Based on a careful analysis, a taxonomy of costs, revenue streams and decision-making criteria is proposed in the paper. On the basis of multi-criteria decision-making methods, each alternative is evaluated and the alternative with the highest score is identified as the most appropriate. The proposed methodology is implemented as a web application and verified through case studies.
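
    A toy weighted-sum scoring of alternatives against criteria, in the spirit of the multi-criteria step described above (the criteria, weights and scores are hypothetical and not taken from the paper):

```python
# weighted-sum multi-criteria scoring: highest total score wins
criteria = ["cost", "time_to_market", "data_control", "scalability"]
weights  = {"cost": 0.4, "time_to_market": 0.2, "data_control": 0.2, "scalability": 0.2}

alternatives = {
    "stay on-premise": {"cost": 2, "time_to_market": 3, "data_control": 5, "scalability": 2},
    "partial SaaS":    {"cost": 4, "time_to_market": 4, "data_control": 3, "scalability": 4},
    "full SaaS":       {"cost": 5, "time_to_market": 5, "data_control": 2, "scalability": 5},
}

scores = {name: sum(weights[c] * vals[c] for c in criteria)
          for name, vals in alternatives.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```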

  19. Research and implementation of PC data synchronous backup based on cloud computing

    OpenAIRE

    WU Yu; CHEN Junhua

    2013-01-01

    In order to better ensure data security and data integrity, and to facilitate remote management, this paper has designed and implemented a system model for PC data synchronous backup from the view of the local database and personal data. It focuses on the data backup and uses SQL Azure (a cloud database management system) and Visual Studio (a development platform tool). Also the system is released and deployed on the Windows Azure Platform with a unique web portal. Experimental tests show that compar...

  20. Smart learning objects for smart education in computer science theory, methodology and robot-based implementation

    CERN Document Server

    Stuikys, Vytautas

    2015-01-01

    This monograph presents the challenges, vision and context for designing smart learning objects (SLOs) through Computer Science (CS) education modelling and feature model transformations. It presents the latest research on meta-programming-based generative learning objects (those with advanced features are treated as SLOs) and on the use of educational robots in teaching CS topics. The introduced methodology includes the overall processes to develop SLOs and a smart educational environment (SEE) and integrates both into a real education setting to provide teaching in CS using constructivist a

  1. Method, systems, and computer program products for implementing function-parallel network firewall

    Science.gov (United States)

    Fulp, Errin W [Winston-Salem, NC; Farley, Ryan J [Winston-Salem, NC

    2011-10-11

    Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
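
    As an illustration of the rule-set partitioning idea only (a toy software sketch, not the patented system; the rules and match fields are invented, and a real function-parallel firewall must also preserve rule ordering and priorities when combining verdicts):

```python
from concurrent.futures import ThreadPoolExecutor

# each "firewall node" holds only part of the overall rule set
NODE_RULES = [
    [{"proto": "tcp", "dport": 22, "action": "drop"}],            # node 1
    [{"proto": "tcp", "dport": 80, "action": "accept"},
     {"proto": "udp", "dport": 53, "action": "accept"}],          # node 2
]

def node_filter(rules, packet):
    """Return the action of the first matching rule on this node, else None."""
    for rule in rules:
        if rule["proto"] == packet["proto"] and rule["dport"] == packet["dport"]:
            return rule["action"]
    return None

def filter_packet(packet, default="drop"):
    # every node inspects the packet concurrently; verdicts are then combined
    with ThreadPoolExecutor(max_workers=len(NODE_RULES)) as pool:
        verdicts = list(pool.map(lambda rules: node_filter(rules, packet), NODE_RULES))
    decided = [v for v in verdicts if v is not None]
    return decided[0] if decided else default

print(filter_packet({"proto": "tcp", "dport": 80}))   # accept
print(filter_packet({"proto": "tcp", "dport": 22}))   # drop
```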

  2. DESIGN AND IMPLEMENTATION OF REGIONAL MEDICAL INFORMATICS SYSTEM WITH USE OF CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Alexey A. Ponomarev

    2013-01-01

    Full Text Available The article deals with the current state of the healthcare information systems market in Russia and with the legislative preconditions for development in this sphere. The task of creating a regional information system is highlighted. Based on an analysis of existing approaches and of foreign experience, a way of realizing the regional segment of the state system through a regional healthcare portal built on cloud computing is proposed. The developed module «Electronic Registry» is discussed as an example of practical realization.

  3. Design and Implementation of 3 Axis CNC Router for Computer Aided Manufacturing Courses

    Directory of Open Access Journals (Sweden)

    Aktan Mehmet Emin

    2016-01-01

    Full Text Available This paper covers the mechanical design of a 3-axis Computer Numerical Control (CNC) router with linear joints, the production of its electronic control interface cards and drivers, and the manufacturing of the complete CNC router system, which is a combination of mechanics and electronics. An interface program has also been prepared to control the router via USB. The router was developed for educational purposes. In some vocational schools and universities, Computer Aided Manufacturing (CAM) courses are taught in a rather theoretical way, which leads to ineffective and short-lived learning. Moreover, students at schools which do have access to such systems can face various dangerous accidents, because they are working with these machines for the first time. For the first steps of CNC education, using smaller and less dangerous systems is easier. A new-concept CNC machine and a user interface suitable and profitable for education have been completely designed and realized during this study. To test the hypothesis that the machine benefits education, a traditional education method enhanced with the designed machine was applied to CAM course students for one semester. At the end of the semester, the students taught with the new method were more successful by 27.36 percent, both in terms of verbal comprehension and exam grades.

  4. Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation

    Science.gov (United States)

    Pope, S. B.

    1997-03-01

    A computational technique is described and demonstrated that can decrease by three orders of magnitude the computer time required to treat detailed chemistry in reactive flow calculations. The method is based on the in situ adaptive tabulation (ISAT) of the accessed region of the composition space - the adaptation being to control the tabulation errors. Test calculations are performed for non-premixed methane - air combustion in a statistically-homogeneous turbulent reactor, using a kinetic mechanism with 16 species and 41 reactions. The results show excellent control of the tabulation errors with respect to a specified error tolerance; and a speed-up factor of about 1000 is obtained compared to the direct approach of numerically integrating the reaction equations. In the context of PDF methods, the ISAT technique makes feasible the use of detailed kinetic mechanisms in calculations of turbulent combustion. The technique can also be used with reduced mechanisms, and in other approaches for calculating reactive flows (e.g. finite difference methods).
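
    A toy illustration of the tabulation idea (store a queried point together with its mapping and Jacobian, then answer nearby queries by linear extrapolation); real ISAT controls the retrieval error with growing ellipsoids of accuracy rather than the fixed trust radius used here, and the "reaction mapping" below is an arbitrary stand-in:

```python
import numpy as np

class ToyISAT:
    """In-situ tabulation sketch: cache (x, f(x), Jacobian) records built during
    the calculation and reuse them nearby via linear extrapolation."""

    def __init__(self, f, jac, radius=0.05):
        self.f, self.jac, self.radius = f, jac, radius
        self.records = []
        self.retrieves = self.adds = 0

    def query(self, x):
        for x0, f0, J0 in self.records:
            if np.linalg.norm(x - x0) < self.radius:
                self.retrieves += 1
                return f0 + J0 @ (x - x0)        # cheap linear retrieve
        self.adds += 1
        f0, J0 = self.f(x), self.jac(x)          # expensive direct evaluation
        self.records.append((x.copy(), f0, J0))
        return f0

# stand-in for an expensive reaction mapping and its Jacobian
f   = lambda x: np.array([np.exp(-x[0]) * x[1], x[0] ** 2])
jac = lambda x: np.array([[-np.exp(-x[0]) * x[1], np.exp(-x[0])],
                          [2.0 * x[0], 0.0]])

table = ToyISAT(f, jac)
rng = np.random.default_rng(1)
for _ in range(2_000):
    table.query(rng.uniform(0.0, 1.0, size=2))
print(table.adds, "direct evaluations,", table.retrieves, "retrieves")
```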

  5. Abstract machine based execution model for computer architecture design and efficient implementation of logic programs in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Hermenegildo, M.V.

    1986-01-01

    The term Logic Programming refers to a variety of computer languages and execution models based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in artificial intelligence, knowledge-based systems, and many other areas of computing. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an Abstract Machine level, suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and, therefore, the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set.

  6. Single-chip correlator implementation for PCI-bus personal computers

    Science.gov (United States)

    O'Callaghan, Michael J.; Perlmutter, Stephen H.; Wolt, Barry

    2000-03-01

    We have previously reported on the design and operation of a novel single-chip optical correlator prototype. Two ferroelectric liquid crystal SLMs and a high-speed APS camera were built into a single CMOS integrated circuit. Diffractive Fourier transform lenses were fabricated onto the surface of a window which was mounted on top of the chip. We are now working towards implementing the correlator as a business card-sized module mounted on a PCI card which can be plugged into the motherboard of industry standard PCs. We are also upgrading the SLMs to have analog optical modulation capability. The PCI card contains input and output image buffers, plus high-speed circuitry which digitizes the four analog output channels of the correlator's camera. This paper describes the system we are developing, some of the electronic and optical engineering issues involved, and the present status of our work.

  7. Research and implementation of PC data synchronous backup based on cloud computing

    Directory of Open Access Journals (Sweden)

    WU Yu

    2013-08-01

    Full Text Available In order to better ensure data security and data integrity, and to facilitate remote management, this paper has designed and implemented a system model for PC data synchronous backup from the view of the local database and personal data. It focuses on the data backup and uses SQL Azure (a cloud database management system) and Visual Studio (a development platform tool). Also the system is released and deployed on the Windows Azure Platform with a unique web portal. Experimental tests show that compared to other data backup methods in a non-cloud environment, the system has certain advantages and research value in mobility, interoperability and data management.

  8. Implementing a strand of a scalable fault-tolerant quantum computing fabric.

    Science.gov (United States)

    Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M

    2014-06-24

    With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.
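
    A small statevector sketch of the parity-measurement idea in plain numpy (no quantum-computing library): two code qubits are prepared in |+>, both are coupled to a syndrome qubit through CNOTs, and measuring the syndrome projects the code qubits onto an even- or odd-parity Bell state. The tensor layout and gate helpers are ad hoc for this example and do not model the superconducting hardware.

```python
import numpy as np

# qubit order in the tensor: (code0, code1, syndrome)
state = np.zeros((2, 2, 2), dtype=complex)
state[0, 0, 0] = 1.0                              # |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_1q(state, gate, axis):
    """Apply a single-qubit gate to the given tensor axis."""
    return np.moveaxis(np.tensordot(gate, state, axes=([1], [axis])), 0, axis)

def apply_cnot(state, control, target):
    """CNOT: flip the target axis on the slice where the control qubit is |1>."""
    new = state.copy()
    idx = [slice(None)] * 3
    idx[control] = 1
    sub_axis = target if target < control else target - 1
    new[tuple(idx)] = np.flip(state[tuple(idx)], axis=sub_axis)
    return new

state = apply_1q(state, H, 0)        # put the code qubits into |+>
state = apply_1q(state, H, 1)
state = apply_cnot(state, 0, 2)      # accumulate the Z0*Z1 parity on the syndrome
state = apply_cnot(state, 1, 2)

for outcome in (0, 1):
    block = state[:, :, outcome]                 # unnormalized post-measurement state
    p = np.sum(np.abs(block) ** 2)
    print(f"syndrome={outcome}  p={p:.2f}  code-qubit amplitudes:")
    print(np.round(block / np.sqrt(p), 3))       # even or odd parity Bell state
```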

  9. Computational procedures for probing interactions in OLS and logistic regression: SPSS and SAS implementations.

    Science.gov (United States)

    Hayes, Andrew F; Matthes, Jörg

    2009-08-01

    Researchers often hypothesize moderated effects, in which the effect of an independent variable on an outcome variable depends on the value of a moderator variable. Such an effect reveals itself statistically as an interaction between the independent and moderator variables in a model of the outcome variable. When an interaction is found, it is important to probe the interaction, for theories and hypotheses often predict not just interaction but a specific pattern of effects of the focal independent variable as a function of the moderator. This article describes the familiar pick-a-point approach and the much less familiar Johnson-Neyman technique for probing interactions in linear models and introduces macros for SPSS and SAS to simplify the computations and facilitate the probing of interactions in ordinary least squares and logistic regression. A script version of the SPSS macro is also available for users who prefer a point-and-click user interface rather than command syntax.
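
    A bare-bones numpy version of the pick-a-point logic (simulated data and ordinary least squares only; this is not the SPSS/SAS macro described in the record and it omits the Johnson-Neyman computation):

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated data with a true X*M interaction (illustrative only)
n = 500
x = rng.normal(size=n)
m = rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.3 * m + 0.4 * x * m + rng.normal(size=n)

# OLS for y = b0 + b1*x + b2*m + b3*x*m
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)

# pick-a-point: conditional effect of X (and its SE) at chosen moderator values
for m0 in np.percentile(m, [16, 50, 84]):
    effect = beta[1] + beta[3] * m0
    se = np.sqrt(cov[1, 1] + 2 * m0 * cov[1, 3] + m0 ** 2 * cov[3, 3])
    print(f"M = {m0:+.2f}: effect of X = {effect:.3f} (SE {se:.3f})")
```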

  10. SHTP-E, a computer implementation of the finite-difference embedding method of ablation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Randall, J D

    1978-05-01

    PL/I procedures have been developed that use finite-difference techniques to analyze ablation problems by embedding them in inverse-heat-conduction problems with no moving boundaries. The procedures form a set of subroutines that can be called from a problem-oriented main program written by the user. The procedures include provisions for one-, two-, or three-dimensional conduction, parallel modes of heat transfer, thermal contact, choices of implicit and explicit difference techniques, temperature-dependent and directional thermal properties, radiation relief, aerodynamic heating, chemical ablation, and material removal from combinations of flat, cylindrical, and spherical surfaces. This report is meant to serve as a source of underlying theory not covered elsewhere and as a user's manual for the PL/I procedures. Also included are useful debugging aids and external identifiers, a directory of Applied Physics Laboratory computer libraries pertaining to the PL/I procedures, and an illustrative problem as an example.
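
    For orientation only, the snippet below shows the kind of explicit finite-difference conduction update such procedures are built from, on a fixed one-dimensional slab with made-up properties; it does not attempt the ablation, moving-boundary, or inverse-problem embedding that SHTP-E actually provides:

```python
import numpy as np

# explicit finite-difference solution of 1-D transient heat conduction
nx, alpha, dx, dt = 51, 1e-5, 1e-3, 2e-2      # nodes, diffusivity, spacing, time step
assert alpha * dt / dx ** 2 <= 0.5            # explicit stability limit

T = np.full(nx, 300.0)                        # initial temperature, K
T_surface = 600.0                             # heated front face

for step in range(2000):
    T[0] = T_surface                          # Dirichlet condition at the heated face
    T[-1] = T[-2]                             # insulated back face
    T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"back-face temperature after {2000 * dt:.0f} s: {T[-1]:.1f} K")
```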

  11. A Soft Computing Approach to Crack Detection and Impact Source Identification with Field-Programmable Gate Array Implementation

    Directory of Open Access Journals (Sweden)

    Arati M. Dixit

    2013-01-01

    Full Text Available Real-time nondestructive testing (NDT) for crack detection and impact source identification (CDISI) has attracted researchers from diverse areas, as is apparent from current work in the literature. CDISI has usually been performed by visual assessment of waveforms generated by a standard data acquisition system. In this paper we suggest automating CDISI for metal armor plates using a soft computing approach, by developing a fuzzy inference system to deal effectively with this problem. It is also advantageous to develop a chip that can contribute towards real-time CDISI. The objective of this paper is to report on efforts to develop an automated CDISI procedure and to formulate the technique such that the proposed method can be easily implemented on a chip. The CDISI fuzzy inference system is developed using MATLAB's Fuzzy Logic Toolbox. A VLSI circuit for CDISI is developed on the basis of the fuzzy logic model using Verilog, a hardware description language (HDL). The Xilinx ISE WebPACK 9.1i is used for design, synthesis, implementation, and verification. The CDISI field-programmable gate array (FPGA) implementation is done using Xilinx's Spartan 3 FPGA. SynaptiCAD's Verilog simulators, VeriLogger PRO and ModelSim, are used as the software simulation and debug environment.
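
    A tiny Mamdani-style fuzzy inference sketch in the same spirit (the membership functions, rule base, and the energy/frequency inputs are hypothetical and are not the authors' CDISI rules or their Verilog/FPGA implementation):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def crack_likelihood(energy, freq):
    """Mamdani-style inference: fuzzify, fire rules with min (AND),
    aggregate clipped output sets with max, defuzzify by centroid."""
    # fuzzify the two inputs, both normalized to a 0..1 scale
    e_low, e_high = tri(energy, -0.1, 0.0, 0.6), tri(energy, 0.4, 1.0, 1.1)
    f_low, f_high = tri(freq,   -0.1, 0.0, 0.6), tri(freq,   0.4, 1.0, 1.1)

    # rule firing strengths
    r_crack  = min(e_high, f_high)     # high energy AND high frequency -> crack
    r_impact = min(e_high, f_low)      # high energy AND low frequency  -> benign impact
    r_none   = e_low                   # low energy -> nothing significant

    # aggregate clipped output sets and take the centroid
    z = np.linspace(0.0, 1.0, 201)
    out = np.maximum.reduce([
        np.minimum(r_none,   tri(z, -0.1, 0.0, 0.4)),
        np.minimum(r_impact, tri(z,  0.2, 0.5, 0.8)),
        np.minimum(r_crack,  tri(z,  0.6, 1.0, 1.1)),
    ])
    return float(np.sum(z * out) / np.sum(out)) if out.sum() else 0.0

print(crack_likelihood(energy=0.9, freq=0.8))   # close to 1 -> likely crack
```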

  12. Implementation of combined SVM-algorithm and computer-aided perception feedback for pulmonary nodule detection

    Science.gov (United States)

    Pietrzyk, Mariusz W.; Rannou, Didier; Brennan, Patrick C.

    2012-02-01

    This pilot study examines the effect of a novel decision support system in medical image interpretation. The system is based on combining image spatial-frequency properties and eye-tracking data in order to recognize over- and under-calling errors. Before it can be implemented as a detection-aid scheme, training is required during which an SVM-based algorithm learns to recognize FPs from all reported outcomes and FNs from all unreported regions with prolonged dwell. Eight radiologists inspected 50 PA chest radiographs with the specific task of identifying lung nodules. Twenty-five cases contained CT-proven subtle malignant lesions (5-20 mm), but prevalence was not known by the subjects, who took part in two sequential reading sessions, without and then with support-system feedback. MCMR ROC DBM and JAFROC analyses were conducted and demonstrated significantly higher scores following feedback, with p values of 0.04 and 0.03 respectively, highlighting significant improvements in radiologist performance once feedback was used. This positive effect on radiologists' performance might have important implications for future CAD-system development.

  13. Design and implementation of a computer based site operations log for the ARM Program

    International Nuclear Information System (INIS)

    Tichler, J.L.; Bernstein, H.J.; Bobrowski, S.F.; Melton, R.B.; Campbell, A.P.; Edwards, D.M.; Kanciruk, P.; Singley, P.T.

    1992-01-01

    The Atmospheric Radiation Measurement (ARM) Program is a Department of Energy (DOE) research effort to reduce the uncertainties found in general circulation and other models due to the effects of clouds and solar radiation. ARM will provide an experimental testbed for the study of important atmospheric effects, particularly cloud and radiative processes, and for testing parameterizations of those processes for use in atmospheric models. The design of the testbed, known as the Clouds and Radiation Testbed (CART), calls for five long-term field data collection sites. The first site, located in the Southern Great Plains (SGP) at Lamont, OK, began operation in the spring of 1992. The CART Data Environment (CDE) is the element of the testbed which acquires the basic observations from the instruments and processes them to meet the ARM requirements. A formal design was used to develop a description of the logical requirements for the CDE. This paper discusses the design and prototype implementation of a part of the CDE known as the site operations log, which records metadata defining the environment within which the data produced by the instruments are collected.

  14. Case-oriented computer-based-training in radiology: concept, implementation and evaluation

    Directory of Open Access Journals (Sweden)

    Helmberger Thomas

    2001-10-01

    Background: Providing high-quality clinical cases is important for teaching radiology. We developed, implemented and evaluated a program for a university hospital to support this task. Methods: The system was built with Intranet technology and connected to the Picture Archiving and Communications System (PACS). It contains cases for every user group from students to attendants and is structured according to the ACR code (American College of Radiology). Each department member was given an individual account, could gather his or her teaching cases, and put the completed cases into the common database. Results: During 18 months, 583 cases containing 4136 images involving all radiological techniques were compiled and 350 cases were put into the common case repository. Workflow integration as well as individual interest influenced the personal effort to participate, but an increasing number of cases and minor modifications of the program improved user acceptance continuously. 101 students went through an evaluation which showed a high level of acceptance and a special interest in elaborate documentation. Conclusion: Electronic access to reference cases for all department members, anytime and anywhere, is feasible. Critical success factors are workflow integration, reliability, efficient retrieval strategies and incentives for case authoring.

  15. Implementation of internet training on posture reform of computer users in iran.

    Science.gov (United States)

    Keykhaie, Zohreh; Zareban, Iraj; Shahrakipoor, Mahnaz; Hormozi, Maryam; Sharifi-Rad, Javad; Masoudi, Gholamreza; Rahimi, Fatemeh

    2014-12-01

    Musculoskeletal disorders are common problems among computer (PC) users. Training in posture reform plays a significant role in preventing the emergence, progression and complications of these disorders. The present research studied the effect of Internet-based training on the posture reform of PC users working in two Iranian universities, Sistan and Baluchestan University and the Islamic Azad University of Zahedan, in 2014. The study was a quasi-experimental intervention with a control group, conducted on 160 PC users divided into an intervention group (80 people) and a control group (80 people). A training PowerPoint presentation was sent to the intervention group through the Internet, and a post-test was given after 45 days. SPSS 19 and the Kolmogorov test, t-test, Fisher's exact test, and correlation coefficients were used for data analysis. After the training, the mean scores of knowledge, attitude, performance and self-efficacy in the intervention group were 24.21 ± 1.34, 38.36 ± 2.89, 7.59 ± 1.16, and 45.06 ± 4.11, respectively (P …). The Internet training had a significant impact on the posture reform of the PC users, and there was a significant relationship between the self-efficacy and performance scores after training. Therefore, based on the findings of the study, it is suggested that Internet training with a self-efficacy-increasing approach, delivered in successive periods, can be effective in reforming the postures of PC users.

  16. Time expenditure in computer aided time studies implemented for highly mechanized forest equipment

    Directory of Open Access Journals (Sweden)

    Elena Camelia Mușat

    2016-06-01

    Time studies represent important tools used in forest operations research to produce empirical models or to comparatively assess the performance of two or more operational alternatives, with the general aim of predicting operational behavior, choosing the most adequate equipment or eliminating useless time. There is a long tradition of collecting the needed data in a traditional fashion, but this approach has its limitations, and it is likely that the use of professional software in such work will be extended in the future, as tools of this kind have already been implemented. However, little to no information is available concerning the performance of data analysis tasks when using purpose-built professional time-study software in such research, while the resources needed to conduct time studies, including time itself, may be quite substantial. Our study aimed to model the relations between the variation of the time needed to analyze video-recorded time-study data and the variation of some measured independent variables for a complex organization of a work cycle. The results of our study indicate that the number of work elements separated within a work cycle, the delay-free cycle time and the software functionalities used during data analysis significantly affected the time expenditure needed to analyze the data (α = 0.01, p < 0.01). Under the conditions of this study, where the average duration of a work cycle was about 48 seconds and the number of separated work elements was about 14, the speed used to replay the video files significantly affected the mean time expenditure, which averaged about 273 seconds at half the real speed and about 192 seconds at an analyzing speed equal to the real speed. We argue that different study designs as well as the parameters used within the software are likely to produce

  17. Clinical Implementation of Intrafraction Cone Beam Computed Tomography Imaging During Lung Tumor Stereotactic Ablative Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ruijiang; Han, Bin; Meng, Bowen [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California (United States); Maxim, Peter G.; Xing, Lei; Koong, Albert C. [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California (United States); Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California (United States); Diehn, Maximilian, E-mail: Diehn@Stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California (United States); Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California (United States); Institute for Stem Cell Biology and Regenerative Medicine, Stanford University School of Medicine, Stanford, California (United States); Loo, Billy W., E-mail: BWLoo@Stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California (United States); Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California (United States)

    2013-12-01

    Purpose: To develop and clinically evaluate a volumetric imaging technique for assessing intrafraction geometric and dosimetric accuracy of stereotactic ablative radiation therapy (SABR). Methods and Materials: Twenty patients received SABR for lung tumors using volumetric modulated arc therapy (VMAT). At the beginning of each fraction, pretreatment cone beam computed tomography (CBCT) was used to align the soft-tissue tumor position with that in the planning CT. Concurrent with dose delivery, we acquired fluoroscopic radiograph projections during VMAT using the Varian on-board imaging system. Those kilovoltage projections acquired during megavoltage beam-on were automatically extracted, and intrafraction CBCT images were reconstructed using the filtered backprojection technique. We determined the time-averaged target shift during VMAT by calculating the center of mass of the tumor target in the intrafraction CBCT relative to the planning CT. To estimate the dosimetric impact of the target shift during treatment, we recalculated the dose to the GTV after shifting the entire patient anatomy according to the time-averaged target shift determined earlier. Results: The mean target shift from intrafraction CBCT to planning CT was 1.6, 1.0, and 1.5 mm; the 95th percentile shift was 5.2, 3.1, and 3.6 mm; and the maximum shift was 5.7, 3.6, and 4.9 mm along the anterior-posterior, left-right, and superior-inferior directions. Thus, the time-averaged intrafraction gross tumor volume (GTV) position was always within the planning target volume. We observed some degree of target blurring in the intrafraction CBCT, indicating imperfect breath-hold reproducibility or residual motion of the GTV during treatment. By our estimated dose recalculation, the GTV was consistently covered by the prescription dose (PD), that is, V100% above 0.97 for all patients, and minimum dose to GTV >100% PD for 18 patients and >95% PD for all patients. Conclusions: Intrafraction CBCT during VMAT can provide
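
    The time-averaged target shift described above amounts to comparing the centre of mass of the tumour in the intrafraction CBCT with that in the planning CT. The short Python sketch below illustrates that calculation on hypothetical binary masks and voxel spacing; it is not the clinical reconstruction or registration pipeline.

```python
# Minimal sketch of the time-averaged target-shift idea: the centre of mass of a
# tumour mask in an intrafraction CBCT is compared with that in the planning CT.
# Array shapes, spacing and masks are hypothetical, not the clinical pipeline.
import numpy as np

def center_of_mass_mm(mask, voxel_spacing_mm):
    """Centre of mass of a binary mask, in millimetres."""
    idx = np.argwhere(mask)                      # (N, 3) voxel indices
    return idx.mean(axis=0) * np.asarray(voxel_spacing_mm)

spacing = (1.0, 1.0, 2.0)                        # mm per voxel along each axis
planning_mask = np.zeros((64, 64, 32), dtype=bool)
planning_mask[30:34, 30:34, 14:18] = True        # GTV in the planning CT
intrafraction_mask = np.zeros_like(planning_mask)
intrafraction_mask[31:35, 30:34, 15:19] = True   # GTV seen during delivery

shift_mm = center_of_mass_mm(intrafraction_mask, spacing) - \
           center_of_mass_mm(planning_mask, spacing)
print("time-averaged target shift (mm):", shift_mm)
```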

  18. Models and methods for design and implementation of computer based control and monitoring systems for production cells

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk

    This dissertation is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer-based control and monitoring systems for production cells. The manufacturing environment and the current practice for engineering of cell control systems are described, and automation software enablers are discussed. A number of problems related to these issues are identified. In order to support engineering of cell control systems by the use of enablers, a generic cell control data model and an architecture are defined. Further, an engineering methodology is defined. The three elements enablers, architecture and methodology constitute the Cell Control Engineering concept, which is defined and evaluated...

  19. Methodology of problem-based learning engineering and technology and of its implementation with modern computer resources

    Science.gov (United States)

    Lebedev, A. A.; Ivanova, E. G.; Komleva, V. A.; Klokov, N. M.; Komlev, A. A.

    2017-01-01

    The considered method of learning the basics of microelectronic amplifier circuits and systems enables one to understand electrical processes more deeply, to grasp the relationship between static and dynamic characteristics and, finally, to bring the learning process closer to the cognitive process. The scheme of problem-based learning can be represented by the following sequence of procedures: a contradiction is perceived and revealed; cognitive motivation is provided by creating a problematic situation (the mental state of the student) that drives the desire to solve the problem and to ask the question "why?"; a hypothesis is made; searches for solutions are carried out; an answer is sought. Because of the complexity of the circuit architectures, modern methods of computer analysis and synthesis are also considered in the work. Examples are given of analog circuits with improved performance, designed by students in the framework of student research work using standard software and software developed at the Department of Microelectronics of MEPhI.

  20. Development and implementation of a critical pathway for prevention of adverse reactions to contrast media for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Keun Jo [Presbyterian Medical Center, Seoul (Korea, Republic of); Kweon, Dae Cheol; Kim, Myeong Goo [Seoul National University Hospital, Seoul (Korea, Republic of); Yoo, Beong Gyu [Wonkwang Health Science College, Iksan (Korea, Republic of)

    2007-03-15

    The purpose of this study is to develop a critical pathway (CP) for the prevention of adverse reactions to contrast media for computed tomography. The CP was developed and implemented by a multidisciplinary group at Seoul National University Hospital and applied to CT patients; patients who underwent CT scanning were included in the CP group from March 2004. The satisfaction of the CP patients was compared with that of the non-CP group, and we also investigated the degree of satisfaction among the radiological technologists and nurses. Patient satisfaction with the care process increased with respect to patient information (24%), prevention of adverse reactions to contrast media (19%), prior awareness of adverse reactions to contrast media (39%) and information about adverse reactions to contrast media (19%). This CP program can be used as one of the patient care tools for reducing adverse reactions to contrast media and increasing the efficiency of the care process in CT examination settings.

  1. Development and implementation of a critical pathway for prevention of adverse reactions to contrast media for computed tomography

    International Nuclear Information System (INIS)

    Jang, Keun Jo; Kweon, Dae Cheol; Kim, Myeong Goo; Yoo, Beong Gyu

    2007-01-01

    The purpose of this study is to develop a critical pathway (CP) for the prevention of adverse reactions to contrast media for computed tomography. The CP was developed and implemented by a multidisciplinary group at Seoul National University Hospital and applied to CT patients; patients who underwent CT scanning were included in the CP group from March 2004. The satisfaction of the CP patients was compared with that of the non-CP group, and we also investigated the degree of satisfaction among the radiological technologists and nurses. Patient satisfaction with the care process increased with respect to patient information (24%), prevention of adverse reactions to contrast media (19%), prior awareness of adverse reactions to contrast media (39%) and information about adverse reactions to contrast media (19%). This CP program can be used as one of the patient care tools for reducing adverse reactions to contrast media and increasing the efficiency of the care process in CT examination settings.

  2. What is needed to implement a computer-assisted health risk assessment tool? An exploratory concept mapping study

    Directory of Open Access Journals (Sweden)

    Ahmad Farah

    2012-12-01

    Background: Emerging eHealth tools could facilitate the delivery of comprehensive care in time-constrained clinical settings. One such tool is interactive computer-assisted health-risk assessment (HRA), which may improve provider-patient communication at the point of care, particularly for psychosocial health concerns, which remain under-detected in clinical encounters. The research team explored the perspectives of healthcare providers representing a variety of disciplines (physicians, nurses, social workers, allied staff) regarding the factors required for implementation of an interactive HRA on psychosocial health. Methods: The research team employed a semi-qualitative participatory method known as Concept Mapping, which involved three distinct phases. First, in face-to-face and online brainstorming sessions, participants responded to an open-ended central question: "What factors should be in place within your clinical setting to support an effective computer-assisted screening tool for psychosocial risks?" The brainstormed items were consolidated by the research team. Then, in face-to-face and online sorting sessions, participants grouped the items thematically as 'it made sense to them'. Participants also rated each item on a 5-point scale for its 'importance' and 'action feasibility' over the ensuing six-month period. The sorted and rated data were analyzed using multidimensional scaling and hierarchical cluster analyses, which produced visual maps. In the third and final phase, the face-to-face interpretation sessions, the concept maps were discussed and illuminated by participants collectively. Results: Overall, 54 providers participated (emergency care 48%; primary care 52%). Participants brainstormed 196 items thought to be necessary for the implementation of an interactive HRA emphasizing psychosocial health. These were consolidated by the research team into 85 items. After sorting and rating, cluster analysis

  3. What is needed to implement a computer-assisted health risk assessment tool? An exploratory concept mapping study.

    Science.gov (United States)

    Ahmad, Farah; Norman, Cameron; O'Campo, Patricia

    2012-12-19

    Emerging eHealth tools could facilitate the delivery of comprehensive care in time-constrained clinical settings. One such tool is interactive computer-assisted health-risk assessments (HRA), which may improve provider-patient communication at the point of care, particularly for psychosocial health concerns, which remain under-detected in clinical encounters. The research team explored the perspectives of healthcare providers representing a variety of disciplines (physicians, nurses, social workers, allied staff) regarding the factors required for implementation of an interactive HRA on psychosocial health. The research team employed a semi-qualitative participatory method known as Concept Mapping, which involved three distinct phases. First, in face-to-face and online brainstorming sessions, participants responded to an open-ended central question: "What factors should be in place within your clinical setting to support an effective computer-assisted screening tool for psychosocial risks?" The brainstormed items were consolidated by the research team. Then, in face-to-face and online sorting sessions, participants grouped the items thematically as 'it made sense to them'. Participants also rated each item on a 5-point scale for its 'importance' and 'action feasibility' over the ensuing six month period. The sorted and rated data was analyzed using multidimensional scaling and hierarchical cluster analyses which produced visual maps. In the third and final phase, the face-to-face Interpretation sessions, the concept maps were discussed and illuminated by participants collectively. Overall, 54 providers participated (emergency care 48%; primary care 52%). Participants brainstormed 196 items thought to be necessary for the implementation of an interactive HRA emphasizing psychosocial health. These were consolidated by the research team into 85 items. After sorting and rating, cluster analysis revealed a concept map with a seven-cluster solution: 1) the HRA

  4. GPCALMA: implementation in Italian hospitals of a computer aided detection system for breast lesions by mammography examination.

    Science.gov (United States)

    Lauria, Adele

    2009-06-01

    We describe the implementation in several Italian hospitals of a computer aided detection (CAD) system, named GPCALMA (grid platform for a computer aided library in mammography), for the automatic search of lesions in X-ray mammographies. GPCALMA has been under development since 1999 by a community of physicists of the Italian National Institute for Nuclear Physics (INFN) in collaboration with radiologists. This CAD system was tested as a support to radiologists in reading mammographies. The main system components are: (i) the algorithms implemented for the analysis of digitized mammograms to recognize suspicious lesions, (ii) the database of digitized mammographic images, and (iii) the PC-based digitization and analysis workstation and its user interface. The distributed nature of data and resources and the prevalence of geographically remote users suggested the development of the system as a grid application: the design of this networked version is also reported. The paper describes the system architecture, the database of digitized mammographies, the clinical workstation and the medical applications carried out to characterize the system. A commercial CAD was evaluated in a comparison with GPCALMA by analysing the medical reports obtained with and without the two different CADs on the same dataset of images: with both CADs a statistically significant increase in sensitivity was obtained. The sensitivity in the detection of lesions obtained for microcalcifications and masses was 96% and 80%, respectively. An analysis in terms of receiver operating characteristic (ROC) curve was performed for massive lesion searches, achieving an area under the ROC curve of Az = 0.783 ± 0.008. Results show that the GPCALMA CAD is ready to be used in the radiological practice, both for screening mammography and clinical studies. GPCALMA is a starting point for the development of other medical imaging applications such as the CAD for the search of pulmonary nodules, currently under

  5. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 2: Computational implementation and first results

    Science.gov (United States)

    Peruzza, Laura; Azzaro, Raffaele; Gee, Robin; D'Amico, Salvatore; Langer, Horst; Lombardo, Giuseppe; Pace, Bruno; Pagani, Marco; Panzera, Francesco; Ordaz, Mario; Suarez, Miguel Leonardo; Tusa, Giuseppina

    2017-11-01

    This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude-scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10 % probability of exceedance in 5 and 30 years, Poisson and time dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for the densely inhabited Etna's eastern flank, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources which control the hazard at long return periods. However, by focusing on the impact of M risk reduction.

  6. Computing the Stackelberg/Nash equilibria using the extraproximal method: Convergence analysis and implementation details for Markov chains games

    Directory of Open Access Journals (Sweden)

    Trejo Kristal K.

    2015-06-01

    In this paper we present the extraproximal method for computing the Stackelberg/Nash equilibria in a class of ergodic controlled finite Markov chains games. We exemplify the original game formulation in terms of coupled nonlinear programming problems implementing the Lagrange principle. In addition, Tikhonov's regularization method is employed to ensure the convergence of the cost-functions to a Stackelberg/Nash equilibrium point. Then, we transform the problem into a system of equations in the proximal format. We present a two-step iterated procedure for solving the extraproximal method: (a) the first step (the extra-proximal step) consists of a "prediction" which calculates the preliminary position approximation to the equilibrium point, and (b) the second step is designed to find a "basic adjustment" of the previous prediction. The procedure is called the "extraproximal method" because of the use of an extrapolation. Each equation in this system is an optimization problem for which the necessary and sufficient condition for a minimum is solved using a quadratic programming method. This solution approach provides a drastically quicker rate of convergence to the equilibrium point. We present the analysis of the convergence as well as the rate of convergence of the method, which is one of the main results of this paper. Additionally, the extraproximal method is developed in terms of Markov chains for Stackelberg games. Our goal is to analyze completely a three-player Stackelberg game consisting of a leader and two followers. We provide all the details needed to implement the extraproximal method in an efficient and numerically stable way. For instance, a numerical technique is presented for computing the first-step parameter (λ) of the extraproximal method. The usefulness of the approach is successfully demonstrated by a numerical example related to a pricing oligopoly model for airline companies.
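
    The "prediction plus basic adjustment" structure of the extraproximal method is closely related to the classical extragradient scheme. As a stand-in illustration only, the Python sketch below applies a projected extragradient iteration to a small bilinear (matrix) game; the game data, step size and projection are hypothetical, and the code does not reproduce the authors' Markov-chain formulation or Tikhonov regularization.

```python
# Minimal extragradient-style "prediction + adjustment" sketch for a bilinear
# saddle-point problem min_x max_y x^T A y (an illustrative stand-in; it is not
# the authors' Markov-chain game implementation and uses hypothetical data).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
x = np.ones(3) / 3.0
y = np.ones(3) / 3.0
step = 0.1

def project_simplex(v):
    """Euclidean projection onto the probability simplex (strategies are distributions)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

for _ in range(2000):
    # (a) prediction ("extra-proximal" step): a preliminary position approximation
    x_pred = project_simplex(x - step * (A @ y))
    y_pred = project_simplex(y + step * (A.T @ x))
    # (b) basic adjustment: re-evaluate the gradients at the predicted point
    x = project_simplex(x - step * (A @ y_pred))
    y = project_simplex(y + step * (A.T @ x_pred))

print("approximate equilibrium strategies:", x, y)
```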

  7. Computational methods and implementation of the 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction

    International Nuclear Information System (INIS)

    Aragones, J.M.; Ahnert, C.

    1995-01-01

    New computational methods have been developed in our 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction. They improve the accuracy and efficiency of the coupled neutronic-thermalhydraulic solution and extend its scope to provide, mainly, the calculation of: the fission reaction rates at the incore mini-detectors; the responses at the excore detectors (power range); the temperatures at the thermocouple locations; and the in-vessel distribution of the loop cold-leg inlet coolant conditions in the reflector and core channels, and to the hot-leg outlets per loop. The functional capabilities implemented in the extended SIMTRAN code for online utilization include: online surveillance, incore-excore calibration, evaluation of peak power factors and thermal margins, nominal update and cycle follow, prediction of maneuvers and diagnosis of fast transients and oscillations. The new code has been installed at the Vandellos-II PWR unit in Spain, since the startup of its cycle 7 in mid-June, 1994. The computational implementation has been performed on HP-700 workstations under the HP-UX Unix system, including the machine-man interfaces for online acquisition of measured data and interactive graphical utilization, in C and X11. The agreement of the simulated results with the measured data, during the startup tests and first months of actual operation, is well within the accuracy requirements. The performance and usefulness shown during the testing and demo phase, to be extended along this cycle, has proved that SIMTRAN and the man-machine graphic user interface have the qualities for a fast, accurate, user friendly, reliable, detailed and comprehensive online core surveillance and prediction

  8. Towards high performance computing for molecular structure prediction using IBM Cell Broadband Engine--an implementation perspective.

    Science.gov (United States)

    Krishnan, S P T; Liang, Sim Sze; Veeravalli, Bharadwaj

    2010-01-18

    The RNA structure prediction problem is a computationally complex task, especially with pseudo-knots. The problem is well studied in the existing literature and predominantly uses highly coupled Dynamic Programming (DP) solutions. The problem's scale and complexity become enormous as sequence size increases. This makes the case for parallelization. Parallelization can be achieved by way of networked platforms (clusters, grids, etc.) as well as modern-day multi-core chips. In this paper, we exploit the parallelism capabilities of the IBM Cell Broadband Engine to parallelize an existing Dynamic Programming (DP) algorithm for RNA secondary structure prediction. We design three different implementation strategies that exploit the inherent data, code and/or hybrid parallelism, referred to as C-Par, D-Par and H-Par, and analyze their performances. Our approach attempts to introduce parallelism in critical sections of the algorithm. We ran our experiments on the Sony PlayStation 3 (PS3), which is based on the IBM Cell chip. Our results suggest that introducing parallelism in the DP algorithm allows it to easily handle longer sequences which otherwise would consume a large amount of time on single-core computers. The results further demonstrate the speed-up gained by exploiting the inherent parallelism in the problem and also highlight the advantages of using multi-core platforms for designing more sophisticated methodologies for handling fairly long RNA sequences. The speed-up performance reported here is promising, especially when the sequence length is long. To the best of our knowledge from the literature survey, the work reported in this paper is probably the first of its kind to utilize the IBM Cell Broadband Engine (a heterogeneous multi-core chip) to implement such a DP. The results also encourage using multi-core platforms towards designing more sophisticated methodologies for handling a fairly long sequence of RNA to predict its secondary structure.
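
    As a small, self-contained example of the kind of highly coupled dynamic programme discussed above, the Python sketch below implements the classical Nussinov base-pair maximisation recurrence for nested (pseudo-knot-free) RNA secondary structure. It is a serial stand-in for illustration, not the authors' algorithm or their Cell Broadband Engine parallelization.

```python
# Minimal Nussinov-style dynamic programme for RNA secondary structure
# (base-pair maximisation, no pseudo-knots) -- a simple serial stand-in for the
# kind of coupled DP discussed above, not the authors' Cell BE implementation.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs with a minimum hairpin-loop length."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # widen the subsequence [i, j]
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                  # i left unpaired
            if (seq[i], seq[j]) in PAIRS:        # i pairs with j
                best = max(best, dp[i + 1][j - 1] + 1)
            for k in range(i + 1, j):            # bifurcation into two sub-structures
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))           # small illustrative sequence -> 3 pairs
```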

  9. Computer-implemented land use classification with pattern recognition software and ERTS digital data. [Mississippi coastal plains

    Science.gov (United States)

    Joyce, A. T.

    1974-01-01

    Significant progress has been made in the classification of surface conditions (land uses) with computer-implemented techniques based on the use of ERTS digital data and pattern recognition software. The supervised technique presently used at the NASA Earth Resources Laboratory is based on maximum likelihood ratioing with a digital table look-up approach to classification. After classification, colors are assigned to the various surface conditions (land uses) classified, and the color-coded classification is film recorded on either positive or negative 9 1/2 in. film at the scale desired. Prints of the film strips are then mosaicked and photographed to produce a land use map in the format desired. Computer extraction of statistical information is performed to show the extent of each surface condition (land use) within any given land unit that can be identified in the image. Evaluations of the product indicate that classification accuracy is well within the limits for use by land resource managers and administrators. Classifications performed with digital data acquired during different seasons indicate that the combination of two or more classifications offer even better accuracy.
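
    The maximum-likelihood classification mentioned above can be sketched, under simplifying Gaussian assumptions, as follows; the training statistics, band values and class names are hypothetical, and the digital table look-up optimisation used at the NASA Earth Resources Laboratory is not reproduced.

```python
# Minimal per-pixel Gaussian maximum-likelihood classification sketch in the
# spirit of the supervised technique described above (training statistics,
# band values and class names here are hypothetical).
import numpy as np

def train_class_stats(samples_by_class):
    """Mean vector and covariance matrix per class from training pixels."""
    return {name: (s.mean(axis=0), np.cov(s, rowvar=False))
            for name, s in samples_by_class.items()}

def classify_pixel(x, stats):
    """Assign the class with the largest Gaussian log-likelihood."""
    best_name, best_score = None, -np.inf
    for name, (mu, cov) in stats.items():
        diff = x - mu
        score = -0.5 * (np.log(np.linalg.det(cov))
                        + diff @ np.linalg.solve(cov, diff))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

rng = np.random.default_rng(2)
training = {
    "water":  rng.normal([20, 15, 10, 5], 2.0, size=(100, 4)),   # 4 spectral bands
    "forest": rng.normal([30, 40, 25, 60], 3.0, size=(100, 4)),
    "urban":  rng.normal([60, 55, 50, 45], 4.0, size=(100, 4)),
}
stats = train_class_stats(training)
print(classify_pixel(np.array([32, 41, 24, 58]), stats))          # -> "forest"
```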

  10. Development and implementation of a low-cost phantom for quality control in cone beam computed tomography

    International Nuclear Information System (INIS)

    Batista, W. O.; Navarro, M. V. T.; Maia, A. F.

    2013-01-01

    A phantom for quality control in cone beam computed tomography (CBCT) scanners was designed and constructed, and a methodology for testing was developed. The phantom had a polymethyl methacrylate structure filled with water and plastic objects that allowed the assessment of parameters related to quality control. The phantom allowed the evaluation of essential parameters in CBCT as well as the evaluation of linear and angular dimensions. The plastics used in the phantom were chosen so that their density and linear attenuation coefficient were similar to those of human facial structures. Three types of CBCT equipment, with two different technological concepts, were evaluated. The results of the assessment of the accuracy of linear and angular dimensions agreed with the existing standards. However, other parameters such as computed tomography number accuracy, uniformity and high-contrast detail did not meet the tolerances established in current regulations or the manufacturer's specifications. The results demonstrate the importance of establishing specific protocols and phantoms, which meet the specificities of CBCT. The practicality of implementation, the quality control test results for the proposed phantom and the consistency of the results using different equipment demonstrate its adequacy. (authors)

  11. Implementation of a Quadrature Mirror Filter Bank on an SRC Reconfigurable Computer for Real-Time Signal Processing

    National Research Council Canada - National Science Library

    Stoffell, Kevin M

    2006-01-01

    .... The physical connections and signaling specifications for connecting an Analog to Digital converter to a Reconfigurable Computer system manufactured by SRC Computers Incorporated are discussed...

  12. COMPUTING

    CERN Document Server

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact and addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  15. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  16. The Anyang Esophageal Cancer Cohort Study: study design, implementation of fieldwork, and use of computer-aided survey system.

    Directory of Open Access Journals (Sweden)

    Fangfang Liu

    BACKGROUND: Human papillomavirus (HPV) has been observed repeatedly in esophageal squamous cell carcinoma (ESCC) tissues. However, the causal relationship between HPV infection and the onset of ESCC remains unknown. A large cohort study focusing on this topic is being carried out in rural Anyang, China. METHODOLOGY/PRINCIPAL FINDINGS: The Anyang Esophageal Cancer Cohort Study (AECCS) is a population-based prospective endoscopic cohort study designed to investigate the association of HPV infection and ESCC. This paper provides information regarding the design and implementation of this study. In particular we describe the recruitment strategies and quality control procedures which have been put into place, and the custom-designed computer-aided survey system (CASS) used for this project. This system integrates barcode technology and unique identification numbers, and has been developed to facilitate real-time data management throughout the workflow using a wireless local area network. A total of 8,112 (75.3%) of invited subjects participated in the baseline endoscopic examination; of those invited two years later to take part in the first cycle of follow-up, 91.9% have complied. CONCLUSIONS/SIGNIFICANCE: The AECCS study has high potential for evaluating the causal relationship between HPV infection and the occurrence of ESCC. The experience in setting up the AECCS may be beneficial for others planning to initiate similar epidemiological studies in developing countries.

  17. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    Science.gov (United States)

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on caregivers, and improve quality of life. Besides ease of use, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose Internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in Internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input of the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server enables successful Internet-based wireless control of electrical home appliances through BCIs.

  18. Implementation of Service Learning and Civic Engagement for Students of Computer Information Systems through a Course Project at the Hashemite University

    Science.gov (United States)

    Al-Khasawneh, Ahmad; Hammad, Bashar K.

    2015-01-01

    Service learning methodologies provide students of information systems with the opportunity to create and implement systems in real-world, public service-oriented social contexts. This paper presents a case study which involves integrating a service learning project into an undergraduate Computer Information Systems course entitled…

  19. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much large at roughly 11 MB per event of RAW. The central collisions are more complex and...

  1. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong

    2015-08-01

    Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ~33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits/min. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
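
    A minimal Python sketch of the FBCCA scoring step is given below: the EEG epoch is band-pass filtered into sub-bands, each sub-band is correlated with sine/cosine references at a candidate frequency and its harmonics via CCA, and the squared correlations are combined with weights of the form w(n) = n^(-a) + b. The filter design, weights, channel count and data here are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal filter bank CCA (FBCCA) sketch for SSVEP frequency detection.
# Filter design, weights and data are illustrative, not the paper's exact setup.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250.0  # sampling rate in Hz (hypothetical)

def references(freq, n_samples, n_harmonics=5):
    """Sine/cosine reference signals at freq and its harmonics."""
    t = np.arange(n_samples) / FS
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])

def cca_corr(x, y):
    """Largest canonical correlation between two multichannel signals."""
    u, v = CCA(n_components=1).fit_transform(x, y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def fbcca_score(eeg, freq, n_bands=5, a=1.25, b=0.25):
    """eeg: (n_samples, n_channels) epoch; returns the combined FBCCA score."""
    ref = references(freq, eeg.shape[0])
    score = 0.0
    for n in range(1, n_bands + 1):
        low = 8.0 * n                     # sub-band n covers roughly [8*n, 88] Hz (M3-like)
        bb, ab = butter(4, [low, 88.0], btype="bandpass", fs=FS)
        sub = filtfilt(bb, ab, eeg, axis=0)
        score += (n ** -a + b) * cca_corr(sub, ref) ** 2
    return score

# Pick the stimulus frequency with the highest combined score.
eeg = np.random.default_rng(3).normal(size=(int(FS), 8))   # 1 s of 8-channel noise
candidates = np.arange(8.0, 15.81, 0.2)
print("detected frequency:",
      candidates[np.argmax([fbcca_score(eeg, f) for f in candidates])])
```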

  2. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  9. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  10. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  11. COMPUTING

    CERN Document Server

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month (Figure 1). The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks at close to 1.2 PB (Figure 2). Figure 3 shows the volume of data moved between CMS sites in the last six months. The tape utilisation was a focus for the operation teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2 shows the number of events per month for 2012. Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  15. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions The Heavy Ions group has been actively analysing data and preparing for conferences.  Operations Office Figure 6 shows the transfers from all sites in the last 90 days. For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS components are now also installed and deployed at CERN, adding to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  17. Can Teachers in Primary Education Implement a Metacognitive Computer Programme for Word Problem Solving in Their Mathematics Classes?

    Science.gov (United States)

    de Kock, Willem D.; Harskamp, Egbert G.

    2014-01-01

    Teachers in primary education experience difficulties in teaching word problem solving in their mathematics classes. However, during controlled experiments with a metacognitive computer programme, students' problem-solving skills improved. Also without the supervision of researchers, metacognitive computer programmes can be beneficial in a natural…

  18. Implementation of the Lucas-Kanade image registration algorithm on a GPU for 3D computational platform stabilisation

    CSIR Research Space (South Africa)

    Duvenhage, B

    2010-06-01

    Full Text Available. Real-time dense and accurate parallel optical flow using CUDA. In Proceedings of the 17th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2009), 105–111. OWENS, J. D., HOUSTON, M., LUEBKE, D...
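
    The indexed abstract above is fragmentary, so as an illustration of the registration step named in the record title, here is a minimal, single-level Lucas-Kanade translation estimate in NumPy. It is a sketch only: the function name, the crude integer warp, and the stopping tolerance are assumptions, and it does not reproduce the CSIR GPU implementation.

```python
import numpy as np

def lucas_kanade_shift(ref, cur, iters=10, tol=1e-3):
    """Estimate the (dy, dx) translation aligning `cur` to `ref` (single level)."""
    ref = ref.astype(float)
    cur = cur.astype(float)
    gy, gx = np.gradient(ref)                       # template image gradients
    A = np.stack([gy.ravel(), gx.ravel()], axis=1)  # N x 2 Jacobian
    AtA = A.T @ A                                   # 2 x 2 normal-equation matrix
    p = np.zeros(2)                                 # running shift estimate (dy, dx)
    for _ in range(iters):
        # crude integer warp of the current frame by the running estimate
        warped = np.roll(np.roll(cur, int(round(p[0])), axis=0),
                         int(round(p[1])), axis=1)
        err = (ref - warped).ravel()
        dp = np.linalg.solve(AtA, A.T @ err)        # Gauss-Newton update
        p += dp
        if np.linalg.norm(dp) < tol:
            break
    return p
```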

  19. An A.P.L. micro-programmed machine: implementation on a Multi-20 mini-computer, memory organization, micro-programming and flowcharts

    International Nuclear Information System (INIS)

    Granger, Jean-Louis

    1975-01-01

    This work deals with the presentation of an APL interpreter implemented on a MULTI 20 mini-computer. It includes a left-to-right syntax analyser and a recursive routine for generation and execution. This routine uses a beating method for array processing. Moreover, during the execution of all APL statements, dynamic memory allocation is used. Execution of basic operations has been micro-programmed. The basic APL interpreter has a length of 10 K bytes. It uses overlay methods. (author) [fr

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4-times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission all data transfer routes in the CMS PhEDEx topology and to keep them exercised. Since mid-February, a transfer volume of about 12 P...

  1. MPI + OpenCL implementation of a phase-field method incorporating CALPHAD description of Gibbs energies on heterogeneous computing platforms

    Science.gov (United States)

    Gerald Tennyson, P.; G. M., Karthik; Phanikumar, G.

    2015-01-01

    The phase-field method uses a non-conserved order parameter to define the phase state of a system and is a versatile method for moving boundary problems. It is a method of choice for simulating microstructure evolution in the domain of materials engineering. The solution of phase-field evolution equations avoids explicit tracking of interfaces and is often implemented on a structured grid to capture microstructure evolution in a simple and elegant manner. Restrictions on the grid size needed to accurately capture interface curvature effects lead to a large number of grid points in the computational domain and render realistic 3D simulations computationally intensive. However, the availability of powerful heterogeneous computing platforms and super clusters makes it possible to perform large-scale phase-field simulations efficiently. This paper discusses a portable implementation that extends simulations across multiple CPUs using MPI and includes the use of GPUs through OpenCL. The solution scheme adopts an isotropic stencil that avoids grid-induced anisotropy. The use of separate OpenCL kernels for problem-specific portions of the code ensures that the approach can be extended to different problems. Performance analysis of the parallel strategies used in the study illustrates the potential for massively parallel phase-field simulations across heterogeneous platforms.
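
    As a purely serial illustration of the kind of update such a solver performs, here is a minimal explicit phase-field (Allen-Cahn-type) step on a structured grid with a 9-point isotropic Laplacian stencil, written in NumPy. It is a sketch under stated assumptions: the double-well free energy, the parameter values, and the periodic boundaries are illustrative, and the MPI/OpenCL decomposition described in the paper is not reproduced.

```python
import numpy as np

def isotropic_laplacian(phi, dx):
    """9-point Laplacian stencil, which reduces grid-induced anisotropy."""
    r = np.roll
    nearest = r(phi, 1, 0) + r(phi, -1, 0) + r(phi, 1, 1) + r(phi, -1, 1)
    diagonal = (r(r(phi, 1, 0), 1, 1) + r(r(phi, 1, 0), -1, 1) +
                r(r(phi, -1, 0), 1, 1) + r(r(phi, -1, 0), -1, 1))
    return (4.0 * nearest + diagonal - 20.0 * phi) / (6.0 * dx * dx)

def phase_field_step(phi, dt=1e-5, dx=1e-2, eps=1e-2, barrier=1.0):
    """One explicit update of d(phi)/dt = eps^2 * lap(phi) - dW/dphi,
    with the double-well potential W(phi) = barrier * phi^2 * (1 - phi)^2."""
    dW = 2.0 * barrier * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
    return phi + dt * (eps**2 * isotropic_laplacian(phi, dx) - dW)

# usage: relax a random initial field on a periodic 128 x 128 grid
phi = np.random.default_rng(0).random((128, 128))
for _ in range(200):
    phi = phase_field_step(phi)
```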

  2. Designing and Implementing a Computational Methods Course for Upper-level Undergraduates and Postgraduates in Atmospheric and Oceanic Sciences

    Science.gov (United States)

    Nelson, E.; L'Ecuyer, T. S.; Douglas, A.; Hansen, Z.

    2017-12-01

    In the modern computing age, scientists must utilize a wide variety of skills to carry out scientific research. Programming, including a focus on collaborative development, has become more prevalent in both academic and professional career paths. Faculty in the Department of Atmospheric and Oceanic Sciences at the University of Wisconsin—Madison recognized this need and recently approved a new course offering for undergraduates and postgraduates in computational methods that was first held in Spring 2017. Three programming languages were covered in the inaugural course semester and development themes such as modularization, data wrangling, and conceptual code models were woven into all of the sections. In this presentation, we will share successes and challenges in developing a research project-focused computational course that leverages hands-on computer laboratory learning and open-sourced course content. Improvements and changes in future iterations of the course based on the first offering will also be discussed.

  3. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  4. DREAMS and IMAGE: A Model and Computer Implementation for Concurrent, Life-Cycle Design of Complex Systems

    Science.gov (United States)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.

  5. Implementation of the RS232 communication trainer using computers and the ATMEGA microcontroller for interface engineering Courses

    Science.gov (United States)

    Amelia, Afritha; Julham; Viyata Sundawa, Bakti; Pardede, Morlan; Sutrisno, Wiwinta; Rusdi, Muhammad

    2017-09-01

    RS232 serial communication is a communication system between computers and microcontrollers. This communication is studied in the Department of Electrical Engineering and the Department of Computer Engineering and Informatics at Politeknik Negeri Medan. Recently, a simulation application was installed on the computers used for the teaching and learning process. The drawback of this system is that it does not provide a hands-on communication method between learner and trainer. Therefore, this study created a method of ten stages, divided into seven steps and three major phases, namely the analysis of potential problems and data collection, trainer design, and empirical testing and revision. After that, the trainer and module were tested in order to get feedback from the learners. The results showed that 70.10% of the feedback from the learner questionnaire rated the trainer as broadly reasonable.

  6. Implementation of Web-Based Education in Egypt through Cloud Computing Technologies and Its Effect on Higher Education

    Science.gov (United States)

    El-Seoud, M. Samir Abou; El-Sofany, Hosam F.; Taj-Eddin, Islam A. T. F.; Nosseir, Ann; El-Khouly, Mahmoud M.

    2013-01-01

    The information technology educational programs at most universities in Egypt face many obstacles that can be overcome using technology enhanced learning. An open source Moodle eLearning platform has been implemented at many public and private universities in Egypt, as an aid to deliver e-content and to provide the institution with various…

  7. Using Innovative Tools to Teach Computer Application to Business Students--A Hawthorne Effect or Successful Implementation Here to Stay

    Science.gov (United States)

    Khan, Zeenath Reza

    2014-01-01

    A year after the primary study that tested the impact of introducing blended learning and guided discovery to help teach computer application to business students, this paper looks into the continued success of using guided discovery and blended learning with learning management system in and out of classrooms to enhance student learning.…

  8. Districts' Efforts for Data Use and Computer Data Systems: The Role of Sensemaking in System Use and Implementation

    Science.gov (United States)

    Cho, Vincent; Wayman, Jeffrey C.

    2014-01-01

    Background: Increasingly, teachers and other educators are expected to leverage data in making educational decisions. Effective data use is difficult, if not impossible, without computer data systems. Nonetheless, these systems may be underused or even rejected by teachers. One potential explanation for such troubles may relate to how teachers…

  9. The evaluation of a national research plan to support the implementation of computers in education in The Netherlands (ED 310737)

    NARCIS (Netherlands)

    Moonen, J.C.M.M.; Collis, Betty; Koster, Klaas

    1990-01-01

    This paper describes the evolution of a national research plan for computers and education, an approach which was initiated in the Netherlands in 1983. Two phases can be recognized in the Dutch experience: one from 1984 until 1988 and one from 1989 until 1992. Building upon the experiences of the

  10. Implementation and Evaluation of Flipped Classroom as IoT Element into Learning Process of Computer Network Education

    Science.gov (United States)

    Zhamanov, Azamat; Yoo, Seong-Moo; Sakhiyeva, Zhulduz; Zhaparov, Meirambek

    2018-01-01

    Students nowadays are hard to be motivated to study lessons with traditional teaching methods. Computers, smartphones, tablets and other smart devices disturb students' attentions. Nevertheless, those smart devices can be used as auxiliary tools of modern teaching methods. In this article, the authors review two popular modern teaching methods:…

  11. Feasibility Study and Cost Benefit Analysis of Thin-Client Computer System Implementation Onboard United States Navy Ships

    Science.gov (United States)

    2007-06-01

    outdated CRT monitors. The TCN servers often use a Citrix/UNIX model of data exchange ... Desktop connection with a Citrix or Microsoft Terminal Server. Thin clients are always connected directly to a monitor for video output and a ... the Citrix Independent Computing Architecture (ICA) to display a Windows virtual ...

  12. Development, Implementation, and Outcomes of an Equitable Computer Science After-School Program: Findings from Middle-School Students

    Science.gov (United States)

    Mouza, Chrystalla; Marzocchi, Alison; Pan, Yi-Cheng; Pollock, Lori

    2016-01-01

    Current policy efforts that seek to improve learning in science, technology, engineering, and mathematics (STEM) emphasize the importance of helping all students acquire concepts and tools from computer science that help them analyze and develop solutions to everyday problems. These goals have been generally described in the literature under the…

  13. Putting all that (HEP-) data to work - a REAL implementation of an unlimited computing and storage architecture

    International Nuclear Information System (INIS)

    Ernst, Michael

    1996-01-01

    Since computing in HEP left the mainframe path, many institutions have demonstrated a successful migration to workstation-based computing, especially for applications requiring a high CPU-to-I/O ratio. However, the difficulties and the complexity start beyond just providing CPU cycles. Critical applications, requiring either sequential access to large amounts of data or access to many small sets out of a multi-10-Terabyte data repository, need technical approaches we have not had so far. Though we felt that we were hardly able to follow the technology evolving in the various fields, we recently had to realize that even politics overtook technical evolution, at least in the areas mentioned above. The USA is making peace with Russia; DEC is talking to IBM; SGI is communicating with HP. All these things became true. Unfortunately, the Cold War lasted 50 years, and, in a relative sense, 50 years seemed to be how long any self-respecting high-performance computer (or set of workstations) had to wait for data from its server; fortunately, we are now seeing similar progress towards friendliness, harmony and balance in the formerly problematic (computing) areas. Buzzwords mentioned many thousands of times in talks describing today's and future requirements, including Functionality, Reliability, Scalability, Modularity and Portability, are not just phrases, wishes and dreams any longer. At DESY, we are in the process of demonstrating an architecture that takes those five issues equally into consideration, including heterogeneous computing platforms with ultimate file system approaches, heterogeneous mass storage devices and an Open Distributed Hierarchical Mass Storage Management System. This contribution will provide an overview of how far we have got and what the next steps will be. (author)

  14. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  15. Costs associated with implementation of computer-assisted clinical decision support system for antenatal and delivery care: case study of Kassena-Nankana district of northern Ghana.

    Science.gov (United States)

    Dalaba, Maxwell Ayindenaba; Akweongo, Patricia; Williams, John; Saronga, Happiness Pius; Tonchev, Pencho; Sauerborn, Rainer; Mensah, Nathan; Blank, Antje; Kaltschmidt, Jens; Loukanova, Svetla

    2014-01-01

    This study analyzed the cost of implementing a computer-assisted Clinical Decision Support System (CDSS) in selected health care centres in Ghana. A descriptive cross-sectional study was conducted in the Kassena-Nankana district (KND). CDSS was deployed in selected health centres in KND as an intervention to manage patients attending antenatal clinics and the labour ward. The CDSS users were mainly nurses who were trained. Activities and associated costs involved in the implementation of CDSS (pre-intervention and intervention) were collected for the period between 2009 and 2013 from the provider perspective. The ingredients approach was used for the cost analysis. Costs were grouped into personnel, training, overheads (recurrent costs) and equipment (capital costs). We calculated cost without annualizing capital cost to represent financial cost and cost with annualizing capital cost to represent economic cost. Twenty-two trained CDSS users (at least 2 users per health centre) participated in the study. Between April 2012 and March 2013, users managed 5,595 antenatal clients and 872 labour clients using the CDSS. We observed a decrease in the proportion of complications during delivery (pre-intervention 10.74% versus post-intervention 9.64%) and a reduction in the number of maternal deaths (pre-intervention 4 deaths versus post-intervention 1 death). The overall financial cost of CDSS implementation was US$23,316, approximately US$1,060 per CDSS user trained. Of the total cost of implementation, 48% (US$11,272) was pre-intervention cost and 52% (US$12,044) was intervention cost. Equipment costs accounted for the largest proportion of financial cost: 34% (US$7,917). When economic cost was considered, the total cost of implementation was US$17,128, which is lower than the financial cost by 26.5%. The study provides useful information on the implementation of CDSS at health facilities to enhance health workers' adherence to practice guidelines and taking accurate decisions to improve
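
    As a quick worked check of the figures quoted above (a sketch only; the variable names are ours and the underlying ingredient lists are not reproduced), the reported percentages follow directly from the dollar amounts:

```python
# Worked check of the cost figures reported above (values in US$).
financial_total = 23316
pre_intervention = 11272
intervention = 12044
equipment = 7917
economic_total = 17128
users_trained = 22

print(round(pre_intervention / financial_total * 100))    # ~48 % of financial cost
print(round(intervention / financial_total * 100))         # ~52 % of financial cost
print(round(equipment / financial_total * 100))            # ~34 % of financial cost
print(round(financial_total / users_trained))              # ~1,060 per trained user
print(round((financial_total - economic_total) / financial_total * 100, 1))  # ~26.5 % lower
```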

  16. A schema for knowledge representation and its implementation in a computer-aided design and manufacturing system

    Energy Technology Data Exchange (ETDEWEB)

    Tamir, D.E.

    1989-01-01

    Modularity in the design and implementation of expert systems relies upon cooperation among the expert systems and communication of knowledge between them. A prerequisite for an effective modular approach is some standard for knowledge representation to be used by the developers of the different modules. In this work the author presents a schema for knowledge representation and applies this schema in the design of a rule-based expert system. He also implements a cooperative expert system using the proposed knowledge representation method. A knowledge representation schema is a formal specification of the internal, conceptual, and external components of a knowledge base, each specified in a separate schema. The internal schema defines the structure of a knowledge base, the conceptual schema defines the concepts, and the external schema formalizes the pragmatics of a knowledge base. The schema is the basis for standardizing knowledge representation systems and it is used in the various phases of design and specification of the knowledge base. A new model of knowledge representation based on a pattern recognition interpretation of implications is developed. This model implements the concept of linguistic variables and can, therefore, emulate human reasoning with linguistic imprecision. The test case for the proposed schema of knowledge representation is a cooperative expert system composed of two expert systems. This system applies a pattern recognition interpretation of a generalized one-variable implication with linguistic variables.

  17. Dispersed flow film boiling: An investigation of the possibility to improve the models implemented in the NRC computer codes for the reflooding phase of the LOCA

    International Nuclear Information System (INIS)

    Andreani, M.; Yadigaroglu, G.; Paul Scherrer Inst.

    1992-08-01

    Dispersed Flow Film Boiling is the heat transfer regime that occurs at high void fractions in a heated channel. The way this heat transfer mode is modelled in the NRC computer codes (RELAP5 and TRAC) and the validity of the assumptions and empirical correlations used are discussed. An extensive review of the theoretical and experimental work related to heat transfer to highly dispersed mixtures reveals the basic deficiencies of these models: the investigation refers mostly to the typical conditions of low-rate bottom reflooding, since the simulation of this physical situation by the computer codes has often shown poor results. The alternative models that are available in the literature are reviewed, and their merits and limits are highlighted. The modifications that could improve the physics of the models implemented in the codes are identified

  18. Power Spectrum Computation for an Arbitrary Phase Noise Using Middleton's Convolution Series: Implementation Guideline and Experimental Illustration.

    Science.gov (United States)

    Brochard, Pierre; Sudmeyer, Thomas; Schilt, Stephane

    2017-11-01

    In this paper, we revisit the convolution series initially introduced by Middleton several decades ago to determine the power spectrum (or spectral line shape) of a periodic signal from its phase noise power spectral density. This topic is of wide interest, as it has an important impact on many scientific areas that involve lasers and oscillators. We introduce a simple guideline that enables a fairly straightforward computation of the power spectrum corresponding to an arbitrary phase noise. We show the benefit of this approach from a computational point of view, and apply it to various types of experimental signals with different phase noise levels, showing a very good agreement with the experimental spectra. This approach also provides a qualitative and intuitive understanding of the power spectrum corresponding to different regimes of phase noise.
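
    A minimal numerical sketch of the convolution-series idea is given below: the line shape is approximated by a sum of repeated self-convolutions of the phase-noise PSD weighted by 1/k!, scaled by exp(-sigma^2), with the coherent carrier term omitted. The normalization conventions, grid handling, and truncation order are assumptions and may differ from the paper's exact formulation.

```python
import numpy as np

def line_shape_from_phase_noise(f, S_phi, n_terms=8):
    """
    Approximate the power spectrum (line shape) of an oscillator from its
    one-sided phase-noise PSD S_phi(f) [rad^2/Hz] via a truncated
    convolution series:
        S(f) ~ exp(-sigma2) * sum_{k>=1} S_phi^{*k}(f) / k!
    where ^{*k} is the k-fold self-convolution and sigma2 is the integrated
    phase noise. The carrier term exp(-sigma2)*delta(f) is omitted here.
    """
    df = f[1] - f[0]                        # uniform frequency grid assumed
    sigma2 = np.sum(S_phi) * df             # integrated phase noise [rad^2]
    term = S_phi.copy()                     # k = 1 term: S_phi itself
    spectrum = np.zeros_like(S_phi)
    factorial = 1.0
    for k in range(1, n_terms + 1):
        factorial *= k
        spectrum += term / factorial
        # build the (k+1)-fold self-convolution, truncated to the input grid
        term = np.convolve(term, S_phi, mode="full")[:len(f)] * df
    return np.exp(-sigma2) * spectrum

# usage: flat (white) phase-noise floor of 1e-9 rad^2/Hz up to 1 MHz
f = np.linspace(0.0, 1e6, 4000)
S_rf = line_shape_from_phase_noise(f, np.full_like(f, 1e-9))
```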

  19. Implementation issues for mobile-wireless infrastructure and mobile health care computing devices for a hospital ward setting.

    Science.gov (United States)

    Heslop, Liza; Weeding, Stephen; Dawson, Linda; Fisher, Julie; Howard, Andrew

    2010-08-01

    mWard is a project whose purpose is to enhance existing clinical and administrative decision support and to consider mobile computers, connected via wireless network, for bringing clinical information to the point of care. The mWard project allowed a limited number of users to test and evaluate a selected range of mobile-wireless infrastructure and mobile health care computing devices at the neuroscience ward at Southern Health's Monash Medical Centre, Victoria, Australia. Before the project commenced, the ward had two PC's which were used as terminals by all ward-based staff and numerous multi-disciplinary staff who visited the ward each day. The first stage of the research, outlined in this paper, evaluates a selected range of mobile-wireless infrastructure.

  20. Development of point Kernel radiation shielding analysis computer program implementing recent nuclear data and graphic user interfaces

    International Nuclear Information System (INIS)

    Kang, S.; Lee, S.; Chung, C.

    2002-01-01

    As the number of nuclear and conventional facilities using radiation or radioisotopes rises, there is an increasing demand for safe and efficient use of radiation and radioactive work activity, along with shielding analysis. Most Korean industries and research institutes, including Korea Power Engineering Company (KOPEC), have been using foreign computer programs for radiation shielding analysis. Korean nuclear regulations have introduced new laws regarding the dose limits and radiological guides prescribed in ICRP 60. Thus, radiation facilities should be designed and operated to comply with these new regulations. In addition, the previous point kernel shielding computer code utilizes antiquated nuclear data (mass attenuation coefficients, buildup factors, etc.) which were developed in 1950∼1960, whereas these nuclear data have been updated during the past few decades. KOPEC's strategic directive is to become a self-sufficient and independent nuclear design technology company, so KOPEC decided to develop a new radiation shielding computer program that includes the latest regulatory requirements and updated nuclear data. This new code was designed by KOPEC in developmental cooperation with the Department of Nuclear Engineering at Hanyang University. VisualShield is designed with a graphical user interface to allow even users unfamiliar with radiation shielding theory to proficiently prepare input data sets and analyze output results
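
    For readers unfamiliar with the method, the core of any point-kernel code is the attenuated inverse-square estimate sketched below. This is a generic illustration with assumed parameter values and an assumed flux-to-dose constant, not VisualShield's actual implementation or nuclear data.

```python
import math

def point_kernel_dose_rate(S, E_mev, mu, t_cm, r_cm, buildup=1.0, k=5.8e-7):
    """
    Generic point-kernel estimate of the photon dose rate behind a slab shield:
        D = k * S * E * B * exp(-mu * t) / (4 * pi * r^2)
    S        source strength [photons/s]
    E_mev    photon energy [MeV]
    mu       linear attenuation coefficient of the shield [1/cm]
    t_cm     shield thickness [cm]
    r_cm     source-to-dose-point distance [cm]
    buildup  buildup factor B (1.0 = uncollided flux only)
    k        illustrative flux-to-dose conversion constant (assumption)
    """
    flux = S * buildup * math.exp(-mu * t_cm) / (4.0 * math.pi * r_cm**2)
    return k * E_mev * flux

# usage (illustrative values): a 1e9 photons/s, 1-MeV source behind 10 cm of
# concrete (mu ~ 0.15 /cm), scored 1 m away with a buildup factor of 2
print(point_kernel_dose_rate(S=1e9, E_mev=1.0, mu=0.15, t_cm=10.0,
                             r_cm=100.0, buildup=2.0))
```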

  1. Implementation of a cell-wise block-Gauss-Seidel iterative method for SN transport on a hybrid parallel computer architecture

    International Nuclear Information System (INIS)

    Rosa, Massimiliano; Warsa, James S.; Perks, Michael

    2011-01-01

    We have implemented a cell-wise, block-Gauss-Seidel (bGS) iterative algorithm for the solution of the Sn transport equations on the Roadrunner hybrid, parallel computer architecture. A compute node of this massively parallel machine comprises AMD Opteron cores that are linked to a Cell Broadband Engine™ (Cell/B.E.). LAPACK routines have been ported to the Cell/B.E. in order to make use of its parallel Synergistic Processing Elements (SPEs). The bGS algorithm is based on the LU factorization and solution of a linear system that couples the fluxes for all Sn angles and energy groups on a mesh cell. For every cell of a mesh that has been parallel decomposed on the higher-level Opteron processors, a linear system is transferred to the Cell/B.E. and the parallel LAPACK routines are used to compute a solution, which is then transferred back to the Opteron, where the rest of the computations for the Sn transport problem take place. Compared to standard parallel machines, a hundred-fold speedup of the bGS was observed on the hybrid Roadrunner architecture. Numerical experiments with strong and weak parallel scaling demonstrate the bGS method is viable and compares favorably to full parallel sweeps (FPS) on two-dimensional, unstructured meshes when it is applied to optically thick, multi-material problems. As expected, however, it is not as efficient as FPS in optically thin problems. (author)
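
    A minimal serial sketch of the cell-wise block-Gauss-Seidel idea described above is shown below, using SciPy's LU routines in place of the Cell/B.E. LAPACK offload. The block and coupling data structures are illustrative assumptions; the Roadrunner-specific data transfer and the transport discretization itself are not reproduced.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def cellwise_block_gauss_seidel(A_blocks, couplings, b, n_outer=50, tol=1e-8):
    """
    Solve a block system  A_c x_c + sum_j C_{c,j} x_j = b_c  cell by cell.
    A_blocks  : list of dense (m, m) per-cell blocks (angle/group coupling)
    couplings : list of dicts {neighbour_index: (m, m) coupling matrix}
    b         : list of (m,) right-hand sides
    """
    n_cells = len(A_blocks)
    x = [np.zeros(len(rhs)) for rhs in b]
    factors = [lu_factor(A) for A in A_blocks]       # factor each cell block once
    for _ in range(n_outer):
        delta = 0.0
        for c in range(n_cells):                     # Gauss-Seidel sweep over cells
            rhs = b[c].copy()
            for j, C in couplings[c].items():
                rhs -= C @ x[j]                      # most recent neighbour values
            x_new = lu_solve(factors[c], rhs)
            delta = max(delta, np.linalg.norm(x_new - x[c]))
            x[c] = x_new
        if delta < tol:
            break
    return x

# usage: two cells, 3 unknowns each, weak off-diagonal coupling
A = [np.eye(3) * 4.0, np.eye(3) * 4.0]
C = [{1: -np.eye(3)}, {0: -np.eye(3)}]
rhs = [np.ones(3), 2.0 * np.ones(3)]
print(cellwise_block_gauss_seidel(A, C, rhs))
```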

  2. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    Science.gov (United States)

    2015-12-24

    E(t) = T P(t) (81), where T is the clock period, and for the inexact circuit the energy is Ẽ(t) = T P̃(t) (82). In digital circuits, the total power P ... Nomenclature fragments: Y, Cb, or Cr data; X, input vector to a digital logic circuit; Y, output of a digital logic circuit; Y, luminance; Z, quantization factor matrix; z_{i,j} ... accuracy in digital logic circuits. The contribution of this research will be to advance the state of the art of inexact computing by optimizing the JPEG

  3. Bio-inspired feedback-circuit implementation of discrete, free energy optimizing, winner-take-all computations.

    Science.gov (United States)

    Genewein, Tim; Braun, Daniel A

    2016-06-01

    Bayesian inference and bounded rational decision-making require the accumulation of evidence or utility, respectively, to transform a prior belief or strategy into a posterior probability distribution over hypotheses or actions. Crucially, this process cannot be simply realized by independent integrators, since the different hypotheses and actions also compete with each other. In continuous time, this competitive integration process can be described by a special case of the replicator equation. Here we investigate simple analog electric circuits that implement the underlying differential equation under the constraint that we only permit a limited set of building blocks that we regard as biologically interpretable, such as capacitors, resistors, voltage-dependent conductances and voltage- or current-controlled current and voltage sources. The appeal of these circuits is that they intrinsically perform normalization without requiring an explicit divisive normalization. However, even in idealized simulations, we find that these circuits are very sensitive to internal noise as they accumulate error over time. We discuss in how far neural circuits could implement these operations that might provide a generic competitive principle underlying both perception and action.
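
    The competitive integration these circuits implement can be sketched numerically: below is a minimal Euler integration of the replicator equation (illustrative step size and fitness values; no attempt is made to model the analog components or their internal noise).

```python
import numpy as np

def replicator_step(x, f, dt=1e-3):
    """
    One explicit Euler step of the replicator equation
        dx_i/dt = x_i * (f_i - sum_j x_j f_j),
    which performs competitive, self-normalizing evidence accumulation.
    x : probability vector over hypotheses/actions (sums to 1)
    f : fitness / evidence rates for each hypothesis
    """
    mean_f = float(np.dot(x, f))
    x = x + dt * x * (f - mean_f)
    return x / x.sum()              # re-normalize against numerical drift

# usage: three competing hypotheses, the second receives the most evidence
x = np.array([1/3, 1/3, 1/3])
f = np.array([0.2, 1.0, 0.5])
for _ in range(5000):
    x = replicator_step(x, f)
print(x)   # probability mass concentrates on the hypothesis with the highest rate
```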

  4. Implementation and Evaluation of the Streamflow Statistics (StreamStats) Web Application for Computing Basin Characteristics and Flood Peaks in Illinois

    Science.gov (United States)

    Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.

    2010-01-01

    Illinois StreamStats (ILSS) is a Web-based application for computing selected basin characteristics and flood-peak quantiles based on the most recently (2010) published (Soong and others, 2004) regional flood-frequency equations at any rural stream location in Illinois. Limited streamflow statistics including general statistics, flow durations, and base flows also are available for U.S. Geological Survey (USGS) streamflow-gaging stations. ILSS can be accessed on the Web at http://streamstats.usgs.gov/ by selecting the State Applications hyperlink and choosing Illinois from the pull-down menu. ILSS was implemented for Illinois by obtaining and projecting ancillary geographic information system (GIS) coverages; populating the StreamStats database with streamflow-gaging station data; hydroprocessing the 30-meter digital elevation model (DEM) for Illinois to conform to streams represented in the National Hydrographic Dataset 1:100,000 stream coverage; and customizing the Web-based Extensible Markup Language (XML) programs for computing basin characteristics for Illinois. The basin characteristics computed by ILSS then were compared to the basin characteristics used in the published study, and adjustments were applied to the XML algorithms for slope and basin length. Testing of ILSS was accomplished by comparing flood quantiles computed by ILSS at an approximately random sample of 170 streamflow-gaging stations with the published flood quantile estimates. Differences between the log-transformed flood quantiles were not statistically significant at the 95-percent confidence level for the State as a whole, nor by the regions determined by each equation, except for region 1, in the northwest corner of the State. In region 1, the average difference in flood quantile estimates ranged from 3.76 percent for the 2-year flood quantile to 4.27 percent for the 500-year flood quantile. The total number of stations in region 1 was small (21) and the mean

  5. Teacher Conceptions and Approaches Associated with an Immersive Instructional Implementation of Computer-Based Models and Assessment in a Secondary Chemistry Classroom

    Science.gov (United States)

    Waight, Noemi; Liu, Xiufeng; Gregorius, Roberto Ma.; Smith, Erica; Park, Mihwa

    2014-02-01

    This paper reports on a case study of an immersive and integrated multi-instructional approach (namely computer-based model introduction and connection with content; facilitation of individual student exploration guided by exploratory worksheet; use of associated differentiated labs and use of model-based assessments) in the implementation of coupled computer-based models and assessment in a high-school chemistry classroom. Data collection included in-depth teacher interviews, classroom observations, student interviews and researcher notes. Teacher conceptions highlighted the role of models as tools; the benefits of abstract portrayal via visualizations; appropriate enactment of model implementation; concerns with student learning and issues with time. The case study revealed numerous challenges reconciling macro, submicro and symbolic phenomena with the NetLogo model. Nonetheless, the effort exhibited by the teacher provided a platform to support the evolution of practice over time. Students' reactions reflected a continuum of confusion and benefits which were directly related to their background knowledge and experiences with instructional modes. The findings have implications for the role of teacher knowledge of models, the modeling process and pedagogical content knowledge; the continuum of student knowledge as novice users and the role of visual literacy in model decoding, comprehension and translation.

  6. FORMALIZATION OF THE ACCOUNTING VALUABLE MEMES METHOD FOR THE PORTFOLIO OF ORGANIZATION DEVELOPMENT AND INFORMATION COMPUTER TOOLS FOR ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Serhii D. Bushuiev

    2017-12-01

    Full Text Available The current state of project management demonstrates a steady trend toward an increasing role for flexible "soft" management practices. A method for preparing decisions on the formation of a value-oriented portfolio, based on a comparison of the levels of internal organizational values, is proposed. The method formalizes the methodological foundations of value-oriented portfolio management in the development of organizations in the form of approaches, basic terms and technological methods using ICT, which makes it possible to use them as an integral knowledge system for creating an automated system for managing the portfolios of organizations. The result of the study is a deepening of the theoretical provisions for managing the development of organizations through the implementation of a value-oriented portfolio of projects, which allowed the method of accounting for value memes in the development portfolios of organizations to be formalized, and its logic, essence, objective basis and rules to be disclosed.

  7. Implementing WebGL and HTML5 in Macromolecular Visualization and Modern Computer-Aided Drug Design.

    Science.gov (United States)

    Yuan, Shuguang; Chan, H C Stephen; Hu, Zhenquan

    2017-06-01

    Web browsers have long been recognized as potential platforms for remote macromolecule visualization. However, the difficulty in transferring large-scale data to clients and the lack of native support for hardware-accelerated applications in the local browser undermine the feasibility of such utilities. With the introduction of WebGL and HTML5 technologies in recent years, it is now possible to exploit the power of a graphics-processing unit (GPU) from a browser without any third-party plugin. Many new tools have been developed for biological molecule visualization and modern drug discovery. In contrast to traditional offline tools, WebGL- and HTML5-based tools feature real-time computing, interactive data analysis, and cross-platform analysis, facilitating biological research in a more efficient and user-friendly way. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Computational elaboration and implementation of a checklist for the diagnosis of initial stages of a radioactive wastes management in a research center

    International Nuclear Information System (INIS)

    Bahia, Jean V.

    2007-01-01

    A specific methodology must be applied for the elaboration and implementation of a Radioactive Waste Management Program (RWMP). After the implementation of the RWMP, periodic reevaluations must be foreseen, both for updating and for assuring compliance with applicable legal requirements. One of the main steps of the elaboration or reevaluation of the RWMP of a given facility is the diagnosis of the issues related to waste generation at the points where the wastes are generated (diagnosis at source). Some of the information gathered during the diagnosis includes: the identification of the generation points and the respective waste-generating operations; the characteristics and quantification of the radioactive wastes; the waste collection and local storage status; the techniques used for preventing and minimizing the generation of radioactive wastes, including practices of waste segregation; and the procedures for control of the radioactive wastes generated at each generation point. The diagnosis also includes the identification of the staff at each radioactive waste generation point who are directly or indirectly in charge of the waste management activities. The objective of this paper is to describe the adopted methodology and the computational implementation of a checklist to verify the current situation of radioactive waste management at the source. (author)

  9. A ray casting method for the computation of the area of feasible solutions for multicomponent systems: Theory, applications and FACPACK-implementation.

    Science.gov (United States)

    Sawall, Mathias; Neymeyr, Klaus

    2017-04-01

    Multivariate curve resolution methods suffer from the non-uniqueness of the solutions. The set of possible nonnegative solutions can be represented by the so-called Area of Feasible Solutions (AFS). The AFS for an s-component system is a bounded (s-1)-dimensional set. The numerical computation and the geometric construction of the AFS are well understood for two- and three-component systems but get much more complicated for systems with four or even more components. This work introduces a new and robust ray casting method for the computation of the AFS for general s-component systems. The algorithm shoots rays from the origin and records the intersections of these rays with the AFS. The ray casting method is computationally fast, stable with respect to noise and is able to detect the various possible shapes of the AFS sets. The easily implementable algorithm is tested for various three- and four-component data sets. Copyright © 2016 Elsevier B.V. All rights reserved.
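
    A generic sketch of the ray-casting step is shown below: rays are shot from an interior point and bisection along each ray locates the boundary of a feasibility region. The `is_feasible` predicate is a placeholder for the problem-specific AFS membership test, and the assumptions that the origin lies inside the set and that `r_max` lies outside it are illustrative; this is not the FACPACK implementation.

```python
import numpy as np

def ray_cast_boundary(is_feasible, dim, n_rays=500, r_max=10.0, n_bisect=40):
    """
    Trace the boundary of a bounded feasible set by shooting rays from the
    origin and bisecting along each ray for the last feasible point.
    is_feasible : callable taking a point in R^dim, returning True/False
                  (placeholder for the problem-specific feasibility test)
    """
    rng = np.random.default_rng(0)
    hits = []
    for _ in range(n_rays):
        d = rng.normal(size=dim)
        d /= np.linalg.norm(d)              # random unit direction
        lo, hi = 0.0, r_max                 # origin assumed feasible, r_max not
        for _ in range(n_bisect):
            mid = 0.5 * (lo + hi)
            if is_feasible(mid * d):
                lo = mid
            else:
                hi = mid
        hits.append(lo * d)                 # boundary point along this ray
    return np.array(hits)

# usage: recover the boundary of a unit disc as a toy feasibility region
points = ray_cast_boundary(lambda p: np.linalg.norm(p) <= 1.0, dim=2)
```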

  10. Rayleigh’s quotient–based damage detection algorithm: Theoretical concepts, computational techniques, and field implementation strategies

    DEFF Research Database (Denmark)

    NJOMO WANDJI, Wilfried

    2017-01-01

    This article proposes a Rayleigh's quotient–based damage detection algorithm. It aims at efficiently revealing nascent structural changes on a given structure, with the capability to differentiate between an actual damage and a change in operational conditions. The first three damage detection levels … optimization methods. Field implementation strategies are also considered for the purpose of online damage monitoring. In order to prove the efficiency of this strategy, one experimental and three numerical case studies were conducted. The proposed algorithm successfully detected the damage in all simulated cases and estimated the damage severity with acceptable accuracy. The conclusion is that the proposed algorithm was able to efficiently detect damage appearance in a range of structures for various damage levels and locations, and under different operational conditions.
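
    The quantity at the heart of such an approach is the Rayleigh quotient of a (measured or estimated) mode shape with respect to the stiffness and mass matrices. The sketch below shows only this basic indicator with made-up matrices; the article's full detection, localization, and severity-estimation logic is not reproduced.

```python
import numpy as np

def rayleigh_quotient(K, M, phi):
    """R(phi) = (phi^T K phi) / (phi^T M phi); equals omega^2 for a true mode."""
    phi = np.asarray(phi, dtype=float)
    return float(phi @ K @ phi) / float(phi @ M @ phi)

# usage sketch: a drop in the quotient for a measured mode shape, relative to
# a healthy baseline, can flag a stiffness loss (i.e. possible damage)
K_healthy = np.array([[2.0, -1.0], [-1.0, 2.0]])   # illustrative stiffness matrix
M = np.eye(2)                                      # illustrative mass matrix
phi_measured = np.array([1.0, 1.6])
baseline = rayleigh_quotient(K_healthy, M, phi_measured)
```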

  11. Clinical implementation of an emergency department coronary computed tomographic angiography protocol for triage of patients with suspected acute coronary syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Ghoshhajra, Brian B.; Staziaki, Pedro V.; Vadvala, Harshna; Kim, Phillip; Meyersohn, Nandini M.; Janjua, Sumbal A.; Hoffmann, Udo [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Takx, Richard A.P. [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Neilan, Tomas G.; Francis, Sanjeev [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Massachusetts General Hospital and Harvard Medical School, Division of Cardiology, Boston, MA (United States); Bittner, Daniel [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nuernberg (FAU), Department of Medicine 2 - Cardiology, Erlangen (Germany); Mayrhofer, Thomas [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Stralsund University of Applied Sciences, School of Business Studies, Stralsund (Germany); Greenwald, Jeffrey L. [Massachusetts General Hospital and Harvard Medical School, Department of Medicine, Boston, MA (United States); Truong, Quyhn A. [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Weill Cornell College of Medicine, Department of Radiology, New York, NY (United States); Abbara, Suhny [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); UT Southwestern Medical Center, Department Cardiothoracic Imaging, Dallas, TX (United States); Brown, David F.M.; Nagurney, John T. [Massachusetts General Hospital and Harvard Medical School, Department of Emergency Medicine, Boston, MA (United States); Januzzi, James L. [Massachusetts General Hospital and Harvard Medical School, Division of Cardiology, Boston, MA (United States); Collaboration: MGH Emergency Cardiac CTA Program Contributors

    2017-07-15

    To evaluate the efficiency and safety of emergency department (ED) coronary computed tomography angiography (CTA) during a 3-year clinical experience. Single-center registry of coronary CTA in consecutive ED patients with suspicion of acute coronary syndrome (ACS). The primary outcome was efficiency of coronary CTA defined as the length of hospitalization. Secondary endpoints of safety were defined as the rate of downstream testing, normalcy rates of invasive coronary angiography (ICA), absence of missed ACS, and major adverse cardiac events (MACE) during follow-up, and index radiation exposure. One thousand twenty two consecutive patients were referred for clinical coronary CTA with suspicion of ACS. Overall, median time to discharge home was 10.5 (5.7-24.1) hours. Patient disposition was 42.7 % direct discharge from the ED, 43.2 % discharge from emergency unit, and 14.1 % hospital admission. ACS rate during index hospitalization was 9.1 %. One hundred ninety two patients underwent additional diagnostic imaging and 77 underwent ICA. The positive predictive value of CTA compared to ICA was 78.9 % (95 %-CI 68.1-87.5 %). Median CT radiation exposure was 4.0 (2.5-5.8) mSv. No ACS was missed; MACE at follow-up after negative CTA was 0.2 %. Coronary CTA in an experienced tertiary care setting allows for efficient and safe management of patients with suspicion for ACS. (orig.)

  12. eCRAM computer algorithm for implementation of the charge ratio analysis method to deconvolute electrospray ionization mass spectra

    Science.gov (United States)

    Maleknia, Simin D.; Green, David C.

    2010-02-01

    A computer program (eCRAM) has been developed for automated processing of electrospray mass spectra based on the charge ratio analysis method. The eCRAM algorithm deconvolutes electrospray mass spectra solely from the ratio of mass-to-charge (m/z) values of multiply charged ions. The program first determines the ion charge by correlating the ratio of m/z values for any two (i.e., consecutive or non-consecutive) multiply charged ions to the unique ratios of two integers. The mass, and subsequently the identity of the charge carrying species, is further determined from m/z values and charge states of any two ions. For the interpretation of high-resolution electrospray mass spectra, eCRAM correlates isotopic peaks that share the same isotopic compositions. This process is also performed through charge ratio analysis after correcting the multiply charged ions to their lowest common ion charge. The application of eCRAM algorithm has been demonstrated with theoretical mass-to-charge ratios for proteins lysozyme and carbonic anhydrase, as well as experimental data for both low and high-resolution FT-ICR electrospray mass spectra of a range of proteins (ubiquitin, cytochrome c, transthyretin, lysozyme and calmodulin). This also included the simulated data for mixtures by combining experimental data for ubiquitin, cytochrome c and transthyretin.
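
    A minimal numerical sketch of the charge-ratio idea is given below: the ratio of proton-corrected m/z values of two ions of the same species matches a ratio of two small integers (the inverse ratio of their charges), from which both charges and the neutral mass follow. The proton-adduct assumption, search bounds, and tolerance are illustrative and do not reproduce the published eCRAM code.

```python
PROTON = 1.00728  # mass of the charge-carrying proton [Da]

def charges_from_two_ions(mz_a, mz_b, z_max=60, tol=5e-4):
    """
    Given m/z values of two multiply protonated ions of the same species,
    find the integer charges whose ratio matches the ratio of the
    proton-corrected m/z values, then estimate the neutral mass.
    Returns (charge of higher-m/z ion, charge of lower-m/z ion, mass) or None.
    """
    mz_hi, mz_lo = max(mz_a, mz_b), min(mz_a, mz_b)
    ratio = (mz_lo - PROTON) / (mz_hi - PROTON)      # equals z_small / z_large
    for z_small in range(1, z_max):                  # charge of the higher-m/z ion
        for z_large in range(z_small + 1, z_max + 1):
            if abs(ratio - z_small / z_large) < tol:
                mass_1 = z_small * (mz_hi - PROTON)
                mass_2 = z_large * (mz_lo - PROTON)
                return z_small, z_large, 0.5 * (mass_1 + mass_2)
    return None

# usage: two ubiquitin ions (~8565 Da) at charge states 10+ and 11+
print(charges_from_two_ions(857.47, 779.61))   # -> (10, 11, ~8564.6)
```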

  13. Implementation of a web-based, interactive polytrauma tutorial in computed tomography for radiology residents: How we do it

    International Nuclear Information System (INIS)

    Schlorhaufer, C.; Behrends, M.; Diekhaus, G.; Keberle, M.; Weidemann, J.

    2012-01-01

    Purpose: Due to the time factor in polytraumatized patients, all relevant pathologies in a polytrauma computed tomography (CT) scan have to be read and communicated very quickly. During radiology residency, the acquisition of effective reading schemes based on typical polytrauma pathologies is very important. Thus, an online tutorial for the structured diagnosis of polytrauma CT was developed. Materials and methods: Based on current multimedia theories such as the cognitive load theory, a didactic concept was developed. As a web environment, the learning management system ILIAS was chosen. CT data sets were converted into online scrollable QuickTime movies. Audiovisual tutorial movies with guided image analyses by a consultant radiologist were recorded. Results: The polytrauma tutorial consists of chapterized text content and embedded interactive scrollable CT data sets. Selected trauma pathologies are demonstrated to the user by guiding tutor movies. Basic reading schemes are communicated with the help of detailed commented movies of normal data sets. Common and important pathologies could be explored in a self-directed manner. Conclusions: Ambitious didactic concepts can be supported by a web-based application on the basis of cognitive load theory and currently available software tools.

  14. Implementation of a cell-wise Block-Gauss-Seidel iterative method for SN transport on a hybrid parallel computer architecture

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Massimiliano [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Perks, Michael [Los Alamos National Laboratory

    2010-12-14

    We have implemented a cell-wise, block-Gauss-Seidel (bGS) iterative algorithm, for the solution of the Sn transport equations on the Roadrunner hybrid, parallel computer architecture. A compute node of this massively parallel machine comprises AMD Opteron cores that are linked to a Cell Broadband Engine™ (Cell/B.E.). LAPACK routines have been ported to the Cell/B.E. in order to make use of its parallel Synergistic Processing Elements (SPEs). The bGS algorithm is based on the LU factorization and solution of a linear system that couples the fluxes for all Sn angles and energy groups on a mesh cell. For every cell of a mesh that has been parallel decomposed on the higher-level Opteron processors, a linear system is transferred to the Cell/B.E. and the parallel LAPACK routines are used to compute a solution, which is then transferred back to the Opteron, where the rest of the computations for the Sn transport problem take place. Compared to standard parallel machines, a hundred-fold speedup of the bGS was observed on the hybrid Roadrunner architecture. Numerical experiments with strong and weak parallel scaling demonstrate the bGS method is viable and compares favorably to full parallel sweeps (FPS) on two-dimensional, unstructured meshes when it is applied to optically thick, multi-material problems. As expected, however, it is not as efficient as FPS in optically thin problems.

  15. Fish and chips: implementation of a neural network model into computer chips to maximize swimming efficiency in autonomous underwater vehicles.

    Science.gov (United States)

    Blake, R W; Ng, H; Chan, K H S; Li, J

    2008-09-01

    Recent developments in the design and propulsion of biomimetic autonomous underwater vehicles (AUVs) have focused on boxfish as models (e.g. Deng and Avadhanula 2005 Biomimetic micro underwater vehicle with oscillating fin propulsion: system design and force measurement Proc. 2005 IEEE Int. Conf. Robot. Auto. (Barcelona, Spain) pp 3312-7). Whilst such vehicles have many potential advantages in operating in complex environments (e.g. high manoeuvrability and stability), limited battery life and payload capacity are likely functional disadvantages. Boxfish employ undulatory median and paired fins during routine swimming which are characterized by high hydromechanical Froude efficiencies (approximately 0.9) at low forward speeds. Current boxfish-inspired vehicles are propelled by a low aspect ratio, 'plate-like' caudal fin (ostraciiform tail) which can be shown to operate at a relatively low maximum Froude efficiency (approximately 0.5) and is mainly employed as a rudder for steering and in rapid swimming bouts (e.g. escape responses). Given this and the fact that bioinspired engineering designs are not obligated to wholly duplicate a biological model, computer chips were developed using a multilayer perception neural network model of undulatory fin propulsion in the knifefish Xenomystus nigri that would potentially allow an AUV to achieve high optimum values of propulsive efficiency at any given forward velocity, giving a minimum energy drain on the battery. We envisage that externally monitored information on flow velocity (sensory system) would be conveyed to the chips residing in the vehicle's control unit, which in turn would signal the locomotor unit to adopt kinematics (e.g. fin frequency, amplitude) associated with optimal propulsion efficiency. Power savings could protract vehicle operational life and/or provide more power to other functions (e.g. communications).
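
    The control concept — map a sensed flow velocity to the fin kinematics that maximize propulsive efficiency — can be illustrated with a small regression sketch. The training data, the constant-Strouhal optimum and the network size are assumptions for illustration only, not the authors' model or chip implementation.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Hypothetical training set: for each flow speed U (m/s), the fin frequency f (Hz)
        # and amplitude A (m) taken to maximize Froude efficiency.  For illustration we
        # assume the optimum keeps the Strouhal number St = f*A/U near 0.3.
        U = rng.uniform(0.1, 1.0, size=(500, 1))
        A_opt = 0.05 + 0.005 * rng.standard_normal(U.shape)      # ~5 cm amplitude
        f_opt = 0.3 * U / A_opt
        y = np.hstack([f_opt, A_opt])

        net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        net.fit(U, y)

        # At run time, the sensed flow speed is fed to the network and the predicted
        # kinematics are passed to the locomotor unit.
        print(net.predict([[0.5]]))                               # approx [3.0 Hz, 0.05 m]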

  16. Diabetes patients' experiences with the implementation of insulin therapy and their perceptions of computer-assisted self-management systems for insulin therapy.

    Science.gov (United States)

    Simon, Airin Cr; Gude, Wouter T; Holleman, Frits; Hoekstra, Joost Bl; Peek, Niels

    2014-10-23

    Computer-assisted decision support is an emerging modality to assist patients with type 2 diabetes mellitus (T2DM) in insulin self-titration (ie, self-adjusting insulin dose according to daily blood glucose levels). Computer-assisted insulin self-titration systems mainly focus on helping patients overcome barriers related to the cognitive components of insulin titration. Yet other (eg, psychological or physical) barriers could still impede effective use of such systems. Our primary aim was to identify experiences with and barriers to self-monitoring of blood glucose, insulin injection, and insulin titration among patients with T2DM. Our research team developed a computer-assisted insulin self-titration system, called PANDIT. The secondary aim of this study was to evaluate patients' perceptions of computer-assisted insulin self-titration. We included patients who used PANDIT in a 4-week pilot study as well as patients who had never used such a system. In-depth, semi-structured interviews were conducted individually with patients on insulin therapy who were randomly recruited from a university hospital and surrounding general practices in the Netherlands. The interviews were transcribed verbatim and analyzed qualitatively. To classify the textual remarks, we created a codebook during the analysis, in a bottom-up and iterative fashion. To support examination of the final coded data, we used three theories from the field of health psychology and the integrated model of user satisfaction and technology acceptance by Wixom and Todd. When starting insulin therapy, some patients feared a lifelong commitment to insulin therapy and disease progression. Also, many barriers arose when implementing insulin therapy (eg, some patients were embarrassed to inject insulin in public). Furthermore, patients had difficulties increasing the insulin dose because they fear hypoglycemia, they associate higher insulin doses with disease progression, and some were ignorant of treatment

  17. Experimental DNA computing

    NARCIS (Netherlands)

    Henkel, Christiaan

    2005-01-01

    Because of their information storing and processing capabilities, nucleic acids are interesting building blocks for molecular scale computers. Potential applications of such DNA computers range from massively parallel computation to computational gene therapy. In this thesis, several implementations

  18. The implementation of the CDC version of RELAP5/MOD1/019 on an IBM compatible computer system (AMDAHL 470/V8)

    International Nuclear Information System (INIS)

    Kolar, W.; Brewka, W.

    1984-01-01

    RELAP5/MOD1 is an advanced one-dimensional best-estimate system code used for safety analysis studies of nuclear pressurized water reactor systems and related integral and separate-effect test facilities. The program predicts the system response for large-break and small-break LOCAs and special transients. RELAP5/MOD1 is largely written in Fortran; only a small part of the program is coded in CDC assembler. RELAP5/MOD1 was developed on the CDC CYBER 176 at INEL*. The code development team made use of CDC system programs such as the CDC UPDATE facility and incorporated special-purpose software packages in the program. The report describes the problems encountered when implementing the CDC version of RELAP5/MOD1 on an IBM-compatible computer system (AMDAHL 470/V8).

  19. Implementation of the thermal-hydraulic transient analysis code RELAP4/MOD5 and MOD6 on the FACOM 230/75 computer system

    International Nuclear Information System (INIS)

    Kohsaka, Atsuo; Ishigai, Takahiro; Kumakura, Toshimasa; Naraoka, Ken-itsu

    1979-03-01

    Development efforts have continued on the extensively used LOCA analysis code RELAP4, from the prototype version MOD2 to the latest version MOD6, which is capable of calculations running from the blowdown phase through the reflood phase of a PWR LOCA. Many improvements and refinements of the models have enlarged the range of phenomena that can be treated. Correspondingly, the size of the program has increased from version to version, and special programming techniques have continually been introduced to keep the program within the limited capacity of core memory. For example, the Dynamic Storage Allocation of MOD5 and the PRELOAD preprocessor newly incorporated in MOD6 were designed for CDC computers with relatively small core sizes. These programming techniques are described in detail, together with experience from implementing the codes on the FACOM 230/75 and some results of confirmatory calculations. (author)

  20. Reflections on the Implementation of Low-Dose Computed Tomography Screening in Individuals at High Risk of Lung Cancer in Spain.

    Science.gov (United States)

    Garrido, Pilar; Sánchez, Marcelo; Belda Sanchis, José; Moreno Mata, Nicolás; Artal, Ángel; Gayete, Ángel; Matilla González, José María; Galbis Caravajal, José Marcelo; Isla, Dolores; Paz-Ares, Luis; Seijo, Luis M

    2017-10-01

    Lung cancer (LC) is a major public health issue. Despite recent advances in treatment, primary prevention and early diagnosis are key to reducing the incidence and mortality of this disease. A recent clinical trial demonstrated the efficacy of selective screening by low-dose computed tomography (LDCT) in reducing the risk of both lung cancer mortality and all-cause mortality in high-risk individuals. This article contains the reflections of an expert group on the use of LDCT for early diagnosis of LC in high-risk individuals, and on how to evaluate its implementation in Spain. The expert group was set up by the Spanish Society of Pulmonology and Thoracic Surgery (SEPAR), the Spanish Society of Thoracic Surgery (SECT), the Spanish Society of Radiology (SERAM) and the Spanish Society of Medical Oncology (SEOM). Copyright © 2017 SEPAR. Published by Elsevier España, S.L.U. All rights reserved.

  1. GPU-based implementation of an accelerated SR-NLUT based on N-point one-dimensional sub-principal fringe patterns in computer-generated holograms

    Directory of Open Access Journals (Sweden)

    Hee-Min Choi

    2015-06-01

    Full Text Available An accelerated spatial-redundancy-based novel look-up table (A-SR-NLUT) method based on a new concept of the N-point one-dimensional sub-principal fringe pattern (N-point 1-D sub-PFP) is implemented on a graphics processing unit (GPU) for fast calculation of computer-generated holograms (CGHs) of three-dimensional (3-D) objects. Since the proposed method can generate the N-point two-dimensional (2-D) PFPs for CGH calculation from the pre-stored N-point 1-D PFPs, the loading time of the N-point PFPs on the GPU can be dramatically reduced, which results in a large increase in the computational speed of the proposed method. Experimental results confirm that the average calculation time for one object point has been reduced by 49.6% and 55.4% compared to those of the conventional 2-D SR-NLUT methods for the 2-point and 3-point SR maps, respectively.

  2. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot en...... cybernetics and Maturana and Varela’s theory of autopoiesis, which are both erroneously taken to support info-computationalism....

  3. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy – Part 2: Computational implementation and first results

    Directory of Open Access Journals (Sweden)

    L. Peruzza

    2017-11-01

    Full Text Available This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude–scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10% probability of exceedance in 5 and 30 years, Poisson and time-dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for Etna's densely inhabited eastern flank, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources which control the hazard at long return periods. However, the focus is on the impact of M < 6 local volcano-tectonic earthquakes, which dominate the hazard at the short- to mid-term exposure times considered.
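
    The Poisson convention used to express the results — a probability of exceedance over a fixed exposure time implies an annual exceedance rate, and hence a point on the hazard curve — can be sketched as follows. The ground-motion and rate values are illustrative only; this is not the CRISIS/OpenQuake computation.

        import numpy as np

        def annual_rate_for_poe(poe, t_years):
            """Annual exceedance rate implied by a Poisson probability of exceedance
            `poe` over an exposure time `t_years`:  poe = 1 - exp(-rate * t)."""
            return -np.log(1.0 - poe) / t_years

        # 10% probability of exceedance in 5 and 30 years, as in the paper's results
        for t in (5, 30):
            rate = annual_rate_for_poe(0.10, t)
            print(f"T = {t:2d} yr -> annual rate = {rate:.3e} (return period ~ {1/rate:.0f} yr)")

        # Reading a design ground motion off a hazard curve (hypothetical PGA/rate pairs)
        pga   = np.array([0.05, 0.10, 0.20, 0.40, 0.80])       # g
        rates = np.array([3e-1, 1e-1, 2e-2, 3e-3, 2e-4])       # exceedances per year
        target = annual_rate_for_poe(0.10, 30)
        pga_design = np.interp(np.log(target), np.log(rates[::-1]), pga[::-1])
        print(f"PGA with 10% PoE in 30 yr ~ {pga_design:.2f} g (illustrative)")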

  4. Implementation of Audio-Computer Assisted Self-Interview (ACASI) among adolescent girls in humanitarian settings: feasibility, acceptability, and lessons learned.

    Science.gov (United States)

    Falb, Kathryn; Tanner, Sophie; Asghar, Khudejha; Souidi, Samir; Mierzwa, Stan; Assazenew, Asham; Bakomere, Theresita; Mallinga, Pamela; Robinette, Katie; Tibebu, Woinishet; Stark, Lindsay

    2016-01-01

    Audio-Computer Assisted Self-Interview (ACASI) is a method of data collection in which participants listen to pre-recorded questions through headphones and respond by selecting their answers on a touch screen or keypad; it is seen as advantageous for gathering data on sensitive topics such as experiences of violence. This paper seeks to explore the feasibility and acceptability of using ACASI with adolescent girls and to document the implementation of such an approach in two humanitarian settings: conflict-affected communities in eastern Democratic Republic of Congo (DRC) and refugee camps along the Sudan-Ethiopia border. This paper evaluates the feasibility and acceptability of implementing ACASI based on the experiences of using this tool in baseline data collections for the COMPASS (Creating Opportunities through Mentorship, Parental involvement, and Safe Spaces) impact evaluations in DRC (N = 868) and Ethiopia (N = 919) among adolescent girls. Descriptive statistics and logistic regression models were generated to examine associations between understanding of the survey and selected demographics in both countries. Overall, nearly 90% of girls in the DRC felt that the questions were easy to understand, compared to approximately 75% in Ethiopia. Level of education, but not age, was associated with understanding of the survey in both countries. The financial and time investment needed to ready ACASI was substantial in order to properly contextualize the approach to these specific humanitarian settings, including piloting of images, language assessments, and checking both written translations and corresponding verbal recordings. Despite challenges, we conclude that ACASI proved feasible and acceptable to participants and to data collection teams in two diverse humanitarian settings.

  5. The Effect of Mobile Tablet Computer (iPad) Implementation on Graduate Medical Education at a Multi-specialty Residency Institution

    Science.gov (United States)

    Dupaix, John; Chun, Maria BJ; Belcher, Gary F; Cheng, Yongjun; Atkinson, Robert

    2016-01-01

    Use of mobile tablet computers (MTCs) in residency education has grown. The objective of this study was to investigate the impact of MTCs on residency training across multiple specialties and to identify impediments to MTC adoption. To our knowledge, this project is one of the first multispecialty studies of MTC implementation. A prospective cohort study was conducted. In June 2012, iPad 2s were issued to all residents after completion of privacy/confidentiality agreements and a mandatory hard-copy pre-survey covering four domains of usage (general, self-directed learning, clinical duties, and patient education). Residents who had received iPads previously were excluded. A voluntary post-survey was conducted online in June 2013. One hundred eighty-five subjects completed the pre-survey and 107 completed the post-survey (58% overall response rate). Eighty-six pre- and post-surveys were linked (response rate of 46%). There was a significant increase in residents accessing patient information/records and charting electronically (26.9% to 79.1%), and MTCs were perceived as beneficial for education, clinical practice, and patient education. The survey tool may be useful in collecting data on MTC use by other graduate medical education programs. PMID:27437163

  6. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers.

    Science.gov (United States)

    Collignon, Barbara; Schulz, Roland; Smith, Jeremy C; Baudry, Jerome

    2011-04-30

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
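
    The task-parallel pattern described — independent ligands farmed out on demand to worker ranks, with the most flexible ligands scheduled first — can be sketched with mpi4py. This is a schematic master/worker skeleton with dummy ligands and a placeholder docking routine, not the Autodock4.lga.MPI source; run it with at least two ranks, e.g. mpiexec -n 4 python dock_mpi.py.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, nproc = comm.Get_rank(), comm.Get_size()
        WORKTAG, STOPTAG = 1, 2

        def dock(ligand):
            """Placeholder for one docking run of a single compound."""
            name, n_rotatable = ligand
            return name, 42.0 - n_rotatable                    # dummy score

        if rank == 0:
            # Master: pre-order ligands from most to least flexible (rotatable bonds),
            # then hand them out on demand so faster workers keep busy.
            ligands = sorted([("lig%03d" % i, i % 12) for i in range(100)],
                             key=lambda lg: lg[1], reverse=True)
            status, results, sent = MPI.Status(), [], 0
            for w in range(1, min(nproc, len(ligands) + 1)):   # prime each worker
                comm.send(ligands[sent], dest=w, tag=WORKTAG); sent += 1
            for _ in range(len(ligands)):                      # collect, then refill
                results.append(comm.recv(source=MPI.ANY_SOURCE, tag=WORKTAG, status=status))
                if sent < len(ligands):
                    comm.send(ligands[sent], dest=status.Get_source(), tag=WORKTAG); sent += 1
            for w in range(1, nproc):
                comm.send(None, dest=w, tag=STOPTAG)
            print(sorted(results, key=lambda r: r[1])[:5])     # best dummy scores
        else:
            status = MPI.Status()
            while True:
                task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
                if status.Get_tag() == STOPTAG:
                    break
                comm.send(dock(task), dest=0, tag=WORKTAG)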

  7. Computer Games as a Tool for Implementation of Memory Policy (on the Example of Displaying Events of The Great Patriotic War in Video Games

    Directory of Open Access Journals (Sweden)

    Сергей Игоревич Белов

    2018-12-01

    Full Text Available This work studies the practice of using computer games as an instrument of memory policy. The relevance of the study stems both from the growing importance of video games as a means of forming ideas about the events of the past and from the limited research on this topic. The goal of the research is to identify the prospects for using computer games as an instrument of memory policy, taking the events of the Great Patriotic War as a case. The empirical base of the work was formed by analyzing the content of video games such as “Call of Duty 1”, “Call of Duty 14: WWII”, “Company of Heroes 2” and “Commandos 3: Destination Berlin”. The methodological base draws on elements of descriptive political analysis, B.F. Skinner's theory of operant conditioning, and H. Tajfel and J. Turner's concept of social identity. The author concludes that familiarization with these games contributes to consolidating, in users' minds, negative stereotypes regarding the participation of the Red Army in the Great Patriotic War. This integration of negative images is carried out using the methods of operant conditioning. The integration of a system of negative images into the mass consciousness of the inhabitants of the post-Soviet space makes it difficult to preserve the remnants of Soviet political symbols and the elements of identity constructed on their basis. The author puts forward the hypothesis that, in the case of complete de-Sovietization of the public policy space in the states that emerged from the collapse of the USSR, the task of revising the history of the Great Patriotic War will be greatly facilitated, and with the subsequent passing of the last eyewitnesses of the relevant events, achieving this goal will be only a

  8. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  9. N286.7-99, A Canadian standard specifying software quality management system requirements for analytical, scientific, and design computer programs and its implementation at AECL

    International Nuclear Information System (INIS)

    Abel, R.

    2000-01-01

    Analytical, scientific, and design computer programs (referred to in this paper as 'scientific computer programs') are developed for use in a large number of ways by the user-engineer to support and prove engineering calculations and assumptions. These computer programs are subject to frequent modifications inherent in their application and are often used for critical calculations and analysis relative to safety and functionality of equipment and systems. N286.7-99(4) was developed to establish appropriate quality management system requirements to deal with the development, modification, and application of scientific computer programs. N286.7-99 provides particular guidance regarding the treatment of legacy codes

  10. Comparison of Computer Based Instruction to Behavior Skills Training for Teaching Staff Implementation of Discrete-Trial Instruction with an Adult with Autism

    Science.gov (United States)

    Nosik, Melissa R.; Williams, W. Larry; Garrido, Natalia; Lee, Sarah

    2013-01-01

    In the current study, behavior skills training (BST) is compared to a computer based training package for teaching discrete trial instruction to staff, teaching an adult with autism. The computer based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following…

  11. Quantum walk computation

    International Nuclear Information System (INIS)

    Kendon, Viv

    2014-01-01

    Quantum versions of random walks have diverse applications that are motivating experimental implementations as well as theoretical studies. Recent results showing quantum walks are “universal for quantum computation” relate to algorithms, to be run on quantum computers. We consider whether an experimental implementation of a quantum walk could provide useful computation before we have a universal quantum computer

  12. Design, implementation, and testing of a software interface between the AN/SPS-65(V)1 radar and the SRC-6E reconfigurable computer

    OpenAIRE

    Guthrie, Thomas G.

    2005-01-01

    Approved for public release, distribution is unlimited This thesis outlines the development, programming, and testing of a logical interface between a radar system, the AN/SPS-65(V)1, and a general-purpose reconfigurable computing platform, the SRC Computer, Inc. model SRC-6E. To confirm the proper operation of the interface and associated subcomponents, software was developed to perform basic radar signal processing. The interface, as proven by the signal processing results, accurately ...

  13. Ultra-low-energy three-dimensional oxide-based electronic synapses for implementation of robust high-accuracy neuromorphic computation systems.

    Science.gov (United States)

    Gao, Bin; Bi, Yingjie; Chen, Hong-Yu; Liu, Rui; Huang, Peng; Chen, Bing; Liu, Lifeng; Liu, Xiaoyan; Yu, Shimeng; Wong, H-S Philip; Kang, Jinfeng

    2014-07-22

    Neuromorphic computing is an attractive computation paradigm that complements the von Neumann architecture. The salient features of neuromorphic computing are massive parallelism, adaptivity to the complex input information, and tolerance to errors. As one of the most crucial components in a neuromorphic system, the electronic synapse requires high device integration density and low-energy consumption. Oxide-based resistive switching devices have been shown to be a promising candidate to realize the functions of the synapse. However, the intrinsic variation increases significantly with the reduced spike energy due to the reduced number of oxygen vacancies in the conductive filament region. The large resistance variation may degrade the accuracy of neuromorphic computation. In this work, we develop an oxide-based electronic synapse to suppress the degradation caused by the intrinsic resistance variation. The synapse utilizes a three-dimensional vertical structure including several parallel oxide-based resistive switching devices on the same nanopillar. The fabricated three-dimensional electronic synapse exhibits the potential for low fabrication cost, high integration density, and excellent performances, such as low training energy per spike, gradual resistance transition under identical pulse training scheme, and good repeatability. A pattern recognition computation is simulated based on a well-known neuromorphic visual system to quantify the feasibility of the three-dimensional vertical structured synapse for the application of neuromorphic computation systems. The simulation results show significantly improved recognition accuracy from 65 to 90% after introducing the three-dimensional synapses.
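
    The variation-suppression argument — several resistive devices sharing one pillar conduct in parallel, so their fluctuations average out and the spread of the effective synaptic weight shrinks roughly as 1/√n — can be illustrated with a small Monte Carlo sketch. A Gaussian device spread is assumed purely for illustration; this is not the authors' device model or their recognition simulation.

        import numpy as np

        rng = np.random.default_rng(1)

        def synapse_weight(n_parallel, g_target=1.0, rel_sigma=0.4, trials=100_000):
            """Effective weight of a synapse built from n_parallel resistive devices.
            Each device's conductance scatters around the programmed value with
            relative spread rel_sigma; parallel conduction averages the scatter."""
            g = g_target * (1.0 + rel_sigma * rng.standard_normal((trials, n_parallel)))
            return g.mean(axis=1)

        for n in (1, 2, 4, 8):
            w = synapse_weight(n)
            print(f"{n} device(s) per synapse: relative std = {w.std() / w.mean():.3f}")
        # The relative spread falls roughly as 1/sqrt(n), which is the mechanism the
        # 3-D vertical structure exploits to keep recognition accuracy high.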

  14. Implementation and use of Gaussian process meta model for sensitivity analysis of numerical models: application to a hydrogeological transport computer code

    International Nuclear Information System (INIS)

    Marrel, A.

    2008-01-01

    In studies of environmental transfer and risk assessment, numerical models are used to simulate, understand and predict the transfer of pollutants. These computer codes can depend on a large number of uncertain input parameters (geophysical variables, chemical parameters, etc.) and can often be too expensive in computing time. To conduct uncertainty propagation studies and to measure the influence of each input on the response variability, the computer code has to be approximated by a meta-model which is built on an acceptable number of simulations of the code and requires a negligible calculation time. We focused our research work on the use of a Gaussian process meta-model to perform the sensitivity analysis of the code. We proposed a methodology with estimation and input-selection procedures in order to build the meta-model in the case of a high number of inputs and with few simulations available. Then, we compared two approaches to compute the sensitivity indices with the meta-model and proposed an algorithm to build prediction intervals for these indices. Afterwards, we were interested in the choice of the code simulations. We studied the influence of different sampling strategies on the predictiveness of the Gaussian process meta-model. Finally, we extended our statistical tools to a functional output of a computer code. We combined a decomposition on a wavelet basis with Gaussian process modelling before computing the functional sensitivity indices. All the tools and statistical methodologies that we developed were applied to the real case of a complex hydrogeological computer code simulating radionuclide transport in groundwater. (author)
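
    The overall workflow — fit a Gaussian process surrogate on a modest design of code runs, then estimate sensitivity indices cheaply on the surrogate — can be sketched as follows. The analytic test function stands in for the expensive transport code, and the pick-freeze Sobol' estimator is one common choice; neither is the author's specific implementation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import ConstantKernel, RBF

        rng = np.random.default_rng(0)

        def code(x):
            """Stand-in for the expensive code: 3 uncertain inputs on [0, 1]."""
            return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

        # 1. Small training design (the real code affords only a few simulations).
        X = rng.uniform(size=(60, 3))
        gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2, 0.2, 0.2]),
                                      normalize_y=True).fit(X, code(X))

        # 2. First-order Sobol' indices estimated on the surrogate (pick-freeze).
        def sobol_first_order(predict, dim, n=20000):
            A, B = rng.uniform(size=(n, dim)), rng.uniform(size=(n, dim))
            yA, S = predict(A), []
            for i in range(dim):
                C = B.copy()
                C[:, i] = A[:, i]                 # C shares only input i with A
                yC = predict(C)
                S.append(np.cov(yA, yC)[0, 1] / yA.var(ddof=1))
            return np.array(S)

        print(sobol_first_order(gp.predict, 3).round(2))   # first input should dominate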

  15. Cognitive Computing for Security.

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rothganger, Fredrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aimone, James Bradley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marinella, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Evans, Brian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Warrender, Christina E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mickel, Patrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

  16. Implementation is crucial but must be neurobiologically grounded. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    Science.gov (United States)

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L.

    2014-09-01

    From the perspective of language, Fitch's [1] claim that theories of cognitive computation should not be separated from those of implementation surely deserves applauding. Recent developments in the Cognitive Neuroscience of Language, leading to the new field of the Neurobiology of Language [2-4], emphasise precisely this point: rather than attempting to simply map cognitive theories of language onto the brain, we should aspire to understand how the brain implements language. This perspective resonates with many of the points raised by Fitch in his review, such as the discussion of unhelpful dichotomies (e.g., Nature versus Nurture). Cognitive dichotomies and debates have repeatedly turned out to be of limited usefulness when it comes to understanding language in the brain. The famous modularity-versus-interactivity and dual route-versus-connectionist debates are cases in point: in spite of hundreds of experiments using neuroimaging (or other techniques), or the construction of myriad computer models, little progress has been made in their resolution. This suggests that dichotomies proposed at a purely cognitive (or computational) level without consideration of biological grounding appear to be "asking the wrong questions" about the neurobiology of language. In accordance with these developments, several recent proposals explicitly consider neurobiological constraints while seeking to explain language processing at a cognitive level (e.g. [5-7]).

  17. Analysis, design, and implementation of PHENIX on-line computing systems software using Shlaer-Mellor object-oriented analysis and recursive design

    International Nuclear Information System (INIS)

    Kozlowski, T.; Desmond, E.; Haggerty, J.

    1997-01-01

    An early prototype of the core software for on-line computing systems for the PHENIX detector at RHIC has been developed using the Shlaer-Mellor OOA/RD method, including the automatic generation of C++ source code using a commercial translation engine and "architecture"

  18. Building Capacity Through Hands-on Computational Internships to Assure Reproducible Results and Implementation of Digital Documentation in the ICERT REU Program

    Science.gov (United States)

    Gomez, R.; Gentle, J.

    2015-12-01

    Modern data pipelines and computational processes require that meticulous methodologies be applied in order to ensure that the source data, algorithms, and results are properly curated, managed and retained while remaining discoverable, accessible, and reproducible. Given the complexity of understanding the scientific problem domain being researched, combined with the overhead of learning to use advanced computing technologies, it becomes paramount that the next generation of scientists and researchers learn to embrace best practices. The Integrative Computational Education and Research Traineeship (ICERT) is a National Science Foundation (NSF) Research Experience for Undergraduates (REU) Site at the Texas Advanced Computing Center (TACC). During Summer 2015, two ICERT interns joined the 3DDY project. 3DDY converts geospatial datasets into file types that can take advantage of new formats, such as natural user interfaces, interactive visualization, and 3D printing. Mentored by TACC researchers for ten weeks, students with no previous background in computational science learned to use scripts to build the first prototype of the 3DDY application, and leveraged Wrangler, the newest high-performance computing (HPC) resource at TACC. Test datasets for quadrangles in central Texas were used to assemble the 3DDY workflow and code. Test files were successfully converted into the stereolithography (STL) format, which is amenable for use with 3D printers. Test files and the scripts were documented and shared using the Figshare site, while metadata was documented for the 3DDY application using OntoSoft. These efforts validated a straightforward set of workflows to transform geospatial data and established the first prototype version of 3DDY. Adding the data and software management procedures helped students realize a broader set of tangible results (e.g. Figshare entries) and better document their progress and the final state of their work for the research group and community.

  19. Quantum computing with trapped ions

    International Nuclear Information System (INIS)

    Haeffner, H.; Roos, C.F.; Blatt, R.

    2008-01-01

    Quantum computers hold the promise of solving certain computational tasks much more efficiently than classical computers. We review recent experimental advances towards a quantum computer with trapped ions. In particular, various implementations of qubits, quantum gates and some key experiments are discussed. Furthermore, we review some implementations of quantum algorithms such as a deterministic teleportation of quantum information and an error correction scheme

  20. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P. T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, T. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yin, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2007-12-01

    The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  1. INREM II: a computer implementation of recent models for estimating the dose equivalent to organs of man from an inhaled or ingested radionuclide

    International Nuclear Information System (INIS)

    Killough, G.G.; Dunning, D.E. Jr.; Pleasant, J.C.

    1978-01-01

    This report describes a computer code, INREM II, which calculates the internal radiation dose equivalent to organs of man which results from the intake of a radionuclide by inhalation or ingestion. Deposition and removal of radioactivity from the respiratory tract is represented by the ICRP Task Group Lung Model. A four-segment catenary model of the GI tract is used to estimate movement of radioactive material that is ingested or swallowed after being cleared from the respiratory tract. Retention of radioactivity in other organs is specified by linear combinations of decaying exponential functions. The formation and decay of radioactive daughters is treated explicitly, with each radionuclide species in the chain having its own uptake and retention parameters, as supplied by the user. The dose equivalent to a target organ is computed as the sum of contributions from each source organ in which radioactivity is assumed to be situated. This calculation utilizes a matrix of S-factors (rem/μCi-day) supplied by the user for the particular choice of source and target organs. Output permits the evaluation of crossfire components of dose when penetrating radiations are present. INREM II is coded in FORTRAN IV and has been compiled and executed on an IBM-360 computer
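
    The final dose step described above — the dose equivalent to a target organ is the sum over source organs of an S-factor times the time-integrated activity residing in that source — reduces to a matrix-vector product. The sketch below uses hypothetical organ names and numbers, not values from INREM II.

        import numpy as np

        # Hypothetical S-factor matrix (rem per uCi-day): rows are target organs,
        # columns are source organs, as supplied by the user to INREM II.
        targets = ["lung", "liver", "red marrow"]
        sources = ["lung", "liver"]
        S = np.array([[3.0e-4, 1.0e-6],     # dose to lung from each source
                      [1.0e-6, 2.5e-4],     # dose to liver
                      [5.0e-7, 4.0e-6]])    # cross-fire dose to red marrow

        # Time-integrated activity in each source organ (uCi-day), i.e. the integral
        # of the retention functions over the period of interest.
        U = np.array([120.0, 35.0])

        dose = S @ U                         # rem to each target organ
        for organ, d in zip(targets, dose):
            print(f"{organ:11s}: {d:.3e} rem")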

  2. Analytic derivative couplings and first-principles exciton/phonon coupling constants for an ab initio Frenkel-Davydov exciton model: Theory, implementation, and application to compute triplet exciton mobility parameters for crystalline tetracene.

    Science.gov (United States)

    Morrison, Adrian F; Herbert, John M

    2017-06-14

    Recently, we introduced an ab initio version of the Frenkel-Davydov exciton model for computing excited-state properties of molecular crystals and aggregates. Within this model, supersystem excited states are approximated as linear combinations of excitations localized on molecular sites, and the electronic Hamiltonian is constructed and diagonalized in a direct-product basis of non-orthogonal configuration state functions computed for isolated fragments. Here, we derive and implement analytic derivative couplings for this model, including nuclear derivatives of the natural transition orbital and symmetric orthogonalization transformations that are part of the approximation. Nuclear derivatives of the exciton Hamiltonian's matrix elements, required in order to compute the nonadiabatic couplings, are equivalent to the "Holstein" and "Peierls" exciton/phonon couplings that are widely discussed in the context of model Hamiltonians for energy and charge transport in organic photovoltaics. As an example, we compute the couplings that modulate triplet exciton transport in crystalline tetracene, which is relevant in the context of carrier diffusion following singlet exciton fission.

  3. Teachers’ participation in professional development concerning the implementation of new technologies in class: a latent class analysis of teachers and the relationship with the use of computers, ICT self-efficacy and emphasis on teaching ICT skills

    Directory of Open Access Journals (Sweden)

    Kerstin Drossel

    2017-11-01

    Full Text Available Abstract The increasing availability of new technologies in an ever more digitalized world has gained momentum in practically all spheres of life, making technology-related skills a key competence, and not only in professional settings. Schools therefore assume responsibility for imparting these skills to their students, and hence to future generations of professionals. In so doing, teachers play a key role, with their competence in using new technologies constituting an essential prerequisite for the effective teaching of such skills. As models of school development and school effectiveness have found teacher professionalization to be a key element with regard to student achievement as well as teachers' in-class use of new technology, the present research project conducts secondary analyses using data from the IEA International Computer and Information Literacy Study 2013 (ICILS 2013) on internal and external teacher professionalization. Particular emphasis is placed on the implementation of new technologies in class in a comparison between the education systems of Germany and the Czech Republic. A latent class analysis serves to establish a teacher typology with regard to technology-related professional development. This typology is subsequently used for further analyses of additional factors that correlate with teachers' use of computers in class, including teachers' ICT self-efficacy and their emphasis on teaching ICT skills. The results show two different types of teachers across both countries. Teachers who participate in professional development use computers more frequently in class, put more emphasis on teaching ICT skills and have a stronger sense of ICT self-efficacy. When comparing teachers in Germany and the Czech Republic, teachers in Germany who participate in professional development consider themselves more ICT self-efficacious, while teachers in the Czech Republic use computers more often.

  4. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  5. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general-purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general-computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (Stream), and the new framework, OpenCL, which tries to unify the GPGPU computing models.

  6. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  7. Pilot implementation

    DEFF Research Database (Denmark)

    Hertzum, Morten; Bansler, Jørgen P.; Havn, Erling C.

    2012-01-01

    be difficult to plan and conduct. It is sometimes assumed that pilot implementations are less complicated and risky than ordinary implementations. Pilot implementations are, however, neither prototyping nor small-scale versions of full-scale implementations; they are fundamentally different and have their own...

  8. EDMS implementation challenge.

    Science.gov (United States)

    De La Torre, Marta

    2002-08-01

    The challenges faced by facilities wishing to implement an electronic medical record system are complex and overwhelming. Issues such as customer acceptance, basic computer skills, and a thorough understanding of how the new system will impact work processes must be considered and acted upon. Acceptance and active support are necessary from Senior Administration and key departments to enable this project to achieve measurable success. This article details one hospital's "journey" through design and successful implementation of an electronic medical record system.

  9. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  10. Computational artifacts

    DEFF Research Database (Denmark)

    Schmidt, Kjeld; Bansler, Jørgen P.

    2016-01-01

    The key concern of CSCW research is that of understanding computing technologies in the social context of their use, that is, as integral features of our practices and our lives, and to think of their design and implementation under that perspective. However, the question of the nature...... of that which is actually integrated in our practices is often discussed in confusing ways, if at all. The article aims to try to clarify the issue and in doing so revisits and reconsiders the notion of ‘computational artifact’....

  11. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  12. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    Science.gov (United States)

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which can negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 proposes candidate identification results. Afterwards, users can make a manual selection by comparing the unidentified image with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features for automated identification and yielded an overall classification success rate of 87% to the species level in the Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
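
    The retrieval step — compare a query image's Gabor texture features with stored species templates and return the closest candidates for manual verification — can be sketched generically as follows. This is a plain Gabor-filter/nearest-neighbour illustration with toy images, not the AFIS1.0 code or its specific Gabor surface features.

        import numpy as np
        from scipy.signal import fftconvolve

        def gabor_kernel(freq, theta, sigma=4.0, size=21):
            """Real-valued Gabor kernel with spatial frequency `freq` and orientation `theta`."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

        def gabor_features(img, freqs=(0.1, 0.2, 0.3), n_thetas=4):
            """Mean and standard deviation of the responses over a small Gabor bank."""
            feats = []
            for f in freqs:
                for t in np.linspace(0, np.pi, n_thetas, endpoint=False):
                    resp = fftconvolve(img, gabor_kernel(f, t), mode="same")
                    feats += [resp.mean(), resp.std()]
            return np.array(feats)

        def retrieve(query, gallery, k=5):
            """Return the k gallery entries closest to the query in Gabor-feature space."""
            qf = gabor_features(query)
            ranked = sorted((np.linalg.norm(qf - gabor_features(img)), name)
                            for name, img in gallery)
            return ranked[:k]

        # Toy usage with random stand-in images
        rng = np.random.default_rng(0)
        gallery = [(f"species_{i}", rng.random((64, 64))) for i in range(10)]
        print(retrieve(rng.random((64, 64)), gallery, k=3))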

  13. Improving radiation awareness and feeling of personal security of non-radiological medical staff by implementing a traffic light system in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Heilmaier, C.; Mayor, A.; Zuber, N.; Weishaupt, D. [Stadtspital Triemli, Zurich (Switzerland). Dept. of Radiology; Fodor, P. [Stadtspital Triemli, Zurich (Switzerland). Dept. of Anesthesiology and Intensive Care Medicine

    2016-03-15

    Non-radiological medical professionals often need to remain in the scanning room during computed tomography (CT) examinations to supervise patients in critical condition. Independent of protective devices, their position significantly influences the radiation dose they receive. The purpose of this study was to assess whether a traffic light system indicating areas of different radiation exposure improves non-radiological medical staff's radiation awareness and feeling of personal security. Phantom measurements were performed to define areas of different dose rates, and colored stickers were applied to the floor according to a traffic light system: green = lowest, orange = intermediate, and red = highest possible radiation exposure. Non-radiological medical professionals with different years of working experience evaluated the system using a structured questionnaire. The Kruskal-Wallis and Spearman's correlation tests were applied for statistical analysis. Fifty-six subjects (30 physicians, 26 nursing staff) took part in this prospective study. The overall rating of the system was very good, and almost all professionals tried to stand on the green stickers during the scan. The system significantly increased radiation awareness and the feeling of personal protection, particularly in staff with ≤ 5 years of working experience (p < 0.05). The majority of non-radiological medical professionals stated that staying on the green stickers would be compatible with patient care. Knowledge of radiation protection was poor in all groups, especially among entry-level employees (p < 0.05). A traffic light system in the CT scanning room indicating areas with the lowest, intermediate, and highest possible radiation exposure is much appreciated. It increases radiation awareness, improves the sense of personal radiation protection, and may support endeavors to lower occupational radiation exposure, although the best radiation protection is always to remain outside the CT room during the scan.

  14. Vectorization, parallelization and implementation of nuclear codes (MVP/GMVP, QMDRELP, EQMD, HSABC, CURBAL, STREAM V3.1, TOSCA, EDDYCAL, RELAP5/MOD2/C36-05, RELAP5/MOD3) on the VPP500 computer system. Progress report 1995 fiscal year

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Watanabe, Hideo; Fujita, Toyozo [Fujitsu Ltd., Tokyo (Japan); Kawai, Wataru; Harada, Hiroo; Gorai, Kazuo; Yamasaki, Kazuhiko; Shoji, Makoto; Fujii, Minoru

    1996-06-01

    At the Center for Promotion of Computational Science and Engineering, eight time-consuming nuclear codes suggested by users have been vectorized and parallelized on the VPP500 computer system. In addition, two nuclear codes used on the VP2600 computer system were implemented on the VPP500 computer system. The neutron and photon transport calculation code MVP/GMVP and the relativistic quantum molecular dynamics code QMDRELP have been parallelized. The extended quantum molecular dynamics code EQMD and the adiabatic base calculation code HSABC have been parallelized and vectorized. The ballooning turbulence simulation code CURBAL, the 3-D non-stationary compressible fluid dynamics code STREAM V3.1, the operating plasma analysis code TOSCA and the eddy current analysis code EDDYCAL have been vectorized. The reactor safety analysis codes RELAP5/MOD2/C36-05 and RELAP5/MOD3 were implemented on the VPP500 computer system. (author)

  15. Quantum mechanics and computation

    International Nuclear Information System (INIS)

    Cirac Sasturain, J. I.

    2000-01-01

    We review how some of the basic principles of Quantum Mechanics can be used in the field of computation. In particular, we explain why a quantum computer can perform certain tasks in a much more efficient way than the computers we have available nowadays. We give the requirements for a quantum system to be able to implement a quantum computer and illustrate these requirements in some particular physical situations. (Author) 16 refs

  16. Implementation Politics

    DEFF Research Database (Denmark)

    Hegland, Troels Jacob; Raakjær, Jesper

    2008-01-01

    ABSTRACT: Denmark is among the more loyal European Union (EU) member states when it comes to national implementation of the Common Fisheries Policy (CFP). However, even in Denmark several mechanisms contribute to sub-optimal implementation of the CFP. Looking at implementation problems for a relatively loyal member state, this chapter sheds critical light on national implementation of the CFP in the EU as a whole. The chapter initially provides a description of the institutional set-up for fisheries policy-making and implementation in Denmark, including a short historical account..../networks and prevailing discourses. The inability of the EU to ensure that the conservation goals agreed at the EU level are loyally pursued during national implementation is one of the reasons why the EU has been struggling to keep fishing mortality rates at a sustainable level.

  17. Vertical Implementation

    NARCIS (Netherlands)

    Rensink, Arend; Gorrieri, Roberto

    2001-01-01

    We investigate criteria to relate specifications and implementations belonging to conceptually different levels of abstraction. For this purpose, we introduce the generic concept of a vertical implementation relation, which is a family of binary relations indexed by a refinement function that maps

  18. Center for computer security: Computer Security Group conference. Summary

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-06-01

    Topics covered include: computer security management; detection and prevention of computer misuse; certification and accreditation; protection of computer security, perspective from a program office; risk analysis; secure accreditation systems; data base security; implementing R and D; key notarization system; DOD computer security center; the Sandia experience; inspector general's report; and backup and contingency planning. (GHT)

  19. Implementing and testing program PLOTTAB

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1988-01-01

    Enclosed is a description of the magnetic tape or floppy diskette containing the PLOTTAB code package. In addition detailed information is provided on implementation and testing of this code. See part I for mainframe computers; part II for personal computers. These codes are documented in IAEA-NDS-82. (author)

  20. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  1. The Stevens Personal Computer Plan.

    Science.gov (United States)

    Friedman, Edward A.; Moeller, Joseph J., Jr.

    1984-01-01

    Describes evolution, implementation, and development of a personal computer plan at Stevens Institute of Technology (New Jersey). Although Stevens was the first college to establish a personal computer requirement, the core curriculum could not accommodate additional computing courses. Therefore, computing was integrated throughout the entire…

  2. Treaty implementation

    International Nuclear Information System (INIS)

    Dunn, L.A.

    1990-01-01

    This paper touches on three aspects of the relationship between intelligence and treaty implementation, a two-way association. First, the author discusses the role of intelligence as a basis for compliance monitoring and treaty verification. Second, the author discusses the payoffs of treaty implementation, in particular on-site inspection, for intelligence gathering and the intelligence process. Third, the author goes in another direction and discusses some of the tensions between the intelligence-gathering and treaty-implementation processes, especially with regard to extensive use of on-site inspection, such as we are likely to see in monitoring compliance with future arms control treaties

  3. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  4. Efficient Computer Implementations of Fast Fourier Transforms.

    Science.gov (United States)

    1980-12-01

    ... memory was not included in the ... for comparison because program memory required depends on the machine word size. The program memor...
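
    As a rough companion illustration of the subject of this record (not code from the 1980 report), the sketch below implements a recursive radix-2 Cooley-Tukey FFT and checks it against a direct O(N^2) DFT for a power-of-two length.

      # Hedged sketch, not taken from the report: recursive radix-2 Cooley-Tukey
      # FFT, verified against a direct DFT.
      import cmath

      def dft(x):
          """Direct O(N^2) discrete Fourier transform, used here as a reference."""
          N = len(x)
          return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
                  for k in range(N)]

      def fft(x):
          """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
          N = len(x)
          if N == 1:
              return list(x)
          even, odd = fft(x[0::2]), fft(x[1::2])
          twiddled = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
          return ([even[k] + twiddled[k] for k in range(N // 2)] +
                  [even[k] - twiddled[k] for k in range(N // 2)])

      if __name__ == "__main__":
          x = [complex(n % 5, 0) for n in range(16)]
          assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft(x)))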

  5. Implementation of the Kids-CAT in clinical settings: a newly developed computer-adaptive test to facilitate the assessment of patient-reported outcomes of children and adolescents in clinical practice in Germany.

    Science.gov (United States)

    Barthel, D; Fischer, K I; Nolte, S; Otto, C; Meyrose, A-K; Reisinger, S; Dabs, M; Thyen, U; Klein, M; Muehlan, H; Ankermann, T; Walter, O; Rose, M; Ravens-Sieberer, U

    2016-03-01

    To describe the implementation process of a computer-adaptive test (CAT) for measuring health-related quality of life (HRQoL) of children and adolescents in two pediatric clinics in Germany. The study focuses on the feasibility and user experience with the Kids-CAT, particularly the patients' experience with the tool and the pediatricians' experience with the Kids-CAT Report. The Kids-CAT was completed by 312 children and adolescents with asthma, diabetes or rheumatoid arthritis. The test was applied during four clinical visits over a 1-year period. A feedback report with the test results was made available to the pediatricians. To assess both feasibility and acceptability, a multimethod research design was used. To assess the patients' experience with the tool, the children and adolescents completed a questionnaire. To assess the clinicians' experience, two focus groups were conducted with eight pediatricians. The children and adolescents indicated that the Kids-CAT was easy to complete. All pediatricians reported that the Kids-CAT was straightforward and easy to understand and integrate into clinical practice; they also expressed that routine implementation of the tool would be desirable and that the report was a valuable source of information, facilitating the assessment of self-reported HRQoL of their patients. The Kids-CAT was considered an efficient and valuable tool for assessing HRQoL in children and adolescents. The Kids-CAT Report promises to be a useful adjunct to standard clinical care with the potential to improve patient-physician communication, enabling pediatricians to evaluate and monitor their young patients' self-reported HRQoL.

  6. Pilot Implementations

    DEFF Research Database (Denmark)

    Manikas, Maria Ie

    This PhD dissertation engages in the study of pilot (system) implementation. In the field of information systems, pilot implementations are commissioned as a way to learn from real use of a pilot system with real data, by real users during an information systems development (ISD) project and before the final system is implemented. Among others, their use is argued to investigate the fit between the technical design and the organisational use. But what is a pilot implementation really? In this dissertation, I set out to address this conceptual question. I initially investigate this question... The analysis is conducted by means of a theoretical framework that centres on the concept infrastructure. With infrastructure I understand the relation between organised practice and the information systems supporting this practice. Thus, infrastructure is not a thing but a relational and situated concept...

  7. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computing. Its potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing. Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularity...

  8. A Comparison of Ellipse-Fitting Techniques for Two and Three-Dimensional Strain Analysis, and Their Implementation in an Integrated Computer Program Designed for Field-Based Studies

    Science.gov (United States)

    Vollmer, F. W.

    2010-12-01

    A new computer program, EllipseFit 2, was developed to implement computational and graphical techniques for two and three-dimensional geological finite strain analysis. The program includes an integrated set of routines to derive three-dimensional strain from oriented digital photographs, with a graphical interface suitable for field-based structural studies. The intuitive interface and multi-platform deployment make it useful for structural geology teaching laboratories as well (the program is free). Images of oriented sections are digitized using center-point, five-point ellipse, or n-point polygon moment-equivalent ellipse fitting. The latter allows strain calculation from irregular polygons with sub-pixel accuracy (Steger, 1996; Mulchrone and Choudhury, 2004). Graphical strain ellipse techniques include center-to-center methods (Fry, 1979; Erslev, 1988; Erslev and Ge, 1990), with manual and automatic n-point ellipse-fitting. Graphical displays include axial length graphs, Rf/Φ graphs (Dunnet, 1969), logarithmic and hyperbolic polar graphs (Elliott, 1970; Wheeler, 1984) with automatic contouring, and strain maps. Best-fit ellipse calculations include harmonic and circular means, and eigenvalue (Shimamoto and Ikeda, 1976) and mean radial length (Mulchrone et al., 2003) shape-matrix calculations. Shape-matrix error analysis is done analytically (Mulchrone, 2005) and using bootstrap techniques (Efron, 1979). The initial data set can be unstrained to check variation in the calculated pre-strain fabric. Fitting of ellipse-section data to a best-fit ellipsoid (b*) is done using the shape-matrix technique of Shan (2008). Error analysis is done by calculating the section ellipses of b*, and comparing the misfits between calculated and observed section ellipses. Graphical displays of ellipsoid data include axial-ratio (Flinn, 1962) and octahedral strain magnitude (Hossack, 1968) graphs. Calculations were done to test and compare computational techniques. For two
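
    As a small illustration of the shape-matrix idea mentioned above (a sketch only, not code from EllipseFit 2), the following recovers an object's axial ratio and long-axis orientation from digitized point coordinates via the eigen-decomposition of their second-moment matrix; the test strains a unit circle by a pure shear of known ratio.

      # Hedged sketch (not the EllipseFit 2 implementation): axial ratio R and
      # long-axis orientation phi of a point set from its second-moment
      # ("shape") matrix, via eigenvalues and eigenvectors.
      import numpy as np

      def moment_ellipse(points):
          """points: (N, 2) array of x, y coordinates, e.g. a digitized outline."""
          xy = np.array(points, dtype=float)
          xy -= xy.mean(axis=0)                 # centre the object
          m = xy.T @ xy / len(xy)               # 2x2 second-moment matrix
          evals, evecs = np.linalg.eigh(m)      # eigenvalues in ascending order
          ratio = float(np.sqrt(evals[1] / evals[0]))
          phi = float(np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))) % 180.0
          return ratio, phi                     # phi measured modulo 180 degrees

      if __name__ == "__main__":
          t = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
          circle = np.c_[np.cos(t), np.sin(t)]
          strain = np.array([[2.0, 0.0], [0.0, 0.5]])   # pure shear, R = 4
          R, phi = moment_ellipse(circle @ strain.T)
          print(f"R = {R:.2f}, phi = {phi:.1f} deg")    # expect R ~ 4.00, phi ~ 0 deg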

  9. Research in computer science

    Science.gov (United States)

    Ortega, J. M.

    1986-01-01

    Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.

  10. Elusive Implementation

    DEFF Research Database (Denmark)

    Heering Holt, Ditte; Rod, Morten Hulvej; Waldorff, Susanne Boch

    2018-01-01

    ... in health. However, despite growing support for intersectoral policymaking, implementation remains a challenge. Critics argue that public health has remained naïve about the policy process and a better understanding is needed. Based on ethnographic data, this paper conducts an in-depth analysis of a local process of intersectoral policymaking in order to gain a better understanding of the challenges posed by implementation. To help conceptualize the process, we apply the theoretical perspective of organizational neo-institutionalism, in particular the concepts of rationalized myth and decoupling. Methods: On the basis of an explorative study among ten Danish municipalities, we conducted an ethnographic study of the development of a municipal-wide implementation strategy for the intersectoral health policy of a medium-sized municipality. The main data sources consist of ethnographic field notes from participant...

  11. Practical scientific computing

    CERN Document Server

    Muhammad, A

    2011-01-01

    Scientific computing is about developing mathematical models, numerical methods and computer implementations to study and solve real problems in science, engineering, business and even social sciences. Mathematical modelling requires deep understanding of classical numerical methods. This essential guide provides the reader with sufficient foundations in these areas to venture into more advanced texts. The first section of the book presents numEclipse, an open source tool for numerical computing based on the notion of MATLAB®. numEclipse is implemented as a plug-in for Eclipse, a leading integrated development environment...

  12. Requirements for supercomputing in energy research: The transition to massively parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  13. Implementing Pseudonymity

    Directory of Open Access Journals (Sweden)

    Miranda Mowbray

    2006-03-01

    I will give an overview of some technologies that enable pseudonymity - allowing individuals to reveal or prove information about themselves to others without revealing their full identity. I will describe some functionalities relating to pseudonymity that can be implemented, and some that cannot. My intention is to present enough of the mathematics that underlies technology for pseudonymity to show that it is indeed possible to implement some functionalities that at first glance may appear impossible. In particular, I will show that several of the intended functions of the UK national ID could be provided in a pseudonymous fashion, allowing greater privacy. I will also outline some technology developed at HP Labs which ensures that users’ personal data is released only to software that has been checked to conform to their preferred privacy policies.
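
    One elementary building block behind such functionality, shown below purely as an illustration (it is not the scheme from the article), is deriving stable but mutually unlinkable pseudonyms for different services from a single master secret: each service always sees the same identifier for a user, yet two services cannot link their identifiers to the same person without the key.

      # Hedged illustration, not the article's construction: service-specific
      # pseudonyms derived with HMAC-SHA256 from one master secret. Stable per
      # service, unlinkable across services without the key.
      import hashlib
      import hmac
      import secrets

      def pseudonym(master_key: bytes, service_id: str) -> str:
          """Return a short, stable pseudonym for (user key, service)."""
          return hmac.new(master_key, service_id.encode(), hashlib.sha256).hexdigest()[:16]

      if __name__ == "__main__":
          key = secrets.token_bytes(32)         # held by the user or an identity provider
          print(pseudonym(key, "library"))      # stable identifier at the library
          print(pseudonym(key, "library"))      # same value every time
          print(pseudonym(key, "tax-office"))   # cannot be linked to the library one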

  14. Offline computing and networking

    International Nuclear Information System (INIS)

    The crucial ingredient in the model of a central computing facility for the SSC is the reliance on cheap farms of microprocessors for most of the computing needs. It is clear that without the implementation of such farms, either within high energy physics or by industrial sources, SSC computing cannot be done without an enormous and unacceptable increase in the cost. We must have both the hardware and software ability to make microprocessor farms work. The other components of the model appear to be well within reasonable extrapolations of today's computing related technology

  15. Introduction to quantum computers

    CERN Document Server

    Berman, Gennady P; Mainieri, Ronnie; Tsifrinovich, Vladimir I

    1998-01-01

    Quantum computing promises to solve problems which are intractable on digital computers. Highly parallel quantum algorithms can decrease the computational time for some problems by many orders of magnitude. This important book explains how quantum computers can do these amazing things. Several algorithms are illustrated: the discrete Fourier transform, Shor’s algorithm for prime factorization; algorithms for quantum logic gates; physical implementations of quantum logic gates in ion traps and in spin chains; the simplest schemes for quantum error correction; correction of errors caused by im

  16. Software For Computing Selected Functions

    Science.gov (United States)

    Grant, David C.

    1992-01-01

    Technical memorandum presents collection of software packages in Ada implementing mathematical functions used in science and engineering. Provides programmer with function support in Pascal and FORTRAN, plus support for extended-precision arithmetic and complex arithmetic. Valuable for testing new computers, writing computer code, or developing new computer integrated circuits.

  17. Implementation of a cluster Beowulf

    International Nuclear Information System (INIS)

    Victorino Guzman, Jorge Enrique

    2001-01-01

    Among the simulation systems that put great stress on computational resources and performance are climate models, whose high implementation cost makes them difficult to acquire. An alternative that offers good performance at a reasonable cost is the construction of a Beowulf cluster, which emulates the behaviour of a computer with several processors. In the present article we discuss the hardware requirements for the construction of the Beowulf cluster, the software resources for the implementation of the CCM3.6 model, and the performance of the Beowulf cluster of the Meteorology Research Group at the National University of Colombia with different numbers of processors
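
    As a minimal illustration of how work is split across the nodes of such a cluster (a sketch only, not code from the article; it assumes the mpi4py package and an MPI runtime, neither of which is mentioned in the record), each process below integrates its own slice of a simple quadrature and the partial results are reduced on rank 0.

      # Hedged sketch: run with e.g.  mpirun -np 4 python pi_mpi.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each process sums its own share of the midpoint rule for 4/(1+x^2) on [0, 1].
      n = 1_000_000
      local = sum(4.0 / (1.0 + ((i + 0.5) / n) ** 2) for i in range(rank, n, size)) / n
      pi = comm.reduce(local, op=MPI.SUM, root=0)   # gather the partial sums on rank 0

      if rank == 0:
          print(f"pi ~ {pi:.6f} computed on {size} process(es)")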

  18. Initial findings from a mixed-methods evaluation of computer-assisted therapy for substance misuse in prisoners: Development, implementation and clinical outcomes from the ‘Breaking Free Health & Justice’ treatment and recovery programme

    Directory of Open Access Journals (Sweden)

    Sarah Elison

    2015-08-01

    Background: Within the United Kingdom’s ‘Transforming Rehabilitation’ agenda, reshaping drug and alcohol interventions in prisons is central to the Government’s approach to addressing substance dependence in the prison population and reducing reoffending. To achieve this, a through-care project to support offenders following release, ‘Gateways’, is taking place, providing ‘through the gate’ support to released offenders, including help with organising accommodation, education and employment, and access to a peer supporter. In addition, Gateways is providing access to an evidence-based computer-assisted therapy (CAT) programme for substance misuse, Breaking Free Health & Justice (BFHJ). Developed in partnership with the Ministry of Justice (MoJ) National Offender Management Services (NOMS), and based on a community version of the programme, Breaking Free Online (BFO), BFHJ provides access to clinically-robust techniques based on cognitive behavioural therapy (CBT) and promotes the role of technology-enhanced approaches in recovery from substance misuse. The BFHJ programme is provided via ‘Virtual Campus’ (VC), a secure, web-based learning environment delivered by NOMS and the Department for Business, Innovation and Skills, which has no links to websites not approved by MoJ, and provides prisoners with access to online training courses around work and skills. Providing BFHJ on VC makes the programme the world’s first online healthcare programme to be provided in prisons. Aims: Although there is an emerging evidence-base for the effectiveness of the community version of the BFO programme and its implementation within community treatment settings (Davies, Elison, Ward, & Laudet, 2015; Elison, Davies, & Ward, 2015a, 2015b; Elison, Humphreys, Ward, & Davies, 2013; Elison, Ward, Davies, Lidbetter, et al., 2014; Elison, Ward, Davies, & Moody, 2014), its potential within prison settings requires exploration. This study therefore sought to

  19. Cloud Computing Governance Lifecycle

    Directory of Open Access Journals (Sweden)

    Soňa Karkošková

    2016-06-01

    Externally provisioned cloud services enable flexible and on-demand sourcing of IT resources. Cloud computing introduces new challenges such as the need to redefine business processes, establish specialized governance and management, organizational structures and relationships with external providers, and manage new types of risk arising from dependency on external providers. There is a general consensus that cloud computing, in addition to challenges, brings many benefits, but it is unclear how to achieve them. Cloud computing governance helps to create business value by obtaining benefits from the use of cloud computing services while optimizing investment and risk. The challenge organizations face in relation to governing cloud services is how to design and implement cloud computing governance to gain the expected benefits. This paper aims to provide guidance on implementation activities of the proposed Cloud computing governance lifecycle from the cloud consumer perspective. The proposed model is based on the SOA Governance Framework and consists of a lifecycle for implementation and continuous improvement of the cloud computing governance model.

  20. Wavelets and Wavelet Packets on Quantum Computers

    OpenAIRE

    Klappenecker, Andreas

    1999-01-01

    We show how periodized wavelet packet transforms and periodized wavelet transforms can be implemented on a quantum computer. Surprisingly, we find that the implementation of wavelet packet transforms is less costly than the implementation of wavelet transforms on a quantum computer.

  1. Defects in Quantum Computers.

    Science.gov (United States)

    Gardas, Bartłomiej; Dziarmaga, Jacek; Zurek, Wojciech H; Zwolak, Michael

    2018-03-14

    The shift of interest from general purpose quantum computers to adiabatic quantum computing or quantum annealing calls for a broadly applicable and easy to implement test to assess how quantum or adiabatic a specific piece of hardware is. Here we propose such a test based on an exactly solvable many-body system-the quantum Ising chain in transverse field-and implement it on the D-Wave machine. An ideal adiabatic quench of the quantum Ising chain should lead to an ordered broken symmetry ground state with all spins aligned in the same direction. An actual quench can be imperfect due to decoherence, noise, flaws in the implemented Hamiltonian, or simply too fast to be adiabatic. Imperfections result in topological defects: Spins change orientation, kinks punctuating ordered sections of the chain. The number of such defects quantifies the extent by which the quantum computer misses the ground state, and is, therefore, imperfect.
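
    Counting the defects themselves is straightforward once spin configurations have been read out; the sketch below (an illustration only, not the authors' analysis code) counts kinks, i.e. nearest-neighbour pairs of opposite spins, in toy samples of an imperfectly quenched ferromagnetic chain.

      # Hedged sketch, not the paper's code: kink counting in read-out spin strings.
      import random

      def count_kinks(spins):
          """Number of nearest-neighbour pairs with opposite spins (open chain)."""
          return sum(1 for a, b in zip(spins, spins[1:]) if a != b)

      def noisy_quench_sample(n, flip_prob):
          """Toy stand-in for hardware output: an ordered chain with random flips."""
          return [s if random.random() > flip_prob else -s for s in [1] * n]

      if __name__ == "__main__":
          random.seed(1)
          samples = [noisy_quench_sample(512, 0.02) for _ in range(100)]
          mean_kinks = sum(count_kinks(s) for s in samples) / len(samples)
          print(f"average kink density: {mean_kinks / 511:.4f}")   # 0 for a perfect quench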

  2. Algorithms on ensemble quantum computers.

    Science.gov (United States)

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σ_z^(1/4), as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.

  3. Formal Methods for Information Protection Technology. Task 2: Mathematical Foundations, Architecture and Principles of Implementation of Multi-Agent Learning Components for Attack Detection in Computer Networks. Part 1

    National Research Council Canada - National Science Library

    Kotenko, I

    2003-01-01

    .... Integrity, confidentiality and availability of the network resources must be assured. To detect and suppress different types of computer unauthorized intrusions, modern network security systems (NSS...

  4. Computational differential topology

    Directory of Open Access Journals (Sweden)

    Denis Blackmore

    2007-04-01

    Some of the more differential aspects of the nascent field of computational topology are introduced and treated in considerable depth. Relevant categories based upon stratified geometric objects are proposed, and fundamental problems are identified and discussed in the context of both differential topology and computer science. New results on the triangulation of objects in the computational differential categories are proven, and evaluated from the perspective of effective computability (algorithmic solvability). In addition, the elements of innovative, effectively computable approaches for analyzing and obtaining computer generated representations of geometric objects based upon singularity/stratification theory and obstruction theory are formulated. New methods for characterizing complicated intersection sets are proven using differential analysis and homology theory. Also included are brief descriptions of several implementation aspects of some of the approaches described, as well as applications of the results in such areas as virtual sculpting, virtual surgery, modeling of heterogeneous biomaterials, and high speed visualizations.

  5. Computability in HOL

    DEFF Research Database (Denmark)

    Hougaard, Ole Ildsgaard

    1994-01-01

    This paper describes the implementation of a formal model for computability theory in the logical system HOL. The computability is modeled through an imperative language formally defined with the use of the Backus-Naur form and natural semantics. I will define the concepts of computable functions, recursive sets, and decidable or partially decidable predicates, and show how they relate to each other. A central subject will be that of the use of only a finite amount of space by any single statement. This leads to a theorem about the computability of the composition of computable functions. The report will then evolve in two directions: The first subject is the reduction of recursive sets, leading to the unsolvability of the halting problem. The other is two general results of computability theory: The s-m-n theorem and Kleene's version of the 2nd recursion theorem. The use of the HOL system implies...

  6. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.   §  Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models §  Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  7. Computer assisted roentgenology

    International Nuclear Information System (INIS)

    Trajkova, N.; Velkova, K.

    1999-01-01

    This is a report on the potentials and superiorities of computer tomography (CT), assumed as an up-to-date imaging examination method in medicine. The current trend in the development of computer assisted roentgenology consists in the implementation of new computer and communication systems promoting diagnostic and therapeutic activities. CT-study application is discussed with special reference to diagnosis and treatment of brain, lung, mediastinal and abdominal diseases. The new trends in the particular implementation of CT are presented, namely: CT-assisted biopsy, CT-assisted abscess drainage, drug administration under CT control, as well as the wide use of CT in orthopaedic surgery, otorhinolaryngology etc. Also emphasis is laid on the important role played by three-dimensional technologies in computer-assisted surgery, leading to a qualitatively new stage in the surgical therapeutic approach to patients

  8. Implementation of computational model for the evaluation of electromagnetic susceptibility of the cables for communication and control of high voltage substations; Implementacao de modelo computacional para a avaliacao da suscetibilidade eletromagnetica dos cabos de comunicacao e controle de subestacoes de alta tensao

    Energy Technology Data Exchange (ETDEWEB)

    Sartin, Antonio C.P. [Companhia de Transmissao de Energia Eletrica Paulista (CTEEP), Bauru, SP (Brazil); Dotto, Fabio R.L.; Sant' Anna, Cezar J.; Thomazella, Rogerio [Fundacao para o Desenvolvimento de Bauru, SP (Brazil); Ulson, Jose A.C.; Aguiar, Paulo R. de [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Bauru, SP (Brazil)

    2009-07-01

    This work presents the implementation of an electromagnetic model, surveyed in the literature and adapted, for the supervision, protection, communication and control cables of high-voltage substations. The model was implemented with a computational tool in order to obtain the electromagnetic behavior of the various cables used in the CTEEP substation, which are subject to several sources of electromagnetic interference in this harsh environment, such as lightning strikes, switching surges and the corona effect. The results obtained in computer simulations were compared with the results of laboratory tests carried out on a set of cables representative of the systems present in 440 kV substations. The study characterized the electromagnetic interference, ranked it, and identified possibly susceptible points in the substation, which contributed to the development of a technical procedure that minimizes unwanted effects in the communication and control systems of the substation. The procedure also assures maximum reliability and availability in the operation of the company's electrical power system.

  9. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    Optoelectronic Implementation of Neural Networks – Use of Optics in Computing. R Ramachandran. General Article, Resonance – Journal of Science Education, Volume 3, Issue 9, September 1998, pp 45-55.

  10. ENS implementation

    Energy Technology Data Exchange (ETDEWEB)

    Teodorescu, R.; Blaabjerg, F.; Asiminoaei, L.; Timbus, A.V.

    2004-07-01

    This report is part of the research contract PSO-Eltra 2524/2003, in which Aalborg University cooperated with PowerLynx A/S on the development of the ENS function for the PowerLynx PV inverters. The work started 01.01.03 and ended 29.02.04 and involved Frede Blaabjerg, Remus Teodorescu, Lucian Asiminoaie and Adrian Timbus from Aalborg University and Uffe Borup from PowerLynx A/S. The objective was to make the PowerLynx PV inverters compatible with the German ENS standard, which mainly consists of the requirement to disconnect from the grid within max. 2 seconds if the grid impedance changes by 0.5 ohms resistive. During 01.07-31.08.2003, an algorithm based on the injection of a voltage at 75 Hz and the on-line measurement of the 75 Hz component of the grid current and voltage using a simplified DFT was developed by Uffe Borup, Remus Teodorescu and Lucian Asiminoaie. The algorithm is described in detail in the report 'ENS design' enclosed in Appendix A and in the two published papers (final paper for APEC'04 and approved digest for PESC'04). The algorithm was then implemented, and the experimental tests carried out proved satisfactory, as shown in chapter 2, 'ENS implementation'. Very few missed tests were recorded, on a very highly inductive grid (0.6 + 1.1i ohms) where the reactance is higher than the resistance. In the period 03.02.04-13.02.04 some attempts were made to improve the detection by compensating the delay in the voltage measurement, but no improvement was noticed. Thus, these last-minute changes were not included in the final version of the software. Finally, the detection of another PV inverter running in parallel that injects 75 Hz for its own ENS grid measurements has been addressed. The code for parallel ENS is also enclosed. (au)
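
    The core signal-processing step can be sketched as follows (an illustration only, not the PowerLynx firmware; the 3 kHz sampling rate and the test amplitudes are assumptions): the 75 Hz components of sampled grid voltage and current are extracted with a single-bin DFT over a whole number of periods, and the grid impedance at that frequency is estimated as their ratio.

      # Hedged sketch of the measurement idea: Z(75 Hz) = V(75 Hz) / I(75 Hz).
      import cmath
      import math

      FS = 3000.0      # assumed sampling rate [Hz]
      F_INJ = 75.0     # injected inter-harmonic frequency [Hz]

      def single_bin_dft(samples, f, fs=FS):
          """Complex amplitude of the f-Hz component over an integer number of periods."""
          n = len(samples)
          return 2.0 / n * sum(s * cmath.exp(-2j * math.pi * f * k / fs)
                               for k, s in enumerate(samples))

      def grid_impedance(voltage, current):
          return single_bin_dft(voltage, F_INJ) / single_bin_dft(current, F_INJ)

      if __name__ == "__main__":
          n = int(FS / F_INJ) * 75                   # exactly 75 periods of 75 Hz
          t = [k / FS for k in range(n)]
          i = [2.0 * math.sin(2 * math.pi * F_INJ * x) for x in t]           # injected current
          v = [1.6 * math.sin(2 * math.pi * F_INJ * x + 0.1) for x in t]     # grid voltage response
          z = grid_impedance(v, i)
          print(f"|Z| = {abs(z):.3f} ohm, angle = {math.degrees(cmath.phase(z)):.1f} deg")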

  11. Overview of real-time computer systems: technical analysis of the Modcomp implementation of a proprietary system "MAX IV" and real-time UNIX system "REAL/IX"

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, J.

    1990-10-01

    There are many applications throughout industry and government requiring real-time computing. Any application that monitors and/or controls a process would fit into this category. Some examples are: nuclear power plants, steel mills, the space program, etc. General Atomics uses eight real-time computer systems for control and high speed data acquisition required to run the nuclear fusion experiments. Real-time computing can be defined as the ability to respond to asynchronous external events in a predictable (preferably fast) time frame. Real-time computer systems are similar to other computers in many ways and may be used for general computing requirements such as time-sharing. However, special hardware, operating systems and software had to be developed to meet the requirement for real-time computing. Traditionally, real-time computing has been a realm of proprietary operating systems with real-time applications written in FORTRAN and assembly language. In the past, these systems adequately served the needs of the real-time world. Many of these systems that were developed 15 years ago are still being used today. However, the real-time world is now changing, demanding new systems to be developed. This paper gives a description of general real-time computer systems and how they differ from other systems. However, the main purpose of this paper is to give a detailed technical description of the hardware and operating systems of an existing proprietary system and a real-time UNIX system. The two real-time computer systems described in detail are the Modcomp Classic III/95 with the MAX IV operating system and the Modcomp TRI-D 9750 with the REAL/IX.2 operating system.

  12. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to ...

  13. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
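
    In the spirit of Part I (a sketch only, not code from the book), the example below computes exp(x) by argument reduction x = k·ln 2 + r, a short polynomial for exp(r) evaluated with Horner's scheme, and reconstruction as 2^k · exp(r); a real library would substitute a minimax polynomial for the Taylor coefficients used here.

      # Hedged sketch: exp(x) via range reduction + polynomial + reconstruction.
      import math

      # Taylor coefficients 1/n! for exp(r) on |r| <= ln(2)/2 (illustrative only;
      # production code would use a minimax polynomial of similar degree).
      COEFFS = [1.0, 1.0, 1/2, 1/6, 1/24, 1/120, 1/720,
                1/5040, 1/40320, 1/362880, 1/3628800]

      def poly_exp(r):
          """Evaluate the polynomial with Horner's scheme."""
          acc = 0.0
          for c in reversed(COEFFS):
              acc = acc * r + c
          return acc

      def my_exp(x):
          k = round(x / math.log(2))           # choose k so that |r| <= ln(2)/2
          r = x - k * math.log(2)
          return math.ldexp(poly_exp(r), k)    # exp(x) = 2**k * exp(r)

      if __name__ == "__main__":
          for x in (-10.3, -1.0, 0.0, 0.5, 7.25):
              assert abs(my_exp(x) - math.exp(x)) <= 1e-10 * math.exp(x)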

  14. Computer group

    International Nuclear Information System (INIS)

    Bauer, H.; Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schati, C.; Schmidt, A.; Schwind, D.; Weber, G.

    1983-01-01

    The computer group has been reorganized to take charge of the general purpose computers DEC10 and VAX and the computer network (Dataswitch, DECnet, IBM - connections to GSI and IPP, preparation for Datex-P). (orig.)

  15. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  16. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  17. Computing with synthetic protocells.

    Science.gov (United States)

    Courbet, Alexis; Molina, Franck; Amar, Patrick

    2015-09-01

    In this article we present a new kind of computing device that uses biochemical reaction networks as building blocks to implement logic gates. The architecture of a computing machine relies on these generic and composable building blocks, computation units, that can be used in multiple instances to perform complex Boolean functions. Standard logical operations are implemented by biochemical networks, encapsulated and insulated within synthetic vesicles called protocells. These protocells are capable of exchanging energy and information with each other through transmembrane electron transfer. In the paradigm of computation we propose, protoputing, a machine can solve only one problem and therefore has to be built specifically. Thus, the programming phase in the standard computing paradigm is represented in our approach by the set of assembly instructions (specific attachments) that directs the wiring of the protocells that constitute the machine itself. To demonstrate the computing power of protocellular machines, we apply it to solve a NP-complete problem, known to be very demanding in computing power, the 3-SAT problem. We show how to program the assembly of a machine that can verify the satisfiability of a given Boolean formula. Then we show how to use the massive parallelism of these machines to verify in less than 20 min all the valuations of the input variables and output a fluorescent signal when the formula is satisfiable or no signal at all otherwise.
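
    For comparison, what "verifying all the valuations of the input variables" means can be written down directly for a conventional computer (a sketch only, unrelated to the protocell hardware): the exhaustive check below enumerates every assignment of a small 3-SAT formula.

      # Hedged companion sketch: brute-force 3-SAT check on an ordinary computer.
      # A clause is a list of literals; integer i means variable i, -i its negation.
      from itertools import product

      def satisfying_valuations(clauses, n_vars):
          """Yield every assignment (index 0 = variable 1) satisfying all clauses."""
          for valuation in product([False, True], repeat=n_vars):
              if all(any(valuation[abs(lit) - 1] == (lit > 0) for lit in clause)
                     for clause in clauses):
                  yield valuation

      if __name__ == "__main__":
          # (x1 or x2 or not x3) and (not x1 or x3 or x2) and (not x2 or not x3 or x1)
          formula = [[1, 2, -3], [-1, 3, 2], [-2, -3, 1]]
          solutions = list(satisfying_valuations(formula, 3))
          print(f"satisfiable: {bool(solutions)} ({len(solutions)} satisfying valuations)")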

  18. Computing Thermodynamic And Transport Properties Of Air

    Science.gov (United States)

    Thompson, Richard A.; Gupta, Roop N.; Lee, Kam-Pui

    1994-01-01

    EQAIRS computer program is a set of FORTRAN 77 routines for computing thermodynamic and transport properties of equilibrium air for temperatures from 100 to 30,000 K. Computes properties from an 11-species, curve-fit mathematical model. Successfully implemented on DEC VAX-series computer running VMS, Sun4-series computer running SunOS, and IBM PC-compatible computer running MS-DOS.
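
    The general shape of such a curve-fit evaluation can be illustrated as follows; the temperature bands and coefficients below are placeholders invented for the example, not the published 11-species fits.

      # Illustrative sketch only: a temperature-banded polynomial curve fit of the
      # kind EQAIRS evaluates. Coefficients are hypothetical placeholders.
      import math

      # (T_low [K], T_high [K], coefficients of a polynomial in ln T)
      CURVE_FIT_BANDS = [
          (100.0, 1000.0, [1.00, 0.050, 0.0010]),
          (1000.0, 30000.0, [0.80, 0.085, 0.0007]),
      ]

      def property_from_fit(T):
          """Evaluate the banded fit at temperature T."""
          for t_lo, t_hi, coeffs in CURVE_FIT_BANDS:
              if t_lo <= T <= t_hi:
                  x = math.log(T)
                  return sum(c * x**i for i, c in enumerate(coeffs))
          raise ValueError(f"temperature {T} K outside the fitted range 100-30000 K")

      if __name__ == "__main__":
          for T in (300.0, 5000.0, 20000.0):
              print(f"{T:8.1f} K -> {property_from_fit(T):.4f} (arbitrary units)")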

  19. Computational Ocean Acoustics

    CERN Document Server

    Jensen, Finn B; Porter, Michael B; Schmidt, Henrik

    2011-01-01

    Since the mid-1970s, the computer has played an increasingly pivotal role in the field of ocean acoustics. Faster and less expensive than actual ocean experiments, and capable of accommodating the full complexity of the acoustic problem, numerical models are now standard research tools in ocean laboratories. The progress made in computational ocean acoustics over the last thirty years is summed up in this authoritative and innovatively illustrated new text. Written by some of the field's pioneers, all Fellows of the Acoustical Society of America, Computational Ocean Acoustics presents the latest numerical techniques for solving the wave equation in heterogeneous fluid–solid media. The authors discuss various computational schemes in detail, emphasizing the importance of theoretical foundations that lead directly to numerical implementations for real ocean environments. To further clarify the presentation, the fundamental propagation features of the techniques are illustrated in color. Computational Ocean A...

  20. CAAD: Computer Architecture for Autonomous Driving

    OpenAIRE

    Liu, Shaoshan; Tang, Jie; Zhang, Zhe; Gaudiot, Jean-Luc

    2017-01-01

    We describe the computing tasks involved in autonomous driving and examine existing autonomous driving computing platform implementations. To enable autonomous driving, the computing stack needs to simultaneously provide high performance, low power consumption, and low thermal dissipation, at low cost. We discuss possible approaches to design computing platforms that will meet these needs.

  1. Computing Tropical Varieties

    DEFF Research Database (Denmark)

    Speyer, D.; Jensen, Anders Nedergaard; Bogart, T.

    2005-01-01

    The tropical variety of a d-dimensional prime ideal in a polynomial ring with complex coefficients is a pure d-dimensional polyhedral fan. This fan is shown to be connected in codimension one. We present algorithmic tools for computing the tropical variety, and we discuss our implementation...

  2. COMPUTER SUPPORT MANAGEMENT PRODUCTION

    Directory of Open Access Journals (Sweden)

    Svetlana Trajković

    2014-10-01

    The modern age in which we live, with modern and highly advanced technology all around us, gives great importance to computer support in the management of production. Computer applications in production, in the organization of production systems, and in the organization of management and business are gaining in importance. We live in a time when computer technology is used more and more, which opens up a broad and important area of application of computer systems in production, together with methods that enable their successful implementation, for example in the management of production. Computer technology speeds up the processing and transfer of the information needed for decision-making at various levels of management. New generations of computers caused the first technological revolution in industry. Building on these solutions, industry has been able to use all the modern technology of computers in manufacturing, automation and production management.

  3. Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW

    Science.gov (United States)

    Olsen, Mike E.; Liu, Yen; Vinokur, M.; Olsen, Tom

    2004-01-01

    An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.

  4. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  5. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  6. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture.

  7. Crypto-Verifying Protocol Implementations in ML

    NARCIS (Netherlands)

    Bhargavan, K.; Corin, R.J.; Fournet, C.

    2007-01-01

    We intend to narrow the gap between concrete implementations and verified models of cryptographic protocols. We consider protocols implemented in F#, a variant of ML, and verified using CryptoVerif, Blanchet's protocol verifier for computational cryptography. We experiment with compilers from F#

  8. Comparison of Orthogonal Matching Pursuit Implementations

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Christensen, Mads Græsbøll

    2012-01-01

    We study the numerical and computational performance of three implementations of orthogonal matching pursuit: one using the QR matrix decomposition, one using the Cholesky matrix decomposition, and one using the matrix inversion lemma. We find that none of these implementations suffer from numeri...
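
    A plain reference version of the algorithm, written without any of the three update strategies compared in the paper (a sketch only, re-solving a full least-squares problem at each step), looks like this:

      # Hedged sketch, not one of the implementations studied in the paper:
      # naive orthogonal matching pursuit using a full least-squares solve per
      # iteration. The QR, Cholesky, and matrix-inversion-lemma variants are
      # faster ways of updating this same solve.
      import numpy as np

      def omp(D, x, k):
          """Select k atoms of dictionary D (unit-norm columns) approximating x."""
          residual, support = x.copy(), []
          for _ in range(k):
              correlations = np.abs(D.T @ residual)
              correlations[support] = 0.0          # never pick the same atom twice
              support.append(int(np.argmax(correlations)))
              coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
              residual = x - D[:, support] @ coeffs
          return support, coeffs

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          D = rng.standard_normal((64, 256))
          D /= np.linalg.norm(D, axis=0)
          x = D[:, [3, 100, 200]] @ np.array([1.0, -2.0, 0.5])   # 3-sparse signal
          support, coeffs = omp(D, x, 3)
          print(sorted(support), np.round(coeffs, 3))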

  9. RELAP4/MOD5: a computer program for transient thermal-hydraulic analysis of nuclear reactors and related systems. User's manual. Volume II. Program implementation. [PWR and BWR

    Energy Technology Data Exchange (ETDEWEB)

    None

    1976-09-01

    This portion of the RELAP4/MOD5 User's Manual presents the details of setting up and entering the reactor model to be evaluated. The input card format and arrangement is presented in depth, including not only cards for data but also those for editing and restarting. Problem initialization, including pressure distribution and energy balance, is discussed. A section entitled "User Guidelines" is included to provide modeling recommendations, analysis and verification techniques, and computational difficulty resolution. The section is concluded with a discussion of the computer output form and format.

  10. RXY/DRXY-a postprocessing graphical system for scientific computation

    International Nuclear Information System (INIS)

    Jin Qijie

    1990-01-01

    Scientific computing requires computer graphics functions for visualization. The development objectives and functions of a postprocessing graphical system for scientific computation are described, and its implementation is also briefly described

  11. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  12. Computer Timetabling and Curriculum Planning.

    Science.gov (United States)

    Zarraga, M. N.; Bates, S.

    1980-01-01

    A Manchester, England, high school designed lower school curriculum structures via computer and investigated their feasibility using the Nor Data School Scheduling System. The positive results suggest that the computer system could provide all schools with an invaluable aid to the planning and implementation of their curriculum. (CT)

  13. Physical computation and cognitive science

    CERN Document Server

    Fresco, Nir

    2014-01-01

    This book presents a study of digital computation in contemporary cognitive science. Digital computation is a highly ambiguous concept, as there is no common core definition for it in cognitive science. Since this concept plays a central role in cognitive theory, an adequate cognitive explanation requires an explicit account of digital computation. More specifically, it requires an account of how digital computation is implemented in physical systems. The main challenge is to deliver an account encompassing the multiple types of existing models of computation without ending up in pancomputationalism, that is, the view that every physical system is a digital computing system. This book shows that only two accounts, among the ones examined by the author, are adequate for explaining physical computation. One of them is the instructional information processing account, which is developed here for the first time.   “This book provides a thorough and timely analysis of differing accounts of computation while adv...

  14. Computer security engineering management

    International Nuclear Information System (INIS)

    McDonald, G.W.

    1988-01-01

    For best results, computer security should be engineered into a system during its development rather than being appended later on. This paper addresses the implementation of computer security in eight stages through the life cycle of the system, starting with the definition of security policies and ending with continuing support for the security aspects of the system throughout its operational life cycle. Security policy is addressed relative to successive decomposition of security objectives (through policy, standard, and control stages) into system security requirements. This is followed by a discussion of computer security organization and responsibilities. Next the paper directs itself to analysis and management of security-related risks, followed by discussion of design and development of the system itself. Discussion of security test and evaluation preparations, and approval to operate (certification and accreditation), is followed by discussion of computer security training for users and by coverage of life cycle support for the security of the system

  15. Minimal ancilla mediated quantum computation

    International Nuclear Information System (INIS)

    Proctor, Timothy J.; Kendon, Viv

    2014-01-01

    Schemes of universal quantum computation in which the interactions between the computational elements, in a computational register, are mediated by some ancillary system are of interest due to their relevance to the physical implementation of a quantum computer. Furthermore, reducing the level of control required over both the ancillary and register systems has the potential to simplify any experimental implementation. In this paper we consider how to minimise the control needed to implement universal quantum computation in an ancilla-mediated fashion. Considering computational schemes which require no measurements and hence evolve by unitary dynamics for the global system, we show that when employing an ancilla qubit there are certain fixed-time ancilla-register interactions which, along with ancilla initialisation in the computational basis, are universal for quantum computation with no additional control of either the ancilla or the register. We develop two distinct models based on locally inequivalent interactions and we then discuss the relationship between these unitary models and the measurement-based ancilla-mediated models known as ancilla-driven quantum computation. (orig.)

  16. Interfacing the Paramesh Computational Libraries to the Cactus Computational Framework, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We will design and implement an interface between the Paramesh computational libraries, developed and used by groups at NASA GSFC, and the Cactus computational...

  17. Computational Medicine

    DEFF Research Database (Denmark)

    Nygaard, Jens Vinge

    2017-01-01

    The Health Technology Program at Aarhus University applies computational biology to investigate the heterogeneity of tumours...

  18. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    The problematic addressed in the dissertation is generally shaped by a sensation that something is amiss within the area of Ubiquitous Computing. Ubiquitous Computing as a vision—as a program—sets out to challenge the idea of the computer as a desktop computer and to explore the potential of the new microprocessors and network technologies. However, the understanding of the computer represented within this program poses a challenge for the intentions of the program. The computer is understood as a multitude of invisible intelligent information devices which confines the computer as a tool to solve well-defined problems within specified contexts—something that rarely exists in practice. Nonetheless, the computer will continue to grow more ubiquitous as Moore's law still applies and as its components become ever cheaper. The question is how, and for what, we will use it? How will it...

  19. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  20. Optimization of an interactive distributive computer network

    Science.gov (United States)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  1. Development and computer implementation of design/analysis techniques for multilayered composite structures. Probabilistic fiber composite micromechanics. M.S. Thesis, Mar. 1987 Final Report, 1 Sep. 1984 - 1 Oct. 1990

    Science.gov (United States)

    Stock, Thomas A.

    1995-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. The variables in which uncertainties are accounted for include constituent and void volume ratios, constituent elastic properties and strengths, and fiber misalignment. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material property variations induced by random changes expected at the material micro level. Regression results are presented to show the relative correlation between predictor and response variables in the study. These computational procedures make possible a formal description of anticipated random processes at the intraply level, and the related effects of these on composite properties.
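
    As a rough illustration of the Monte Carlo procedure described above, the sketch below samples constituent properties and volume ratios from assumed distributions and propagates them through a simple rule-of-mixtures model; the distributions, parameter values, and the micromechanics model itself are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Illustrative constituent statistics (mean, std); not values from the study.
E_fiber  = rng.normal(230e9, 10e9, n_samples)    # fiber modulus [Pa]
E_matrix = rng.normal(3.5e9, 0.3e9, n_samples)   # matrix modulus [Pa]
v_fiber  = np.clip(rng.normal(0.60, 0.03, n_samples), 0.0, 1.0)   # fiber volume ratio
v_void   = np.clip(rng.normal(0.02, 0.01, n_samples), 0.0, 0.2)   # void volume ratio

# Simple rule of mixtures for the longitudinal ply modulus.
v_matrix = 1.0 - v_fiber - v_void
E_long = E_fiber * v_fiber + E_matrix * v_matrix

print(f"E11 mean = {E_long.mean()/1e9:.1f} GPa, std = {E_long.std()/1e9:.1f} GPa")

# Relative correlation between predictor and response variables (cf. the regression results).
for name, x in [("E_fiber", E_fiber), ("E_matrix", E_matrix),
                ("v_fiber", v_fiber), ("v_void", v_void)]:
    print(f"corr({name}, E11) = {np.corrcoef(x, E_long)[0, 1]:+.2f}")
```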

  2. Computer implemented land cover classification using LANDSAT MSS digital data: A cooperative research project between the National Park Service and NASA. 3: Vegetation and other land cover analysis of Shenandoah National Park

    Science.gov (United States)

    Cibula, W. G.

    1981-01-01

    Four LANDSAT frames, each corresponding to one of the four seasons, were spectrally classified and processed using NASA-developed computer programs. One data set was selected, or two or more data sets were merged, to improve surface cover classifications. Selected areas representing each spectral class were chosen and transferred to USGS 1:62,500 topographic maps for field use. Ground truth data were gathered to verify the accuracy of the classifications. Acreages were computed for each of the land cover types. The application of elevational data to seasonal LANDSAT frames resulted in the separation of high elevation meadows (both with and without recently emergent perennial vegetation) as well as areas in oak forests which have an evergreen understory as opposed to other areas which do not.

  3. Quantum computers and quantum computations

    International Nuclear Information System (INIS)

    Valiev, Kamil' A

    2005-01-01

    This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)

  4. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  5. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  6. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  7. Grid Computing

    CERN Document Server

    Yen, Eric

    2008-01-01

    Based on the Grid Computing: International Symposium on Grid Computing (ISGC) 2007, held in Taipei, Taiwan in March of 2007, this title presents the grid solutions and research results in grid operations, grid middleware, biomedical operations, and e-science applications. It is suitable for graduate-level students in computer science.

  8. Optical Computing

    Indian Academy of Sciences (India)

    Optics has been used in computing for a number of years but the main emphasis has been and continues to be to link portions of computers, for communications, or more intrinsically in devices that have some optical application or component (optical pattern recognition, etc). Optical digital computers are still some years ...

  9. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described
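
    For orientation only, the equation referred to above is, in its standard textbook form (not quoted from the paper), the axisymmetric Grad-Shafranov equation, and the field-line (flux-surface) averages take the usual form:

```latex
% Axisymmetric Grad-Shafranov equation for the poloidal flux psi(R, Z):
\Delta^{*}\psi \equiv R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial\psi}{\partial R}\right)
  + \frac{\partial^{2}\psi}{\partial Z^{2}}
  = -\mu_{0}R^{2}\frac{dp}{d\psi} - F\frac{dF}{d\psi},
  \qquad F = R\,B_{\phi}.

% Flux-surface average of a quantity A over a closed poloidal contour:
\langle A \rangle = \frac{\oint (A/B_{p})\,d\ell}{\oint d\ell/B_{p}}
```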

  10. Roadmap for Peridynamic Software Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The application of peridynamics for engineering analysis requires an efficient and robust software implementation. Key elements include processing of the discretization, the proximity search for identification of pairwise interactions, evaluation of the constitutive model, application of a bond-damage law, and contact modeling. Additional requirements may arise from the choice of time integration scheme, for example estimation of the maximum stable time step for explicit schemes, and construction of the tangent stiffness matrix for many implicit approaches. This report summarizes progress to date on the software implementation of the peridynamic theory of solid mechanics. Discussion is focused on parallel implementation of the meshfree discretization scheme of Silling and Askari [33] in three dimensions, although much of the discussion applies to computational peridynamics in general.
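
    The proximity search mentioned above is, in essence, a fixed-radius neighbour search over the meshfree nodes. The sketch below builds a pairwise bond list with a k-d tree; the node layout, horizon value, and helper function are made up for illustration and do not reproduce the report's actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative meshfree discretization: a small regular grid of nodes in 3D.
spacing = 1.0
horizon = 3.015 * spacing                     # horizon of a few node spacings (assumed)
grid = np.arange(0.0, 10.0, spacing)
nodes = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T

# Proximity search: every pair of nodes closer than the horizon forms a bond.
tree = cKDTree(nodes)
bonds = tree.query_pairs(r=horizon, output_type="ndarray")
print(f"{len(nodes)} nodes, {len(bonds)} bonds")

def bond_stretch(u, i, j):
    """Relative elongation of bond (i, j) under a displacement field u;
    a bond-damage law would break bonds whose stretch exceeds a critical value."""
    xi = nodes[j] - nodes[i]                  # reference bond vector
    eta = u[j] - u[i]                         # relative displacement
    return (np.linalg.norm(xi + eta) - np.linalg.norm(xi)) / np.linalg.norm(xi)
```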

  11. Asynchronous Multiparty Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Geisler, Martin; Krøigaard, Mikkel

    2009-01-01

    guarantees termination if the adversary allows a preprocessing phase to terminate, in which no information is released. The communication complexity of this protocol is the same as that of a passively secure solution up to a constant factor. It is secure against an adaptive and active adversary corrupting...... less than n/3 players. We also present a software framework for implementation of asynchronous protocols called VIFF (Virtual Ideal Functionality Framework), which allows automatic parallelization of primitive operations such as secure multiplications, without having to resort to complicated...... multithreading. Benchmarking of a VIFF implementation of our protocol confirms that it is applicable to practical non-trivial secure computations....
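
    For readers unfamiliar with the substrate such protocols build on, the sketch below shows plain Shamir secret sharing over a prime field: linear operations such as addition are purely local on shares, while secure multiplication needs interaction or preprocessed material, consistent with the preprocessing phase mentioned above. This is a generic illustration, not VIFF's API, and all parameters are arbitrary.

```python
import random

P = 2**61 - 1   # a Mersenne prime used as the field modulus (illustrative choice)

def share(secret, n=7, t=2):
    """Shamir-share `secret` among n players; any t+1 shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, y in shares.items():
        num = den = 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + y * num * pow(den, P - 2, P)) % P
    return secret

# Addition of secrets is a purely local operation on the shares.
a, b = share(1234), share(5678)
summed = {i: (a[i] + b[i]) % P for i in a}
print(reconstruct(dict(list(summed.items())[:3])))   # any 3 shares recover 6912
```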

  12. Computed radiography - our experience

    International Nuclear Information System (INIS)

    Williams, C.

    1997-01-01

    Computed Radiography (CR) is the digital acquisition of plain X-ray images using phosphor plate technology. This allows post-processing and transmission of images to remote locations. St. Vincent's Public Hospital in Melbourne has had the benefit of two separate CR systems which have been implemented over the past three years. CR is a significant advance in radiographic imaging and is evolving continuously. The last few years have been a period of change and development for all staff which has proved both challenging and rewarding. Further development is required before the system is implemented completely. (author)

  13. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  14. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  15. Quantum computation

    International Nuclear Information System (INIS)

    Deutsch, D.

    1992-01-01

    As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into computers in future in order to give their components functionality. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)

  16. Mobile cloud computing for computation offloading: Issues and challenges

    Directory of Open Access Journals (Sweden)

    Khadija Akherfi

    2018-01-01

    Full Text Available Despite the evolution and enhancements that mobile devices have experienced, they are still considered limited computing devices. Today, users become more demanding and expect to execute computationally intensive applications on their smartphone devices. Therefore, Mobile Cloud Computing (MCC) integrates mobile computing and Cloud Computing (CC) in order to extend the capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs) such as limited battery lifetime, limited processing capabilities, and limited storage capacity by offloading the execution and workload to other, richer systems with better performance and resources. This paper presents the current offloading frameworks and computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores different important parameters based on which the frameworks are implemented, such as offloading method and level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that require further research.
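
    As a toy illustration of the offloading decision such frameworks make, the sketch below compares an estimate of the device energy spent computing locally against the energy spent transmitting the input and idling while a faster remote server computes. The energy model is a common simplification, and every parameter value is an assumption for the example rather than a figure from the surveyed frameworks.

```python
def should_offload(cycles, data_bytes, *,
                   mobile_speed=1e9,     # device CPU cycles per second (assumed)
                   cloud_speedup=10.0,   # how much faster the remote server is (assumed)
                   p_compute=0.9,        # W while computing locally
                   p_idle=0.3,           # W while waiting for the result
                   p_transmit=1.3,       # W while sending data
                   bandwidth=1e6):       # uplink bytes per second
    """Return True if offloading is estimated to save energy on the device."""
    e_local = p_compute * cycles / mobile_speed
    t_remote = cycles / (mobile_speed * cloud_speedup)
    e_offload = p_transmit * data_bytes / bandwidth + p_idle * t_remote
    return e_offload < e_local

# Heavy computation with little data to ship: offloading pays off.
print(should_offload(cycles=5e10, data_bytes=200_000))    # True with these numbers
# Light computation with a large input: better to compute locally.
print(should_offload(cycles=1e8, data_bytes=5_000_000))   # False with these numbers
```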

  17. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling have advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans

  18. The computer graphics metafile

    CERN Document Server

    Henderson, LR; Shepherd, B; Arnold, D B

    1990-01-01

    The Computer Graphics Metafile deals with the Computer Graphics Metafile (CGM) standard and covers topics ranging from the structure and contents of a metafile to CGM functionality, metafile elements, and real-world applications of CGM. Binary Encoding, Character Encoding, application profiles, and implementations are also discussed. This book is comprised of 18 chapters divided into five sections and begins with an overview of the CGM standard and how it can meet some of the requirements for storage of graphical data within a graphics system or application environment. The reader is then intr

  19. Diamond turning machine controller implementation

    Energy Technology Data Exchange (ETDEWEB)

    Garrard, K.P.; Taylor, L.W.; Knight, B.F.; Fornaro, R.J.

    1988-12-01

    The standard controller for a Pnuemo ASG 2500 Diamond Turning Machine, an Allen Bradley 8200, has been replaced with a custom high-performance design. This controller consists of four major components. Axis position feedback information is provided by a Zygo Axiom 2/20 laser interferometer with 0.1 micro-inch resolution. Hardware interface logic couples the computer's digital and analog I/O channels to the diamond turning machine's analog motor controllers, the laser interferometer, and other machine status and control information. It also provides front panel switches for operator override of the computer controller and implements the emergency stop sequence. The remaining two components, the control computer hardware and software, are discussed in detail below.

  20. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  1. Computer Literacy: Teaching Computer Ethics.

    Science.gov (United States)

    Troutner, Joanne

    1986-01-01

    Suggests learning activities for teaching computer ethics in three areas: (1) equal access; (2) computer crime; and (3) privacy. Topics include computer time, advertising, class enrollments, copyright law, sabotage ("worms"), the Privacy Act of 1974 and the Freedom of Information Act of 1966. (JM)

  2. Mental Computation Activity Implementation into First-Grade Mathematics Classes

    Directory of Open Access Journals (Sweden)

    楊德清 Der-Ching Yang

    2012-06-01

    Full Text Available This study employs a qualitative approach to investigate the effectiveness of mental computation activities in first-grade mathematics classes, the changes in students' strategies, and the implementation process. The experimental group (21 students) received instruction in mental computation strategies for two-digit plus/minus one-digit problems, while the control group (16 students) followed the textbook treatment of the same content; both groups were taught for 12 periods. The results show that, after the intervention, the experimental group performed significantly better on mental computation than the control group, and that experimental-group students developed and used multiple solution strategies, such as separation, aggregation, and holistic strategies. In contrast, the control group relied mainly on counting, pictorial, and written-vertical-image strategies both before and after instruction, showed little change, and made little use of mental computation strategies. During the lessons, responses from high-achieving students stimulated the thinking of middle- and low-achieving students, and students were enthusiastic about sharing their strategies with the class. Based on the results, suggestions are offered for integrating mental computation activities into the first-grade mathematics curriculum and for future research.

  3. Advances in photonic reservoir computing

    Directory of Open Access Journals (Sweden)

    Van der Sande Guy

    2017-05-01

    Full Text Available We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.

  4. Advances in photonic reservoir computing

    Science.gov (United States)

    Van der Sande, Guy; Brunner, Daniel; Soriano, Miguel C.

    2017-05-01

    We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir's complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
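
    The optical reservoirs surveyed above are analogue realizations of the same principle that a minimal software echo state network makes explicit: the reservoir is a fixed random dynamical system and only a linear readout is trained. The sketch below is a purely numerical illustration of that principle (all sizes and scalings are arbitrary), not a model of any specific photonic implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed random reservoir: these weights are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep the spectral radius below 1

def run_reservoir(u):
    """Collect the reservoir's transient responses to an input sequence u."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by 5 steps, training only the readout.
u = rng.uniform(-1, 1, 2000)
X, y = run_reservoir(u)[200:], u[195:-5]           # discard the initial transient
W_out = np.linalg.lstsq(X, y, rcond=None)[0]       # linear readout by least squares
print("normalized training error:", np.mean((X @ W_out - y) ** 2) / np.var(y))
```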

  5. A SURVEY ON UBIQUITOUS COMPUTING

    Directory of Open Access Journals (Sweden)

    Vishal Meshram

    2016-01-01

    Full Text Available This work presents a survey of ubiquitous computing research, the emerging domain that implements communication technologies into day-to-day life activities. This research paper provides a classification of the research areas on the ubiquitous computing paradigm. In this paper, we present common architecture principles of ubiquitous systems and analyze important aspects in context-aware ubiquitous systems. In addition, this research work presents a novel architecture of a ubiquitous computing system and a survey of sensors needed for applications in ubiquitous computing. The goals of this research work are three-fold: (i) serve as a guideline for researchers who are new to ubiquitous computing and want to contribute to this research area, (ii) provide a novel system architecture for ubiquitous computing systems, and (iii) provide further research directions required for quality-of-service assurance of ubiquitous computing.

  6. Energy Consumption in Cloud Computing Data Centers

    OpenAIRE

    Uchechukwu Awada; Keqiu Li; Yanming Shen

    2014-01-01

    The implementation of cloud computing has established computing as a utility and enables penetrative applications from scientific, consumer and business domains. However, this implementation faces tremendous concerns over energy consumption, carbon dioxide emission and the associated costs. With energy consumption becoming a key issue for the operation and maintenance of cloud datacenters, cloud computing providers are becoming profoundly concerned. In this paper, we present formulations and solutions fo...

  7. Implementation of a computational system at the Center for Nuclear Technology Development to systematize the application of FMEA - Failure Mode and Effects Analysis - for hazard identification and risk evaluation

    International Nuclear Information System (INIS)

    Correa, Danyel Pontelo; Vasconcelos, Vanderley de

    2009-01-01

    The regulatory bodies request risk evaluations for nuclear and radioactive licensing purposes. In Brazil those evaluations are contained in the safety analysis reports requested by the Brazilian Nuclear Energy Commission (CNEN) and in the risk analysis studies requested by the environmental agencies. A risk evaluation includes the identification of the hazards and of the accident sequences which can occur, and the estimation of their frequency and of their undesirable effects on the industrial installation, the public, and the environment. Hazard identification and risk analysis are particularly important for the implementation of an integrated health, environment and safety management system according to the regulation instruments ISO 14001, BS 8800 and OHSAS 18001. In the non-nuclear industry, risk identification and risk analysis techniques are applied in a non-standard form by the various sectors of an enterprise, diminishing the effectiveness of the recommended actions based on risk indexes. For nuclear licensing, however, CNEN requests through its regulatory instruments and standard formats that the risks, their failure mechanisms and their detection be identified, which allows preventive and mitigating actions. This paper proposes the utilization of the FMEA (Failure Mode and Effects Analysis) technique in the licensing process. A software tool was implemented in Excel, using Visual Basic for Applications, which allows the automation and standardization of FMEA studies.
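
    A minimal sketch of the kind of bookkeeping such an FMEA tool automates: each failure mode is scored for severity, occurrence, and detection, and corrective actions are prioritized by the resulting risk priority number (RPN). The failure modes and scores below are invented for illustration and do not come from the paper.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int     # 1 (negligible) .. 10 (catastrophic)
    occurrence: int   # 1 (remote) .. 10 (very frequent)
    detection: int    # 1 (certain detection) .. 10 (no detection)

    @property
    def rpn(self) -> int:
        """Risk Priority Number used to rank corrective actions."""
        return self.severity * self.occurrence * self.detection

# Hypothetical entries, purely illustrative.
modes = [
    FailureMode("Loss of ventilation flow", severity=8, occurrence=3, detection=4),
    FailureMode("Level sensor drift", severity=5, occurrence=6, detection=7),
    FailureMode("Valve fails to close", severity=9, occurrence=2, detection=3),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.description}")
```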

  8. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  9. Numerical computations with GPUs

    CERN Document Server

    Kindratenko, Volodymyr

    2014-01-01

    This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including solution of linear equations and FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to

  10. Opportunities for computer abuse

    DEFF Research Database (Denmark)

    Willison, Robert Andrew; Backhouse, James

    2005-01-01

    Systems risk refers to the likelihood that an IS is inadequately guarded against certain types of damage or loss. While risks are posed by acts of God, hackers and viruses, consideration should also be given to the `insider' threat of dishonest employees, intent on undertaking some form of comput...... on a number of criminological theories, it is believed the model may help inform managers about local threats and, by so doing, enhance safeguard implementation....

  11. 76 FR 1578 - Approval and Promulgation of Implementation Plans; New Mexico; Federal Implementation Plan for...

    Science.gov (United States)

    2011-01-11

    ... College Boulevard, Computer Science Building, Room 7103, Farmington, New Mexico 87402, (505) 326-3311. The... AGENCY 40 CFR Part 52 Approval and Promulgation of Implementation Plans; New Mexico; Federal... State Implementation Plan (SIP) revision submitted by the State of New Mexico and promulgate a Federal...

  12. Computational statistics handbook with MATLAB

    CERN Document Server

    Martinez, Wendy L

    2002-01-01

    Approaching computational statistics through its theoretical aspects can be daunting. Often intimidated or distracted by the theory, researchers and students can lose sight of the actual goals and applications of the subject. What they need are its key concepts, an understanding of its methods, experience with its implementation, and practice with computational software.Focusing on the computational aspects of statistics rather than the theoretical, Computational Statistics Handbook with MATLAB uses a down-to-earth approach that makes statistics accessible to a wide range of users. The authors

  13. A Call for Computational Thinking in Undergraduate Psychology

    Science.gov (United States)

    Anderson, Nicole D.

    2016-01-01

    Computational thinking is an approach to problem solving that is typically employed by computer programmers. The advantage of this approach is that solutions can be generated through algorithms that can be implemented as computer code. Although computational thinking has historically been a skill that is exclusively taught within computer science,…

  14. Computing in Qualitative Analysis: A Healthy Development?

    Science.gov (United States)

    Richards, Lyn; Richards, Tom

    1991-01-01

    Discusses the potential impact of computers in qualitative health research. Describes the original goals, design, and implementation of NUDIST, a qualitative computing software. Argues for evaluation of the impact of computer techniques and for an opening of debate among program developers and users to address the purposes and power of computing…

  15. Quantum Computation and Quantum Spin Dynamics

    NARCIS (Netherlands)

    Raedt, Hans De; Michielsen, Kristel; Hams, Anthony; Miyashita, Seiji; Saito, Keiji

    2001-01-01

    We analyze the stability of quantum computations on physically realizable quantum computers by simulating quantum spin models representing quantum computer hardware. Examples of logically identical implementations of the controlled-NOT operation are used to demonstrate that the results of a quantum

  16. Computer methods in general relativity: algebraic computing

    CERN Document Server

    Araujo, M E; Skea, J E F; Koutras, A; Krasinski, A; Hobill, D; McLenaghan, R G; Christensen, S M

    1993-01-01

    Karlhede & MacCallum [1] gave a procedure for determining the Lie algebra of the isometry group of an arbitrary pseudo-Riemannian manifold, which they intended to implement using the symbolic manipulation package SHEEP but never did. We have recently finished making this procedure explicit by giving an algorithm suitable for implementation on a computer [2]. Specifically, we have written an algorithm for determining the isometry group of a spacetime (in four dimensions), and partially implemented this algorithm using the symbolic manipulation package CLASSI, which is an extension of SHEEP.

  17. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  18. Computed Tomography

    Science.gov (United States)

    Castellano, Isabel; Geleijns, Jacob

    After its clinical introduction in 1973, computed tomography developed from an x-ray modality for axial imaging in neuroradiology into a versatile three-dimensional imaging modality for a wide range of applications in, for example, oncology, vascular radiology, cardiology, traumatology and even interventional radiology. Computed tomography is applied for diagnosis, follow-up studies and screening of healthy subpopulations with specific risk factors. This chapter provides a general introduction to computed tomography, covering a short history of computed tomography, technology, image quality, dosimetry, room shielding, quality control and quality criteria.

  19. Biological computation

    CERN Document Server

    Lamm, Ehud

    2011-01-01

    Introduction and Biological Background: Biological Computation; The Influence of Biology on Mathematics - Historical Examples; Biological Introduction; Models and Simulations. Cellular Automata: Biological Background; The Game of Life; General Definition of Cellular Automata; One-Dimensional Automata; Examples of Cellular Automata; Comparison with a Continuous Mathematical Model; Computational Universality; Self-Replication; Pseudo Code. Evolutionary Computation: Evolutionary Biology and Evolutionary Computation; Genetic Algorithms; Example Applications; Analysis of the Behavior of Genetic Algorithms; Lamarckian Evolution; Genet

  20. Implementation of computer-aided design (CAD) for the location of power transmission line structures; Implementacion del diseno asistido por computadora para la localizacion de estructuras de lineas de transmision

    Energy Technology Data Exchange (ETDEWEB)

    Vega Ortiz, Miguel; Gutierrez Arriola, Gustavo [Instituto de Investigaciones Electricas, Temixco, Morelos (Mexico)

    2000-07-01

    For the computer-aided design (CAD) tools offered on the market to be really useful, they must combine the criteria and experience of expert designers with the specifications and practices established in the electrical utility. This ranges from entering into the system the available input data and the design criteria to obtaining the required output information. This paper summarizes the methodology developed by the Instituto de Investigaciones Electricas (IIE) for the design of power transmission lines, which integrates the requirements of the Comision Federal de Electricidad (CFE) into an advanced computational tool and results in better designs. Some of the most important aspects are the reduction of the working time employed, the cost of the designed line, its reliability, the flexibility in information handling and the quality of presentation.

  1. Teaching Computer Organization and Architecture Using Simulation and FPGA Applications

    OpenAIRE

    D. K.M. Al-Aubidy

    2007-01-01

    This paper presents the design concepts and realization of incorporating micro-operation simulation and FPGA implementation into a teaching tool for computer organization and architecture. This teaching tool helps computer engineering and computer science students become familiar, in a practical way, with computer organization and architecture through the development of their own instruction set, computer programming and interfacing experiments. A two-pass assembler has been designed and implemente...
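
    To make the two-pass idea concrete, the sketch below assembles a made-up toy instruction set: the first pass assigns addresses and records label positions, the second pass resolves labels and emits machine words. The instruction set, encoding, and syntax are all invented for the example and are not the assembler described in the paper.

```python
# Toy instruction set: mnemonic -> opcode; operands are labels or integers.
OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "JUMP": 0x4, "HALT": 0x5}

def assemble(source):
    # Pass 1: assign an address to each instruction and record label positions.
    symbols, instructions = {}, []
    for line in source:
        line = line.split(";")[0].strip()       # strip comments and whitespace
        if not line:
            continue
        if line.endswith(":"):                  # label definition
            symbols[line[:-1]] = len(instructions)
            continue
        instructions.append(line.split())

    # Pass 2: resolve labels and emit one 16-bit word per instruction.
    program = []
    for op, *args in instructions:
        operand = args[0] if args else "0"
        value = symbols[operand] if operand in symbols else int(operand)
        program.append((OPCODES[op] << 8) | (value & 0xFF))
    return program

demo = ["start:", "LOAD 10", "ADD 1", "STORE 10", "JUMP start ; loop forever", "HALT"]
print([hex(word) for word in assemble(demo)])
```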

  2. Security in Computer Applications

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    Computer security has been an increasing concern for IT professionals for a number of years, yet despite all the efforts, computer systems and networks remain highly vulnerable to attacks of different kinds. Design flaws and security bugs in the underlying software are among the main reasons for this. This lecture addresses the following question: how to create secure software? The lecture starts with a definition of computer security and an explanation of why it is so difficult to achieve. It then introduces the main security principles (like least-privilege, or defense-in-depth) and discusses security in different phases of the software development cycle. The emphasis is put on the implementation part: most common pitfalls and security bugs are listed, followed by advice on best practice for security development. The last part of the lecture covers some miscellaneous issues like the use of cryptography, rules for networking applications, and social engineering threats. This lecture was first given on Thursd...

  3. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. It also introduces the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  4. Computational Deception

    NARCIS (Netherlands)

    Nijholt, Antinus; Acosta, P.S.; Cravo, P.

    2010-01-01

    In the future our daily life interactions with other people, with computers, robots and smart environments will be recorded and interpreted by computers or embedded intelligence in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behaviour, and our

  5. Grid Computing

    Science.gov (United States)

    Foster, Ian

    2001-08-01

    The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.

  6. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  7. Quantum Computing

    Indian Academy of Sciences (India)

    Quantum Computing - Building Blocks of a Quantum Computer. C S Vijay and Vishal Gupta. General Article, Volume 5, Issue 9, September 2000, pp. 69-81.

  8. Cloud Computing

    Indian Academy of Sciences (India)

    IAS Admin

    2014-03-01

    Mar 1, 2014 ... decade in computing. In this article we define cloud computing, various services available on the cloud infrastructure, and the different types of cloud. We then discuss the technological trends which have led to its emergence, its advantages and disadvantages, and the applications which are appropriate ...

  9. Computer Insecurity.

    Science.gov (United States)

    Wilson, David L.

    1994-01-01

    College administrators recently appealed to students and faculty to change their computer passwords after security experts announced that tens of thousands had been stolen by computer hackers. Federal officials are investigating. Such attacks are not uncommon, but the most effective solutions are either inconvenient or cumbersome. (MSE)

  10. Quantum Computing

    Indian Academy of Sciences (India)

    In the first part of this article, we had looked at how quantum physics can be harnessed to make the building blocks of a quantum computer. In this concluding part, we look at algorithms which can exploit the power of this computational device, and some practical difficulties in building such a device. Quantum Algorithms.

  11. Cloud Computing

    Indian Academy of Sciences (India)

    IAS Admin

    2014-03-01

    Mar 1, 2014 ... Thus the availability of computing as a utility which allows organizations to pay service providers for what they use and eliminates the need to budget huge amounts to buy and maintain large computing infrastructure is a welcome development. Amazon, an e-commerce company, started operations in 1995.

  12. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    this understanding could entail in terms of developing new expressional appearances of computational technology, new ways of working with it, and new technological possibilities. The investigations are carried out in relation to, or as part of three experiments with computers and materials (PLANKS, Copper...

  13. BUC implementation in Slovakia

    International Nuclear Information System (INIS)

    Chrapciak, V.; Vaclav, J.

    2009-01-01

    Improved calculation methods allow one to take credit for the reactivity reduction associated with fuel burnup. This means reducing the analysis conservatism while maintaining an adequate criticality safety margin. Application of burnup credit requires knowledge of the reactivity state of the irradiated fuel for which credit is taken. The isotopic inventory and reactivity have to be calculated with validated codes. In Slovakia we use Gd2 fuel with a maximal enrichment of fuel pins of 4.4%. Our transport and storage basket KZ-48 with boron steel is licensed for fresh fuel with an enrichment of 4.4%. In the near future (2011 or 2012) we will use a new fuel with a maximal enrichment of fuel pins of 4.9%. For this fuel we plan to use the existing KZ-48 with application of burnup credit. In cooperation with the Slovak Nuclear Regulatory Authority we started, several years ago, the process of burnup credit implementation in Slovakia for WWER-440 reactors. We have already prepared a methodology based on the IAEA methodology. We have validated the computational systems (SCALE 5.1 already, SCALE 6 in progress). The Slovak Nuclear Regulatory Authority will prepare a regulation on the application of burnup credit in Slovakia. The last item is the preparation of safety reports (for transport and storage) for the new fuel with an average enrichment of 4.87% in the KZ-48 basket with application of burnup credit. (Authors)

  14. BUC implementation in Slovakia

    International Nuclear Information System (INIS)

    Chrapciak, V.; Vaclav, J.

    2009-01-01

    Improved calculation methods allow one to take credit for the reactivity reduction associated with fuel burnup. This means reducing the analysis conservatism while maintaining an adequate criticality safety margin. Application of burnup credit (BUC) requires knowledge of the reactivity state of the irradiated fuel for which BUC is taken. The isotopic inventory and reactivity have to be calculated with validated codes. In Slovakia we use Gd2 fuel with a maximal enrichment of fuel pins of 4.4%. Our transport and storage basket KZ-48 with boron steel is licensed for fresh fuel with an enrichment of 4.4%. In the near future (2011 or 2012) we will use a new fuel with a maximal enrichment of fuel pins of 4.9%. For this fuel we plan to use the existing KZ-48 with BUC application. In cooperation with the Slovak Nuclear Regulatory Authority (UJD) we started, several years ago, the process of BUC implementation in Slovakia for VVER-440 reactors. We have already prepared a methodology based on the IAEA methodology. We have validated the computational systems (SCALE 5.1 already, SCALE 6 in progress). UJD will prepare a regulation on BUC application in Slovakia. The last item is the preparation of safety reports (for transport and storage) for the new fuel with an average enrichment of 4.87% in the KZ-48 basket with BUC application.

  15. Future of fusion implementation

    International Nuclear Information System (INIS)

    Beardsworth, E.; Powell, J.R.

    1978-01-01

    For fusion to become available for commercial use in the 21st century, R and D must be undertaken now. But it is hard to justify these expenditures with a cost/benefit oriented assessment methodology, because of both the time-frame and the uncertainty of the future benefits. Focusing on the factors most relevant for current consideration of fusion's commercial prospects, i.e., consumption levels and the outcomes for fission, solar, and coal, many possible futures of the US energy system are posited and analyzed under various assumptions about costs. The Reference Energy System approach was modified to establish both an appropriate degree of detail and explicit time dependence, and a computer code used to organize the relevant data and to perform calculations of system cost (annual and discounted present value), resource use, and residuals that are implied by the consumptions levels and technology mix in each scenario. Not unreasonable scenarios indicate benefits in the form of direct cost savings, which may well exceed R and D costs, which could be attributed to the implementation of fusion

  16. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  17. Computational Streetscapes

    Directory of Open Access Journals (Sweden)

    Paul M. Torrens

    2016-09-01

    Full Text Available Streetscapes have presented a long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber

  18. Implementation of Steiner point of fuzzy set.

    Science.gov (United States)

    Liang, Jiuzhen; Wang, Dejiang

    2014-01-01

    This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set. Two strategies for computing the Steiner point of a fuzzy set are proposed. One is a linear combination of Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which tries to find the optimal α-cut set approximating the fuzzy set. Stability analysis of the Steiner point of a fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied to implement the Steiner point of a fuzzy image; both strategies show their own advantages in computing the Steiner point of a fuzzy set.
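
    A rough numerical sketch of the first strategy (a linear combination of Steiner points of crisp α-cuts) is given below, assuming 2-D convex α-cuts represented as polygons; the α-levels, shapes, and uniform weights are illustrative assumptions, not the scheme proposed in the paper.

```python
import numpy as np

def steiner_point(vertices, n_angles=3600):
    """Steiner point of a 2-D convex polygon via its support function:
    s(K) = (1/pi) * integral over the unit circle of h_K(u) * u."""
    theta = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    u = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # unit directions
    h = (vertices @ u.T).max(axis=0)                       # support function h_K(u)
    return 2.0 * (h[:, None] * u).mean(axis=0)             # (1/pi) * integral

# Illustrative fuzzy set given by nested convex alpha-cuts (higher alpha, smaller set).
alpha_cuts = {
    0.2: np.array([[0, 0], [6, 0], [6, 4], [0, 4]], float),
    0.5: np.array([[1, 1], [5, 1], [5, 3], [1, 3]], float),
    0.9: np.array([[2, 1.5], [4, 1.5], [4, 2.5], [2, 2.5]], float),
}

# Strategy 1: a (here uniform) linear combination of the crisp Steiner points.
points = {a: steiner_point(v) for a, v in alpha_cuts.items()}
weight = 1.0 / len(alpha_cuts)
print(sum(weight * p for p in points.values()))
```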

  19. A Computational Implementation of a Bottom-up Visual Attention Model Applied to Natural Scenes

    Directory of Open Access Journals (Sweden)

    Juan F. Ramírez Villegas

    2011-12-01

    Full Text Available The bottom-up visual attention model proposed by Itti et al., 2000 [1], has been a popular model insofar as it exhibits certain neurobiological evidence of primate vision. This work complements the computational model of this phenomenon with the realistic dynamics of a neural network. The approach is based on the existence of topographical maps representing the saliency of the objects in the visual field, which are combined into a general representation (the saliency map); this representation is the input of a dynamic neural network whose local and global collaborative and competitive interactions converge on the main particularities (objects) of the scene.
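
    As a highly simplified illustration of the saliency-map input described above, the sketch below computes a center-surround contrast on the intensity channel only, using Gaussian blurs at two scales. The full Itti-style model also uses colour and orientation channels, multiple scales, and normalisation; the scales and the toy scene here are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_saliency(image, center_sigma=2.0, surround_sigma=8.0):
    """Center-surround difference on the intensity channel of an RGB image."""
    intensity = image.mean(axis=2)                          # crude intensity channel
    center = gaussian_filter(intensity, center_sigma)       # fine scale
    surround = gaussian_filter(intensity, surround_sigma)   # coarse scale
    saliency = np.abs(center - surround)                    # center-surround contrast
    return saliency / (saliency.max() + 1e-12)              # normalise to [0, 1]

# Toy scene: a bright square on a dark background dominates the map.
scene = np.zeros((128, 128, 3))
scene[50:70, 60:80, :] = 1.0
sal = intensity_saliency(scene)
print("most salient location:", np.unravel_index(sal.argmax(), sal.shape))
```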

  20. Computational fluid dynamics on a massively parallel computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    A finite difference code was implemented for the compressible Navier-Stokes equations on the Connection Machine, a massively parallel computer. The code is based on the ARC2D/ARC3D program and uses the implicit factored algorithm of Beam and Warming. The code uses odd-even elimination to solve linear systems. Timings and computation rates are given for the code, and a comparison is made with a Cray XMP.
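
    Odd-even (cyclic) elimination is attractive on massively parallel machines because every reduction step updates all kept equations independently. The sketch below is a serial illustration of the recursion for a scalar tridiagonal system, checked against a dense solve; it is not the ARC2D/ARC3D implementation, and the test system is made up.

```python
import numpy as np

def reduce_step(a, b, c, d):
    """Eliminate the odd-indexed unknowns, leaving a half-size tridiagonal system
    in the even-indexed unknowns (each reduced equation is formed independently)."""
    n = len(b)
    A, B, C, D = [], [], [], []
    for i in range(0, n, 2):
        na, nc, nb, nd = 0.0, 0.0, b[i], d[i]
        if i - 1 >= 0:                       # fold in equation i-1
            m = a[i] / b[i - 1]
            na, nb, nd = -m * a[i - 1], nb - m * c[i - 1], nd - m * d[i - 1]
        if i + 1 < n:                        # fold in equation i+1
            m = c[i] / b[i + 1]
            nc, nb, nd = -m * c[i + 1], nb - m * a[i + 1], nd - m * d[i + 1]
        A.append(na); B.append(nb); C.append(nc); D.append(nd)
    return map(np.array, (A, B, C, D))

def cyclic_reduction(a, b, c, d):
    """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] (with a[0] = c[-1] = 0)."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    x = np.zeros(n)
    x[0::2] = cyclic_reduction(*reduce_step(a, b, c, d))
    for i in range(1, n, 2):                 # back-substitute the odd-indexed unknowns
        x[i] = (d[i] - a[i] * x[i - 1] - (c[i] * x[i + 1] if i + 1 < n else 0.0)) / b[i]
    return x

# Check against a dense solve on a random, diagonally dominant system.
rng = np.random.default_rng(1)
n = 17
a, c = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
a[0] = c[-1] = 0.0
b = 4.0 + rng.uniform(0, 1, n)
d = rng.uniform(-1, 1, n)
dense = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(dense, d)))
```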