WorldWideScience

Sample records for computing fy10-11 implementation

  1. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    Carnes, B

    2009-06-08

    was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1 Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2 Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3 Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  2. Quantum computers: Definition and implementations

    International Nuclear Information System (INIS)

    Perez-Delgado, Carlos A.; Kok, Pieter

    2011-01-01

    The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. Therefore, the question is what are the general criteria for implementing quantum computers. To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: Any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.

  3. Visual implementation of computer communication

    OpenAIRE

    Gunnarsson, Tobias; Johansson, Hans

    2010-01-01

    Communication is a fundamental part of life, and during the 20th century several new ways of communicating were developed, from the first telegraph, which made it possible to send messages over long distances, to radio communication and the telephone. In the last decades, computer-to-computer communication at high speed has become increasingly important, and so has the need for understanding computer communication. Since data communication today works at speeds that are so high...

  4. Implementation of an embedded computer

    OpenAIRE

    Pikl, Bojan

    2011-01-01

    The goal of this thesis is to describe the production of an embedded computer. The thesis describes the development and production of an embedded computer for the medical diode laser DL30 that is being developed at Robomed d.o.o. The first part of the thesis describes the choice of hardware components; I mostly describe the technologies available on the market, and for every part of the computer installed and developed there is an argument for why we selected that exact part. The second part ...

  5. Implementation of cloud computing in higher education

    Science.gov (United States)

    Asniar; Budiawan, R.

    2016-04-01

    Cloud computing research is a new trend in distributed computing, where people have developed services and SOA (Service Oriented Architecture) based applications. This technology is very useful to implement, especially for higher education. This research studies the need and feasibility of cloud computing in higher education and then proposes a model of cloud computing services for higher education in Indonesia that can be implemented to support academic activities. Literature study is used as the research methodology to arrive at the proposed model of cloud computing in higher education. Finally, SaaS and IaaS are the cloud computing services proposed for implementation in higher education in Indonesia, and a hybrid cloud is the recommended service model.

  6. Fluctuating hyperfine interactions: computational implementation

    International Nuclear Information System (INIS)

    Zacate, M. O.; Evenson, W. E.

    2010-01-01

    A library of computational routines has been created to assist in the analysis of stochastic models of hyperfine interactions. We call this library the stochastic hyperfine interactions modeling library (SHIML). It provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental hyperfine interaction measurements can be calculated. Example model calculations are included in the SHIML package to illustrate its use and to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A22 can be neglected.
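
    SHIML itself is a C library; the Python sketch below only illustrates the eigen-decomposition step described above, in which the Blume matrix is diagonalized and a theoretical perturbation function is assembled from its eigenvalues. The 3x3 matrix and the equal weights are hypothetical placeholders rather than a physical model.

```python
# Illustrative sketch only (SHIML is written in C): diagonalize a toy Blume matrix
# and assemble a theoretical perturbation function from its eigenvalues.
import numpy as np

# Hypothetical 3x3 Blume matrix combining fluctuation rates and interaction frequencies
blume = np.array([[-2.0 + 1.0j, 1.0, 1.0],
                  [1.0, -2.0 + 0.5j, 1.0],
                  [1.0, 1.0, -2.0 - 1.5j]])

evals, evecs = np.linalg.eig(blume)               # step (3): eigenvalues/eigenvectors
weights = np.full(len(evals), 1.0 / len(evals))   # placeholder amplitudes

t = np.linspace(0.0, 10.0, 500)                   # time axis, arbitrary units
# Perturbation function as a weighted sum of complex exponentials of the eigenvalues
G = np.real(sum(w * np.exp(lam * t) for w, lam in zip(weights, evals)))
```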

  7. Implementing and developing cloud computing applications

    CERN Document Server

    Sarna, David E Y

    2010-01-01

    From small start-ups to major corporations, companies of all sizes have embraced cloud computing for the scalability, reliability, and cost benefits it can provide. It has even been said that cloud computing may have a greater effect on our lives than the PC and dot-com revolutions combined.Filled with comparative charts and decision trees, Implementing and Developing Cloud Computing Applications explains exactly what it takes to build robust and highly scalable cloud computing applications in any organization. Covering the major commercial offerings available, it provides authoritative guidan

  8. FPGA Implementation of Computer Vision Algorithm

    OpenAIRE

    Zhou, Zhonghua

    2014-01-01

    Computer vision algorithms, which play a significant role in vision processing, are widely applied in many areas such as geological surveying, traffic management and medical care. Most situations require the processing to be real-time, in other words, as fast as possible. Field Programmable Gate Arrays (FPGAs) have the advantage of a parallel fabric in programming, compared to the serial execution of CPUs, which makes the FPGA a perfect platform for implementing vision algorithms. The...

  9. Computation and parallel implementation for early vision

    Science.gov (United States)

    Gualtieri, J. Anthony

    1990-01-01

    The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) into image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher level vision tasks including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.
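
    As a concrete illustration of the correlation-based matching mentioned in item (2), the serial Python sketch below (not the Massively Parallel Processor code) scores an image patch against shifted windows of a second, synthetic image using normalized cross-correlation and picks the disparity with the highest score.

```python
# Serial sketch of correlation-based matching on synthetic images (not the MPP code).
import numpy as np

rng = np.random.default_rng(0)
left = rng.random((64, 64))
right = np.roll(left, 3, axis=1) + 0.01 * rng.random((64, 64))   # true shift: 3 pixels

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

patch = left[20:36, 20:36]
scores = [ncc(patch, right[20:36, 20 + d:36 + d]) for d in range(8)]
best_disparity = int(np.argmax(scores))      # expected to recover the 3-pixel shift
print(best_disparity, max(scores))
```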

  10. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    Full Text Available In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution’s physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning

  11. Quantum computing implementations with neutral particles

    DEFF Research Database (Denmark)

    Negretti, Antonio; Treutlein, Philipp; Calarco, Tommaso

    2011-01-01

    We review quantum information processing with cold neutral particles, that is, atoms or polar molecules. First, we analyze the best suited degrees of freedom of these particles for storing quantum information, and then we discuss both single- and two-qubit gate implementations. We focus our discussion mainly on collisional quantum gates, which are best suited for atom-chip-like devices, as well as on gate proposals conceived for optical lattices. Additionally, we analyze schemes both for cold atoms confined in optical cavities and hybrid approaches to entanglement generation, and we show how optimal control theory might be a powerful tool to enhance the speed up of the gate operations as well as to achieve high fidelities required for fault tolerant quantum computation.

  12. Implementing interactive computing in an object-oriented environment

    Directory of Open Access Journals (Sweden)

    Frederic Udina

    2000-04-01

    Full Text Available Statistical computing when input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
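
    The flow-control idea described above can be sketched compactly; the Python below is an illustrative stand-in for the paper's object-oriented implementation, modelling computations as nodes of a directed graph that cache their values and are recomputed only after an upstream node has been invalidated. The histogram-like pipeline is a made-up example.

```python
# Illustrative dependency-graph control of computational flow (not the paper's code).
class Node:
    def __init__(self, compute, parents=()):
        self.compute, self.parents = compute, list(parents)
        self.children, self._value, self._dirty = [], None, True
        for p in self.parents:
            p.children.append(self)

    def invalidate(self):
        # Mark this node and every descendant as stale.
        self._dirty = True
        for c in self.children:
            c.invalidate()

    def value(self):
        # Lazy evaluation: recompute only if this node is stale.
        if self._dirty:
            self._value = self.compute(*[p.value() for p in self.parents])
            self._dirty = False
        return self._value

# Made-up pipeline: raw data -> bin counts -> relative frequencies (histogram heights)
data = Node(lambda: [1, 2, 2, 3, 3, 3])
counts = Node(lambda xs: {x: xs.count(x) for x in set(xs)}, [data])
heights = Node(lambda c: {k: v / sum(c.values()) for k, v in c.items()}, [counts])

print(heights.value())              # computes the whole chain once
data.compute = lambda: [1, 1, 2]    # the input changes...
data.invalidate()                   # ...so data and its descendants are marked stale
print(heights.value())              # and are recomputed on the next request
```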

  13. Methodology of Implementation of Computer Forensics

    OpenAIRE

    Gelev, Saso; Golubovski, Roman; Hristov, Risto; Nikolov, Elenior

    2013-01-01

    Compared to other sciences, computer forensics (digital forensics) is a relatively young discipline. It was established in 1999 and it has been an irreplaceable tool in sanctioning cybercrime ever since. Good knowledge of computer forensics can be really helpful in uncovering a committed crime. Not adhering to the methodology of computer forensics, however, makes the obtained evidence invalid/irrelevant and as such it cannot be used in legal proceedings. This paper is to explain the methodolo...

  14. Implementing ASPEN on the CRAY computer

    International Nuclear Information System (INIS)

    Duerre, K.H.; Bumb, A.C.

    1981-01-01

    This paper describes our experience in converting the ASPEN program for use on our CRAY computers at the Los Alamos National Laboratory. The CRAY computer is two-to-five times faster than a CDC-7600 for scalar operations, is equipped with up to two million words of high-speed storage, and has vector processing capability. Thus, the CRAY is a natural candidate for programs that are the size and complexity of ASPEN. Our approach to converting ASPEN and the conversion problems are discussed, including our plans for optimizing the program. Comparisons of run times for test problems between the CRAY and IBM 370 computer versions are presented

  15. Efficient Computer Implementations of Fast Fourier Transforms.

    Science.gov (United States)

    1980-12-01

    Fragments recovered from the report: an algorithm-selection step determines the faster algorithm between the WFTA and the PFA (Table 4.6); for N = 420, the WFTA requires 1296 multiplications and 11352 additions versus 2528 multiplications and 10956 additions for the PFA; the number of real additions is 24tN/4 + 2(3tN/4) = 15tN/2 (G.8); all odd prime factors equal to or greater than 5 use the general transform section.

  16. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared memory machine Cray Y-MP 8/864 and the distributed memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.
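
    For readers unfamiliar with the parallelization pattern, the sketch below mimics the database-partitioning approach in plain Python with a placeholder scoring function; it is not the ported BLAST code and makes no claim about BLAST's actual statistics or data formats.

```python
# Hedged illustration of database partitioning across workers (not the ported BLAST code).
from multiprocessing import Pool

QUERY = "ACDEFGHIK"

def naive_score(query, subject):
    # Count matching positions at offset 0; a trivial placeholder for real BLAST scoring.
    return sum(q == s for q, s in zip(query, subject))

def search_chunk(chunk):
    return [(name, naive_score(QUERY, seq)) for name, seq in chunk]

if __name__ == "__main__":
    database = [("seq%d" % i, "ACDEFGHIK"[: (i % 9) + 1] + "MNPQ") for i in range(1000)]
    chunks = [database[i::4] for i in range(4)]          # partition the database 4 ways
    with Pool(processes=4) as pool:
        hits = [h for part in pool.map(search_chunk, chunks) for h in part]
    hits.sort(key=lambda x: x[1], reverse=True)          # merge and rank the hits
    print(hits[:3])
```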

  17. Software Defined Radio Datalink Implementation Using PC-Type Computers

    National Research Council Canada - National Science Library

    Zafeiropoulos, Georgios

    2003-01-01

    The objective of this thesis was to examine the feasibility of implementation and the performance of a Software Defined Radio datalink, using a common PC type host computer and a high level programming language...

  18. Minimal computational-space implementation of multiround quantum protocols

    International Nuclear Information System (INIS)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Chiribella, Giulio

    2011-01-01

    A single-party strategy in a multiround quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here, we provide an efficient realization in terms of computational-space resources.

  19. Model to Implement Virtual Computing Labs via Cloud Computing Services

    OpenAIRE

    Washington Luna Encalada; José Luis Castillo Sequera

    2017-01-01

    In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the...

  20. Implementing a modular system of computer codes

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access by user input instruction or directions from automated procedures and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications, with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability, and is intended to be informative and useful to anyone developing a modular code system of comparable sophistication. Overall, the report summarizes, in a general way, the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background against which work on HTGR reactor physics is being carried out.

  1. Implementation of DFT application on ternary optical computer

    Science.gov (United States)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Owing to its characteristics of a huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which need a lot of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of the ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
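
    The parallelism referred to above can be pictured on a conventional computer: each DFT output bin is an independent inner product with one row of the DFT matrix, so rows can be evaluated all at once ("full parallel") or in blocks ("partial parallel"). The NumPy sketch below is only an analogy for the TOC scheme.

```python
# Row-wise parallelism of the DFT, sketched on a CPU (an analogy, not the TOC scheme).
import numpy as np

N = 8
x = np.arange(N, dtype=complex)
W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)   # DFT matrix

X_full = W @ x                                   # "full parallel": all rows at once
X_blocks = np.concatenate([W[i:i + 2] @ x for i in range(0, N, 2)])  # "partial parallel"

assert np.allclose(X_full, np.fft.fft(x))        # both agree with the FFT
assert np.allclose(X_blocks, X_full)
```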

  2. Potential implementation of reservoir computing models based on magnetic skyrmions

    Science.gov (United States)

    Bourianoff, George; Pinna, Daniele; Sitte, Matthias; Everschor-Sitte, Karin

    2018-05-01

    Reservoir Computing is a type of recursive neural network commonly used for recognizing and predicting spatio-temporal events relying on a complex hierarchy of nested feedback loops to generate a memory functionality. The Reservoir Computing paradigm does not require any knowledge of the reservoir topology or node weights for training purposes and can therefore utilize naturally existing networks formed by a wide variety of physical processes. Most efforts to implement reservoir computing prior to this have focused on utilizing memristor techniques to implement recursive neural networks. This paper examines the potential of magnetic skyrmion fabrics and the complex current patterns which form in them as an attractive physical instantiation for Reservoir Computing. We argue that their nonlinear dynamical interplay resulting from anisotropic magnetoresistance and spin-torque effects allows for an effective and energy efficient nonlinear processing of spatial temporal events with the aim of event recognition and prediction.
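
    As background for the paradigm described above, the sketch below is a conventional software echo state network standing in for the skyrmion fabric: the recurrent reservoir is random and fixed, and only the linear readout is trained, here by ridge regression on a toy next-sample prediction task.

```python
# Software echo-state-network sketch of Reservoir Computing (not a skyrmion model).
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

u = np.sin(0.2 * np.arange(1000))[:, None]       # input signal
target = np.roll(u[:, 0], -1)                    # task: predict the next sample

states = np.zeros((len(u), n_res))
s = np.zeros(n_res)
for t in range(len(u)):
    s = np.tanh(W @ s + W_in @ u[t])             # fixed, untrained reservoir dynamics
    states[t] = s

# Train only the readout, by ridge regression on the reservoir states
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)
prediction = states @ W_out
```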

  3. Implementation of computer security at nuclear facilities in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Lochthofen, Andre; Sommer, Dagmar [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    2013-07-01

    In recent years, electrical and I&C components in nuclear power plants (NPPs) were replaced by software-based components. Due to the increased number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples for the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  4. Implementation of computer security at nuclear facilities in Germany

    International Nuclear Information System (INIS)

    Lochthofen, Andre; Sommer, Dagmar

    2013-01-01

    In recent years, electrical and I&C components in nuclear power plants (NPPs) were replaced by software-based components. Due to the increased number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples for the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  5. STAGE2 macroprocessor. [Implemented on ICL 1905 computer

    Energy Technology Data Exchange (ETDEWEB)

    Zimanyi, M

    1975-01-01

    STAGE2 is a general purpose language-independent macroprocessor, developed by W. M. Waite, mainly for implementing portable software. The macroprocessor, itself a piece of highly portable software, was implemented on the ICL 1905 computer by bootstrapping. This report can serve as a user's manual of STAGE2. Some examples of the application of the processor for language extension, language translation, and text generation are given. (auth)

  6. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Abstraction to Implementation: A Two Stage Introduction to Computer Science.

    Science.gov (United States)

    Wolz, Ursula; Conjura, Edward

    A three-semester core curriculum for undergraduate computer science is proposed and described. Both functional and imperative programming styles are taught. The curriculum particularly addresses the problem of effectively presenting both abstraction and implementation. Two courses in the first semester emphasize abstraction. The next courses…

  8. Learning Computer Programming: Implementing a Fractal in a Turing Machine

    Science.gov (United States)

    Pereira, Hernane B. de B.; Zebende, Gilney F.; Moret, Marcelo A.

    2010-01-01

    It is common to start a course on computer programming logic by teaching the algorithm concept from the point of view of natural languages, but in a schematic way. In this sense we note that the students have difficulties in understanding and implementation of the problems proposed by the teacher. The main idea of this paper is to show that the…

  9. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  10. A computer architecture for the implementation of SDL

    Energy Technology Data Exchange (ETDEWEB)

    Crutcher, L A

    1989-01-01

    Finite State Machines (FSMs) are a part of well-established automata theory. The FSM model is useful in all stages of system design, from abstract specification to implementation in hardware. The FSM model has been studied as a technique in software design, and the implementation of this type of software considered. The Specification and Description Language (SDL) has been considered in detail as an example of this approach. The complexity of systems designed using SDL warrants their implementation through a programmed computer. A benchmark for the implementation of SDL has been established and the performance of SDL on three particular computer architectures investigated. Performance is judged according to this benchmark and also the ease of implementation, which is related to the confidence of a correct implementation. The implementation on 68000s and transputers is considered as representative of established and state-of-the-art microprocessors respectively. A third architecture that uses a processor that has been proposed specifically for the implementation of SDL is considered as a high-level custom architecture. Analysis and measurements of the benchmark on each architecture indicates that the execution time of SDL decreases by an order of magnitude from the 68000 to the transputer to the custom architecture. The ease of implementation is also greater when the execution time is reduced. A study of some real applications of SDL indicates that the benchmark figures are reflected in user-oriented measures of performance such as data throughput and response time. A high-level architecture such as the one proposed here for SDL can provide benefits in terms of execution time and correctness.

  11. The implementation of AI technologies in computer wargames

    Science.gov (United States)

    Tiller, John A.

    2004-08-01

    Computer wargames involve the most in-depth analysis of general game theory. The enumerated turns of a game like chess are dwarfed by the exponentially larger possibilities of even a simple computer wargame. Implementing challenging AI in computer wargames is an important goal in both the commercial and military environments. In the commercial marketplace, customers demand a challenging AI opponent when they play a computer wargame and are frustrated by a lack of competence on the part of the AI. In the military environment, challenging AI opponents are important for several reasons. A challenging AI opponent will force the military professional to avoid routine or set-piece approaches to situations and cause them to think much deeper about military situations before taking action. A good AI opponent would also include national characteristics of the opponent being simulated, thus providing the military professional with even more of a challenge in planning and approach. Implementing current AI technologies in computer wargames is a technological challenge. The goal is to join the needs of AI in computer wargames with the solutions of current AI technologies. This talk will address several of those issues, possible solutions, and currently unsolved problems.

  12. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  13. Cluster implementation for parallel computation within MATLAB software environment

    International Nuclear Information System (INIS)

    Santana, Antonio O. de; Dantas, Carlos C.; Charamba, Luiz G. da R.; Souza Neto, Wilson F. de; Melo, Silvio B. Melo; Lima, Emerson A. de O.

    2013-01-01

    A cluster for parallel computation with MATLAB software, the COCGT (Cluster for Optimizing Computing in Gamma ray Transmission methods), is implemented. The implementation corresponds to the creation of a local network of computers, the setup and configuration of software, and the accomplishment of cluster tests to determine and optimize data-processing performance. The COCGT implementation was required for the computation of data from gamma transmission measurements applied to fluid dynamics and tomography reconstruction in an FCC (Fluid Catalytic Cracking) cold pilot unit, as well as simulation data. As an initial test, the determination of the SVD (Singular Value Decomposition) of a random matrix with dimension (n, n), n=1000, using the modified Girco's law, revealed that COCGT was faster than the similar cluster reported in the literature [1], which operates under the same conditions. Solution of a system of linear equations provided a new test of COCGT performance: processing a square matrix with n=10000 took 27 s, and a square matrix with n=12000 took 45 s. To determine the cluster behavior in relation to 'parfor' (parallel for-loop) and 'spmd' (single program multiple data), two codes were used containing those two commands and the same problem: determination of the SVD of a square matrix with n=1000. Execution of the codes on COCGT showed: 1) for the code with 'parfor', performance improved as the number of labs increased from 1 to 8; 2) for the code with 'spmd', just 1 lab (core) was enough to process and give results in less than 1 s. A similar test was then run, with the difference that the SVD was determined from a square matrix with n=1500 for the code with 'parfor' and n=7000 for the code with 'spmd'. These results lead to the conclusions: 1) for the code with 'parfor', the behavior was the same as described above; 2) for the code with 'spmd', the behavior was the same but with even larger performance; it supports a

  14. Computational Toxicology as Implemented by the US EPA ...

    Science.gov (United States)

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T

  15. Vertical Load Distribution for Cloud Computing via Multiple Implementation Options

    Science.gov (United States)

    Phan, Thomas; Li, Wen-Syan

    Cloud computing looks to deliver software as a provisioned service to end users, but the underlying infrastructure must be sufficiently scalable and robust. In our work, we focus on large-scale enterprise cloud systems and examine how enterprises may use a service-oriented architecture (SOA) to provide a streamlined interface to their business processes. To scale up the business processes, each SOA tier usually deploys multiple servers for load distribution and fault tolerance, a scenario which we term horizontal load distribution. One limitation of this approach is that load cannot be distributed further when all servers in the same tier are loaded. In complex multi-tiered SOA systems, a single business process may actually be implemented by multiple different computation pathways among the tiers, each with different components, in order to provide resilience and scalability. Such multiple implementation options give opportunities for vertical load distribution across tiers. In this chapter, we look at a novel request routing framework for SOA-based enterprise computing with multiple implementation options that takes into account the options of both horizontal and vertical load distribution.

  16. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  17. A scalable implementation of RI-SCF on parallel computers

    International Nuclear Information System (INIS)

    Fruechtl, H.A.; Kendall, R.A.; Harrison, R.J.

    1996-01-01

    In order to avoid the integral bottleneck of conventional SCF calculations, the Resolution of the Identity (RI) method is used to obtain an approximate solution to the Hartree-Fock equations. In this approximation only three-center integrals are needed to build the Fock matrix. It has been implemented as part of the NWChem package of portable and scalable ab initio programs for parallel computers. Utilizing the V-approximation, both the Coulomb and exchange contribution to the Fock matrix can be calculated from a transformed set of three-center integrals which have to be precalculated and stored. A distributed in-core method as well as a disk based implementation have been programmed. Details of the implementation as well as the parallel programming tools used are described. We also give results and timings from benchmark calculations
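
    A compact way to see why only three-center integrals are needed is the RI Coulomb build sketched below; the integral tensors are random placeholders with plausible shapes, not real integrals, and the V-approximation and distributed storage details of the NWChem implementation are not reproduced.

```python
# Hedged sketch of a resolution-of-the-identity Coulomb build (placeholder tensors).
import numpy as np

nbf, naux = 10, 30
rng = np.random.default_rng(0)
eri3c = rng.random((nbf, nbf, naux))             # (mu nu | P) three-center integrals
metric = np.eye(naux) + 0.01 * rng.random((naux, naux))
metric = 0.5 * (metric + metric.T)               # (P | Q) symmetric, positive-definite metric
D = rng.random((nbf, nbf)); D = 0.5 * (D + D.T)  # density matrix placeholder

gamma = np.einsum("lsq,ls->q", eri3c, D)         # contract the density with (lambda sigma | Q)
coeff = np.linalg.solve(metric, gamma)           # apply the inverse metric (P|Q)^-1
J = np.einsum("mnp,p->mn", eri3c, coeff)         # Coulomb matrix J_mn from (mn|P) c_P
```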

  18. Implementing and Operating Computer Graphics in the Contemporary Chemistry Education

    Directory of Open Access Journals (Sweden)

    Olga Popovska

    2017-11-01

    Full Text Available Technology plays a crucial role in modern teaching, providing both educators and students with fundamental theoretical insights as well as supporting the interpretation of experimental data. In the long term it gives students a clear stake in their learning processes. Advancing in education furthermore largely depends on providing valuable experiences and tools through digital and computer literacy. The benefits of the computer are no exception in chemistry as a science. The major part of the computer revolution in the chemistry laboratory lies in the use of images, diagrams, molecular models, graphs and specialized chemistry programs. In this sense, the teacher can provide more interactive classes and numerous dynamic teaching methods along with advanced technology. All things considered, the aim of this article is to implement interactive teaching methods for chemistry subjects using chemistry computer graphics. A group of students (n = 30) aged 18–20 was tested using methods such as brainstorming, demonstration, working in pairs, and writing laboratory notebooks. The results showed that demonstration is the most acceptable interactive method (95%). This article is expected to be of high value to teachers and researchers of chemistry implementing interactive methods and operating computer graphics.

  19. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  20. The Implementation of Computer Data Processing Software for EAST NBI

    International Nuclear Information System (INIS)

    Zhang Xiaodan; Hu Chundong; Sheng Peng; Zhao Yuanzhe; Wu Deyun; Cui Qinglong

    2014-01-01

    One of the most important project missions of the neutral beam injectors is the implementation of 100 s neutral beam injection (NBI) at high power into the plasma of the EAST superconducting tokamak. Correspondingly, it is necessary to construct a high-speed and reliable computer data processing system for processing experimental data, covering data acquisition, data compression and storage, data decompression and query, as well as data analysis. The implementation of the computer data processing application software (CDPS) for EAST NBI is presented in this paper in terms of its functional structure and system realization. The software is programmed in the C language and runs on the Linux operating system, based on the TCP network protocol and multi-threading technology. The hardware mainly includes an industrial control computer (IPC), a data server, PXI DAQ cards and so on. This software has now been applied to the EAST NBI system, and experimental results show that the CDPS serves EAST NBI very well. (fusion engineering)

  1. Implementation of Computer Assisted Test Selection System in Local Governments

    Directory of Open Access Journals (Sweden)

    Abdul Azis Basri

    2016-05-01

    Full Text Available As an evaluative approach to civil-servant selection across all government areas, the Computer Assisted Test (CAT) selection system began to be applied in 2013. In its first nationwide implementation phase in 2014, the selection system ran into trouble in several areas, such as the registration procedure and the passing grade. The main objective of this essay is to describe the implementation of the new selection system for civil servants in local governments and to assess its effectiveness. The essay used a combination of literature study and field survey, with data collected through interviews, observations, and documentation from various sources; to analyze the collected data, it used data reduction, data display, and verification to draw conclusions. The results show that, despite problems in a few parts of the system, such as the registration phase, almost all phases of the implementation of the CAT selection system in local government areas, including preparation, implementation, and result processing, worked properly. The system also fulfilled two of the three effectiveness criteria for a selection system, namely accuracy and trustworthiness. Therefore, this selection system can be considered an effective way to select new civil servants. As a suggestion, local governments should prepare thoroughly for all phases of the test, establish good feedback as an evaluation mechanism, and work together with the central government to identify, fix and improve supporting infrastructure and the competency of local residents.

  2. Precision Medicine and PET/Computed Tomography: Challenges and Implementation.

    Science.gov (United States)

    Subramaniam, Rathan M

    2017-01-01

    Precision Medicine is about selecting the right therapy for the right patient, at the right time, specific to the molecular targets expressed by disease or tumors, in the context of patient's environment and lifestyle. Some of the challenges for delivery of precision medicine in oncology include biomarkers for patient selection for enrichment-precision diagnostics, mapping out tumor heterogeneity that contributes to therapy failures, and early therapy assessment to identify resistance to therapies. PET/computed tomography offers solutions in these important areas of challenges and facilitates implementation of precision medicine. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...
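
    The record above is a patent-style abstract; the Python below is only an illustrative analogue of the idea, not the claimed method: rows of a Hadamard matrix serve as +/-1 masks, each masked vector is condensed into a sign-bit signature, and a pair becomes a candidate when the query and data signatures agree under the same mask. The dimensions and the block-sum signature are arbitrary choices made for the sketch.

```python
# Illustrative analogue of mask-based signature matching (not the patented method).
import numpy as np
from scipy.linalg import hadamard

dim = 16
rng = np.random.default_rng(0)
masks = hadamard(dim)[1:5]                        # four +/-1 mask rows (skip the all-ones row)

def signature(vec, mask):
    masked = mask * vec                           # apply the mask elementwise
    blocks = masked.reshape(8, -1).sum(axis=1)    # coarse block sums of the masked vector
    return tuple((blocks > 0).astype(int))        # 8-bit sign pattern as the signature

query = rng.normal(size=dim)
data = query + 0.05 * rng.normal(size=dim)        # near-duplicate of the query
unrelated = rng.normal(size=dim)

q_sigs = [signature(query, m) for m in masks]
candidates_data = sum(signature(data, m) == q for m, q in zip(masks, q_sigs))
candidates_unrel = sum(signature(unrelated, m) == q for m, q in zip(masks, q_sigs))
print(candidates_data, candidates_unrel)          # the near-duplicate should match far more often
```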

  4. Visualising elastic anisotropy: theoretical background and computational implementation

    Science.gov (United States)

    Nordmann, J.; Aßmus, M.; Altenbach, H.

    2018-02-01

    In this article, we present the technical realisation for visualisations of characteristic parameters of the fourth-order elasticity tensor, which is classified by three-dimensional symmetry groups. Hereby, expressions for spatial representations of Young's modulus and bulk modulus as well as plane representations of shear modulus and Poisson's ratio are derived and transferred into a form comprehensible to computer algebra systems. Additionally, we present approaches for spatial representations of both latter parameters. These three- and two-dimensional representations are implemented in the software MATrix LABoratory. Exemplary representations of characteristic materials complete the present treatise.
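
    The central quantity behind the spatial representations of Young's modulus mentioned above is the directional modulus E(d) = 1/(d_i d_j d_k d_l S_ijkl). The sketch below, which is not the authors' MATLAB code, evaluates it from a fourth-order compliance tensor; an isotropic material with made-up constants is used so that the expected result, E(d) = E0 in every direction, acts as a check.

```python
# Directional Young's modulus from a fourth-order compliance tensor (isotropic check case).
import numpy as np

E0, nu = 200.0, 0.3                                  # illustrative constants (GPa, -)
I = np.eye(3)
# Isotropic compliance S_ijkl = -(nu/E0) d_ij d_kl + (1+nu)/(2E0) (d_ik d_jl + d_il d_jk)
S = (-(nu / E0) * np.einsum("ij,kl->ijkl", I, I)
     + (1 + nu) / (2 * E0) * (np.einsum("ik,jl->ijkl", I, I)
                              + np.einsum("il,jk->ijkl", I, I)))

def youngs_modulus(d, S):
    d = np.asarray(d, float) / np.linalg.norm(d)     # unit direction vector
    return 1.0 / np.einsum("i,j,k,l,ijkl->", d, d, d, d, S)

for d in ([1, 0, 0], [1, 1, 0], [1, 1, 1]):
    print(d, youngs_modulus(d, S))                   # all equal E0 for an isotropic body
```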

  5. Implementation of the Facility Integrated Inventory Computer System (FICS)

    International Nuclear Information System (INIS)

    McEvers, J.A.; Krichinsky, A.M.; Layman, L.R.; Dunnigan, T.H.; Tuft, R.M.; Murray, W.P.

    1980-01-01

    This paper describes a computer system which has been developed for nuclear material accountability and implemented in an active radiochemical processing plant involving remote operations. The system possesses the following features: comprehensive, timely records of the location and quantities of special nuclear materials; automatically updated book inventory files on the plant and sub-plant levels of detail; material transfer coordination and cataloging; automatic inventory estimation; sample transaction coordination and cataloging; automatic on-line volume determination, limit checking, and alarming; extensive information retrieval capabilities; and terminal access and application software monitoring and logging

  6. Implementation of Scientific Computing Applications on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Guochun Shi

    2009-01-01

    Full Text Available The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.

  7. The Ability of implementing Cloud Computing in Higher Education - KRG

    Directory of Open Access Journals (Sweden)

    Zanyar Ali Ahmed

    2017-06-01

    Full Text Available Cloud computing is a new technology. CC is an online service that can store and retrieve information without the requirement for physical access to the files on hard drives. The information is available on a system or server where it can be accessed by clients when needed. Universities of the Kurdistan Regional Government (KRG), which lack ICT infrastructure, can use this new technology because of its economic advantages, enhanced data management, better maintenance, high performance, and improved availability and accessibility, thereby achieving easy maintenance of organizational institutes. The aim of this research is to find the ability and possibility to implement cloud computing in higher education of the KRG. This research will help the universities to start establishing cloud computing in their services. A survey has been conducted to evaluate the CC services that have been applied in KRG universities. The results showed that most of the KRG universities are using SaaS. MHE-KRG universities and institutions are confronting many challenges and concerns in terms of security, user privacy, lack of integration with current systems, and data and document ownership.

  8. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    International Nuclear Information System (INIS)

    Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.

    2009-01-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
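
    The three distribution strategies named above can be sketched schematically. The serial Python below only mimics how PLP, URAN, and PREF would repartition a set of chemistry evaluations ("particles") among processors; the real implementation redistributes them via MPI message passing inside x2f_mpi, and the affinity function here is a hypothetical stand-in for ISAT-table locality.

```python
# Schematic, serial mock-up of the PLP / URAN / PREF distribution strategies.
import random

def plp(work_per_proc):
    # Purely local processing: every processor keeps its own particles.
    return {p: list(items) for p, items in work_per_proc.items()}

def uran(work_per_proc):
    # Uniformly random distribution: pool all particles and deal them out evenly.
    pool = [w for items in work_per_proc.values() for w in items]
    random.shuffle(pool)
    procs = list(work_per_proc)
    return {p: pool[i::len(procs)] for i, p in enumerate(procs)}

def pref(work_per_proc, affinity):
    # Preferential distribution: send each particle to the processor whose ISAT
    # table is most likely to hold a nearby entry (affinity is a stand-in metric).
    procs = list(work_per_proc)
    out = {p: [] for p in procs}
    for items in work_per_proc.values():
        for w in items:
            out[max(procs, key=lambda p: affinity(p, w))].append(w)
    return out

work = {0: list(range(0, 40)), 1: list(range(40, 50)), 2: []}    # imbalanced load
print({p: len(v) for p, v in plp(work).items()})                 # stays imbalanced
print({p: len(v) for p, v in uran(work).items()})                # roughly balanced
print({p: len(v) for p, v in pref(work, lambda p, w: -abs(w % 3 - p)).items()})
```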

  9. COMPUTATION FORMAT computer codes X4TOC4 and PLOTC4. Implementing and Testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskette containing the COMPUTATION FORMAT codes X4TOC4 and PLOTC4 by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a single diskette. (author)

  10. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms being parallel in nature can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma which has been theorized to have existed in very early stages of the evolution of the universe by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones imposes enormous computation demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  11. Unconventional methods of imaging: computational microscopy and compact implementations

    Science.gov (United States)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  12. Implementing Computer-Based Procedures: Thinking Outside the Paper Margins

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; Bly, Aaron

    2017-06-01

    In the past year there has been increased interest from the nuclear industry in adopting the use of electronic work packages and computer-based procedures (CBPs) in the field. The goal is to incorporate the use of technology in order to meet the Nuclear Promise requirements of reducing costs, improving efficiency, and decreasing human error rates of plant operations. Researchers, together with the nuclear industry, have been investigating the benefits an electronic work package system and specifically CBPs would have over current paper-based procedure practices. There are several classifications of CBPs ranging from a straight copy of the paper-based procedure in PDF format to a more intelligent dynamic CBP. A CBP system offers a vast variety of improvements, such as context driven job aids, integrated human performance tools (e.g., placekeeping and correct component verification), and dynamic step presentation. The latter means that the CBP system could display only the relevant steps based on operating mode, plant status, and the task at hand. The improvements can lead to reduction of the worker’s workload and human error by allowing the worker to focus more on the task at hand. A team of human factors researchers at the Idaho National Laboratory studied and developed design concepts for CBPs for field workers between 2012 and 2016. The focus of the research was to present information in a procedure in a manner that leveraged the dynamic and computational capabilities of a handheld device, allowing the worker to focus more on the task at hand than on the administrative processes currently applied when conducting work in the plant. As a part of the research the team identified types of work, instructions, and scenarios where the transition to a dynamic CBP system might not be as beneficial as it would for other types of work in the plant. In most cases the decision to use a dynamic CBP system and utilize the dynamic capabilities gained will be beneficial to the worker

  13. The implementation of CP1 computer code in the Honeywell Bull computer in Brazilian Nuclear Energy Commission (CNEN)

    International Nuclear Information System (INIS)

    Couto, R.T.

    1987-01-01

    The implementation of the CP1 computer code on the Honeywell Bull computer at the Brazilian Nuclear Energy Commission is presented. CP1 is a computer code used to solve the point kinetics equations with Doppler feedback from the system temperature variation, based on the Newton cooling equation [pt
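
    The record does not reproduce CP1's equations or data. As a rough illustration of the kind of model it describes (point kinetics with Doppler feedback driven by a Newton cooling law), a minimal Python sketch with one delayed-neutron group might look as follows; all parameter values and symbol names are illustrative assumptions, not taken from CP1.

    # Minimal point-kinetics sketch with Doppler feedback via a Newton cooling law.
    # One delayed-neutron group, explicit Euler integration; every parameter value
    # below is an illustrative assumption, not data from the CP1 code.

    def simulate(rho_ext=0.001, t_end=10.0, dt=1e-4):
        beta, lam, Lam = 0.0065, 0.08, 1e-4      # delayed fraction, decay const, generation time
        alpha_d = -2e-5                          # Doppler reactivity coefficient (1/K)
        h, a, T0 = 0.1, 0.5, 300.0               # cooling rate, heating per unit power, coolant temp
        n, c, T = 1.0, beta / (lam * Lam), T0    # equilibrium initial conditions
        t = 0.0
        while t < t_end:
            rho = rho_ext + alpha_d * (T - T0)   # net reactivity with temperature feedback
            dn = ((rho - beta) / Lam) * n + lam * c
            dc = (beta / Lam) * n - lam * c
            dT = a * n - h * (T - T0)            # Newton cooling toward the coolant temperature
            n, c, T, t = n + dt * dn, c + dt * dc, T + dt * dT, t + dt
        return n, T

    if __name__ == "__main__":
        power, temperature = simulate()
        print(f"relative power = {power:.3f}, fuel temperature = {temperature:.1f} K")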

  14. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel; Buse, Gerrit; Pfluger, Dirk

    2012-01-01

    of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute

  15. Three-dimensional pseudo-random number generator for implementing in hybrid computer systems

    International Nuclear Information System (INIS)

    Ivanov, M.A.; Vasil'ev, N.P.; Voronin, A.V.; Kravtsov, M.Yu.; Maksutov, A.A.; Spiridonov, A.A.; Khudyakova, V.I.; Chugunkov, I.V.

    2012-01-01

    The algorithm for generating pseudo-random numbers oriented to implementation by using hybrid computer systems is considered. The proposed solution is characterized by a high degree of parallel computing [ru
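
    The abstract does not give the algorithm itself. One generic way to obtain the parallelism it mentions is a counter-based generator, in which each value is a pure function of a seed and a 3-D index, so any lattice point can be generated independently on any processor. The sketch below uses the well-known splitmix64 finalizer as the mixing function; it is a stand-in for illustration, not the generator proposed in the paper.

    # Counter-based pseudo-random numbers: each value is a pure function of a seed
    # and a 3-D lattice coordinate, so any point can be generated independently and
    # in parallel. Generic sketch (splitmix64-style finalizer), not the paper's algorithm.

    MASK = (1 << 64) - 1

    def mix64(x: int) -> int:
        """splitmix64 finalizer: avalanche a 64-bit integer."""
        x = (x + 0x9E3779B97F4A7C15) & MASK
        x = ((x ^ (x >> 30)) * 0xBF58476D1CE4E5B9) & MASK
        x = ((x ^ (x >> 27)) * 0x94D049BB133111EB) & MASK
        return x ^ (x >> 31)

    def rand3d(i: int, j: int, k: int, seed: int = 0) -> float:
        """Uniform deviate in [0, 1) attached to lattice point (i, j, k)."""
        h = mix64(seed)
        for coord in (i, j, k):
            h = mix64(h ^ (coord & MASK))
        return h / 2.0**64

    if __name__ == "__main__":
        # Values can be produced in any order, on any processor, with identical results.
        print(rand3d(1, 2, 3), rand3d(1000, 0, 7))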

  16. Computer arithmetic and validity theory, implementation, and applications

    CERN Document Server

    Kulisch, Ulrich

    2013-01-01

    This is the revised and extended second edition of the successful basic book on computer arithmetic. It is consistent with the most recent standard developments in the field. The book shows how the arithmetic capability of the computer can be enhanced. The work is motivated by the desire and the need to improve the accuracy of numerical computing and to control the quality of the computed results (validity). The accuracy requirements for the elementary floating-point operations are extended to the customary product spaces of computations, including interval spaces. The mathematical properties

  17. Design and implementation of distributed spatial computing node based on WPS

    International Nuclear Information System (INIS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-01-01

    Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in the grid environment, this paper systematically investigates the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification by OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype of the Spatial Computing Node is implemented and the relevant verification work in this environment is completed

  18. Implementation of QR up- and downdating on a massively parallel computer

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Hansen, Per Christian; Madsen, Kaj

    1995-01-01

    We describe an implementation of QR up- and downdating on a massively parallel computer (the Connection Machine CM-200) and show that the algorithm maps well onto the computer. In particular, we show how the use of corrected semi-normal equations for downdating can be efficiently implemented. We also illustrate the use of our algorithms in a new LP algorithm.
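
    The CM-200 code itself is not shown in the record. As a small illustration of what QR up- and downdating means in practice, recent SciPy versions provide Givens-based routines for inserting and deleting rows of an existing factorization; note that this is a different technique from the corrected semi-normal equations used in the paper.

    # Illustration of QR up- and downdating with SciPy's row insertion/deletion
    # routines (not the paper's CM-200 implementation or its CSNE downdating).
    import numpy as np
    from scipy.linalg import qr, qr_insert, qr_delete

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 3))
    Q, R = qr(A)                                  # full QR of the initial matrix

    # Updating: add an observation (row) without refactoring from scratch.
    new_row = rng.standard_normal(3)
    Q1, R1 = qr_insert(Q, R, new_row, 2, which='row')
    assert np.allclose(Q1 @ R1, np.insert(A, 2, new_row, axis=0))

    # Downdating: remove that observation again.
    Q2, R2 = qr_delete(Q1, R1, 2, which='row')
    assert np.allclose(Q2 @ R2, A)
    print("up- and downdated factorizations verified")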

  19. An Exploratory Study of the Implementation of Computer Technology in an American Islamic Private School

    Science.gov (United States)

    Saleem, Mohammed M.

    2009-01-01

    This exploratory study of the implementation of computer technology in an American Islamic private school leveraged the case study methodology and ethnographic methods informed by symbolic interactionism and the framework of the Muslim Diaspora. The study focused on describing the implementation of computer technology and identifying the…

  20. Implementation of Keystroke Dynamics for Authentication in Computer Systems

    Directory of Open Access Journals (Sweden)

    S. V. Skuratov

    2010-06-01

    Full Text Available Implementation of keystroke dynamics in multifactor authentication systems is described in the article. An original access control system based on a totality of matchers is presented. Testing results and useful recommendations are also given.
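
    The matcher details are not given in the abstract. A minimal sketch of the general idea, assuming the features are per-position key hold and flight times and the decision is a z-score distance against an enrollment template, could look like this; the feature layout and threshold are illustrative assumptions.

    # Minimal keystroke-dynamics matcher sketch: features are typing-time measurements
    # (in ms); the template stores their means and standard deviations, and a probe is
    # accepted if its average z-score distance stays under a threshold.
    import statistics

    def template(samples):
        """samples: list of equally long feature vectors from enrollment typing."""
        cols = list(zip(*samples))
        means = [statistics.fmean(c) for c in cols]
        stds = [statistics.pstdev(c) or 1.0 for c in cols]   # avoid division by zero
        return means, stds

    def accept(probe, tmpl, threshold=1.5):
        means, stds = tmpl
        distance = sum(abs(p - m) / s for p, m, s in zip(probe, means, stds)) / len(probe)
        return distance <= threshold

    if __name__ == "__main__":
        enrollment = [[120, 95, 80, 140], [115, 100, 85, 150], [125, 90, 78, 145]]
        tmpl = template(enrollment)
        print(accept([118, 96, 82, 142], tmpl))   # genuine-looking attempt -> True
        print(accept([60, 200, 30, 300], tmpl))   # very different rhythm -> False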

  1. Implementation of Cloud Computing into VoIP

    Directory of Open Access Journals (Sweden)

    Floriana GEREA

    2012-08-01

    Full Text Available This article defines Cloud Computing and highlights key concepts, the benefits of using virtualization, its weaknesses, and ways of combining it with classical VoIP technologies applied to large-scale businesses. The analysis takes into consideration management strategies and resources for better customer orientation and risk management, all for sustaining the Service Level Agreement (SLA). An important issue in cloud computing can be security, and for this reason several security solutions are presented.

  2. Secure cloud computing implementation study for Singapore military operations

    OpenAIRE

    Guoquan, Lai

    2016-01-01

    Approved for public release; distribution is unlimited. Cloud computing benefits organizations in many ways. With characteristics such as resource pooling, broad network access, on-demand self-service, and rapid elasticity, an organization's overall IT management can be significantly reduced (in terms of labor, software, and hardware) and its work processes made more efficient. However, is cloud computing suitable for the Singapore Armed Forces (SAF)? How can the SAF migrate its traditional...

  3. Secure Cloud Computing Implementation Study For Singapore Military Operations

    Science.gov (United States)

    2016-09-01

    Figure 7. Basic Military Cloud Features Integrated into the OODA Loop. Figure 8. Process ...

  4. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.

  5. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hendrickson, Bruce [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wade, Doug [National Nuclear Security Administration (NNSA), Washington, DC (United States). Office of Advanced Simulation and Computing and Institutional Research and Development; Hoang, Thuc [National Nuclear Security Administration (NNSA), Washington, DC (United States). Computational Systems and Software Environment

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, studies of advanced nuclear weapons design and manufacturing processes, analyses of accident scenarios and weapons aging, and the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  6. Prolog as description and implementation language in computer science teaching

    DEFF Research Database (Denmark)

    Christiansen, Henning

    Prolog is a powerful pedagogical instrument for theoretical elements of computer science when used as combined description language and experimentation tool. A teaching methodology based on this principle has been developed and successfully applied in a context with a heterogeneous student population with uneven mathematical backgrounds. Definitional interpreters, compilers, and other models of computation are defined in a systematic way as Prolog programs, and as a result, formal descriptions become running prototypes that can be tested and modified by the students. These programs can ...

  7. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  8. Implementation of Fog Computing for Reliable E-Health Applications

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Mihaylov, Mihail Rumenov

    2015-01-01

    tasks, such as storage and data signal processing to the edge of the network, thus decreasing the latency associated with performing those tasks within the cloud. The research scenario is an e-Health laboratory implementation where the real-time processing is performed by the home PC, while...

  9. Constraints in Teacher Training for Computer Assisted Language Testing Implementation

    Science.gov (United States)

    Garcia Laborda, Jesus; Litzler, Mary Frances

    2011-01-01

    Many ELT examinations have gone online in the last few years and a large number of educational institutions have also started considering the possibility of implementing their own tests. This paper deals with the training of a group of 24 ELT teachers in the Region of Valencia (Spain). In 2007, the Ministry of Education provided funds to determine…

  10. Implementation of a data and computer security application | Bibu ...

    African Journals Online (AJOL)


  11. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  12. Computer implementation of an elastic-plastic concrete relationship

    International Nuclear Information System (INIS)

    Murray, D.W.; Chitnuyanondh, L.; Wong, C.

    1979-01-01

    The purpose of this paper is to describe the difficulties that arose, and the strategies that were developed to overcome these difficulties, during the incorporation of a relatively complex elastic-plastic concrete constitutive relationship into an existing computer code for the analysis of axisymmetric loading acting on thin shells of revolution. The program had the capability of elastic-plastic analysis using a von-Mises yield curve prior to any modification by the writers. (orig.)

  13. Selection and implementation of a laboratory computer system.

    Science.gov (United States)

    Moritz, V A; McMaster, R; Dillon, T; Mayall, B

    1995-07-01

    The process of selection of a pathology computer system has become increasingly complex as there are an increasing number of facilities that must be provided and stringent performance requirements under heavy computing loads from both human users and machine inputs. Furthermore, the continuing advances in software and hardware technology provide more options and innovative new ways of tackling problems. These factors taken together pose a difficult and complex set of decisions and choices for the system analyst and designer. The selection process followed by the Microbiology Department at Heidelberg Repatriation Hospital included examination of existing systems, development of a functional specification followed by a formal tender process. The successful tenderer was then selected using predefined evaluation criteria. The successful tenderer was a software development company that developed and supplied a system based on a distributed network using a SUN computer as the main processor. The software was written using Informix running on the UNIX operating system. This represents one of the first microbiology systems developed using a commercial relational database and fourth generation language. The advantages of this approach are discussed.

  14. Implementation of burnup in FERM nodal computer code

    International Nuclear Information System (INIS)

    Yoriyaz, H.; Nakata, H.

    1986-01-01

    In this work a spatial burnup scheme and feedback effects have been implemented into the FERM [1] ('Finite Element Response Matrix') program. The spatially dependent neutronic parameters have been considered at three levels: zonewise calculation, assemblywise calculation and pointwise calculation. The results have been compared with the results obtained by the CITATION [2] program and showed that the processing time in the FERM code is hundreds of times shorter and no significant difference has been observed in the assembly average power distribution. (Author) [pt

  15. Public policy and regulatory implications for the implementation of Opportunistic Cloud Computing Services for Enterprises

    DEFF Research Database (Denmark)

    Kuada, Eric; Olesen, Henning; Henten, Anders

    2012-01-01

    Opportunistic Cloud Computing Services (OCCS) is a social network approach to the provisioning and management of cloud computing services for enterprises. This paper discusses how public policy and regulations will impact on OCCS implementation. We rely on documented publicly available government and corporate policies on the adoption of cloud computing services and deduce the impact of these policies on their adoption of opportunistic cloud computing services. We conclude that there are regulatory challenges on data protection that raise issues for cloud computing adoption in general, and that the lack of a single globally accepted data protection standard poses some challenges for a very successful implementation of OCCS for companies. However, the direction of current public and corporate policies on cloud computing makes a good case for them to try out opportunistic cloud computing services.

  16. Export Controls: Implementation of the 1998 Legislative Mandate for High Performance Computers

    National Research Council Canada - National Science Library

    1999-01-01

    We found that most of the 938 proposed exports of high performance computers to civilian end users in countries of concern from February 3, 1998, when procedures implementing the 1998 authorization...

  17. Computing tools for implementing standards for single-case designs.

    Science.gov (United States)

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook-the WWC standards. These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  18. “Future Directions”: m-government computer systems accessed via cloud computing – advantages and possible implementations

    OpenAIRE

    Daniela LIŢAN

    2015-01-01

    In recent years, the activities of companies and Public Administration had been automated and adapted to the current information system. Therefore, in this paper, I will present and exemplify the benefits of m-government computer systems development and implementation (which can be accessed from mobile devices and which are specific to the workflow of Public Administrations) starting from the “experience” of e-government systems implementation in the context of their access and usage through ...

  19. Quantum computation: algorithms and implementation in quantum dot devices

    Science.gov (United States)

    Gamble, John King

    In this thesis, we explore several aspects of both the software and hardware of quantum computation. First, we examine the computational power of multi-particle quantum random walks in terms of distinguishing mathematical graphs. We study both interacting and non-interacting multi-particle walks on strongly regular graphs, proving some limitations on distinguishing powers and presenting extensive numerical evidence indicative of interactions providing more distinguishing power. We then study the recently proposed adiabatic quantum algorithm for Google PageRank, and show that it exhibits power-law scaling for realistic WWW-like graphs. Turning to hardware, we next analyze the thermal physics of two nearby two-dimensional electron gases (2DEGs), and show that an analogue of the Coulomb drag effect exists for heat transfer. In some distance and temperature regimes, this heat transfer is more significant than phonon dissipation channels. After that, we study the dephasing of two-electron states in a single silicon quantum dot. Specifically, we consider dephasing due to the electron-phonon coupling and charge noise, separately treating orbital and valley excitations. In an ideal system, dephasing due to charge noise is strongly suppressed due to a vanishing dipole moment. However, introduction of disorder or anharmonicity leads to large effective dipole moments, and hence possibly strong dephasing. Building on this work, we next consider more realistic, structurally disordered systems. We present experiment and theory, which demonstrate energy levels that vary with quantum dot translation, implying a structurally disordered system. Finally, we turn to the issues of valley mixing and valley-orbit hybridization, which occur due to atomic-scale disorder at quantum well interfaces. We develop a new theoretical approach to study these effects, which we name the disorder-expansion technique. We demonstrate that this method successfully reproduces atomistic tight-binding techniques

  20. The Implementation and Use of Computers in Education in Brazil: Niteroi City/Rio de Janeiro

    Science.gov (United States)

    de Fatima D'Assumpcao Castro, Maria; Alves, Luiz Anastacio

    2007-01-01

    The introduction of computer technology has touched off an actual revolution for teaching and learning activities. In the present study, we investigated the impact of the implementation and use of computers in the public school system, from the elementary grades to high school, in Niteroi city, Rio de Janeiro (Brazil). This city, with a total…

  1. Implementing iRound: A Computer-Based Auditing Tool.

    Science.gov (United States)

    Brady, Darcie

    Many hospitals use rounding or auditing as a tool to help identify gaps and needs in quality and process performance. Some hospitals are also using rounding to help improve patient experience. It is known that purposeful rounding helps improve Hospital Consumer Assessment of Healthcare Providers and Systems scores by helping manage patient expectations, provide service recovery, and recognize quality caregivers. Rounding works when a standard method is used across the facility, where data are comparable and trustworthy. This facility had a pen-and-paper process in place that made data reporting difficult, created a silo culture between departments, and most audits and rounds were completed differently on each unit. It was recognized that this facility needed to standardize the rounding and auditing process. The tool created by the Advisory Board called iRound was chosen as the tool this facility would use for patient experience rounds as well as process and quality rounding. The success of the iRound tool in this facility depended on several factors that started many months before implementation to current everyday usage.

  2. Implementation of a Novel Educational Modeling Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sara Ouahabi

    2014-12-01

    Full Text Available The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers' needs. Due to its advantages, Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. For education, the cloud is used to manage the large volume of educational resources produced across many universities. Keeping interoperability between content in an inter-university Cloud is not always easy. Diffusion of pedagogical contents on the Cloud by different E-Learning institutions leads to heterogeneous content, which influences the quality of teaching offered by the university to teachers and learners. For this reason comes the idea of using IMS-LD coupled with metadata in the cloud. This paper presents the implementation of our previous educational modeling by combining an application in J2EE with the Reload editor that consists of modeling heterogeneous content in the cloud. The new approach that we followed focuses on keeping interoperability between Educational Cloud content for teachers and learners and facilitates the task of identification, reuse, sharing, and adaptation of teaching and learning resources in the Cloud.

  3. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    OpenAIRE

    Ronnie Cheung; Calvin Wan

    2011-01-01

    We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer l...

  4. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  5. More scalability, less pain: A simple programming model and its implementation for extreme computing

    International Nuclear Information System (INIS)

    Lusk, E.L.; Pieper, S.C.; Butler, R.M.

    2010-01-01

    This is the story of a simple programming model, its implementation for extreme computing, and a breakthrough in nuclear physics. A critical issue for the future of high-performance computing is the programming model to use on next-generation architectures. Described here is a promising approach: program very large machines by combining a simplified programming model with a scalable library implementation. The presentation takes the form of a case study in nuclear physics. The chosen application addresses fundamental issues in the origins of our Universe, while the library developed to enable this application on the largest computers may have applications beyond this one.
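
    The scalable library described in the paper is not reproduced here. A toy sketch of the underlying manager/worker programming model, with Python's multiprocessing pool standing in for the MPI-based library, is shown below; the task definition is illustrative.

    # Toy manager/worker sketch of the "simplified programming model + scalable
    # library" idea: the application only defines independent tasks and a function
    # to run on each; the library (here, a multiprocessing pool standing in for the
    # scalable library of the paper) handles distribution and load balancing.
    from multiprocessing import Pool

    def energy_term(task):
        """Stand-in for an expensive physics kernel evaluated for one configuration."""
        i, samples = task
        return i, sum((x * i) ** 0.5 for x in range(1, samples))

    if __name__ == "__main__":
        tasks = [(i, 50_000) for i in range(1, 65)]        # independent work units
        with Pool(processes=8) as pool:                    # the "library" does the scheduling
            results = dict(pool.imap_unordered(energy_term, tasks))
        print(f"accumulated result over {len(results)} tasks: {sum(results.values()):.3e}")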

  6. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Kindratenko

    2009-01-01

    Full Text Available We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× as compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
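
    The SRC-6 kernels are not shown in the record. The computational core being accelerated is binned pair counting of angular separations, which the plain NumPy sketch below illustrates; estimator normalization such as Landy-Szalay is omitted, and the binning choices are illustrative.

    # Core kernel of the two-point angular correlation function: histogram the
    # angular separations of all point pairs. Plain NumPy sketch of the computation
    # that the reconfigurable processors accelerate in the paper.
    import numpy as np

    def pair_count_histogram(ra, dec, bins):
        """ra, dec in radians; returns pair counts per angular-separation bin."""
        x = np.cos(dec) * np.cos(ra)
        y = np.cos(dec) * np.sin(ra)
        z = np.sin(dec)
        xyz = np.stack([x, y, z], axis=1)
        dots = np.clip(xyz @ xyz.T, -1.0, 1.0)                   # cosines of separations
        theta = np.arccos(dots[np.triu_indices(len(ra), k=1)])   # unique pairs only
        counts, _ = np.histogram(theta, bins=bins)
        return counts

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n = 500
        ra = rng.uniform(0, 2 * np.pi, n)
        dec = np.arcsin(rng.uniform(-1, 1, n))          # uniform on the sphere
        bins = np.radians(np.linspace(0.0, 180.0, 19))  # 10-degree bins
        print(pair_count_histogram(ra, dec, bins))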

  7. Talbot's method for the numerical inversion of Laplace transforms: an implementation for personal computers

    International Nuclear Information System (INIS)

    Garratt, T.J.

    1989-05-01

    Safety assessments of radioactive waste disposal require efficient computer models for the important processes. The present paper is based on an efficient computational technique which can be used to solve a wide variety of safety assessment models. It involves the numerical inversion of analytical solutions to the Laplace-transformed differential equations using a method proposed by Talbot. This method has been implemented on a personal computer in a user-friendly manner. The steps required to implement a particular transform and run the program are outlined. Four examples are described which illustrate the flexibility, accuracy and efficiency of the program. The improvements in computational efficiency described in this paper have application to the probabilistic safety assessment codes ESCORT and MASCOT which are currently under development. Also, it is hoped that the present work will form the basis of software for personal computers which could be used to demonstrate safety assessment procedures to a wide audience. (author)
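
    The personal-computer program of the report is not available here. Recent versions of the mpmath library include a Talbot-contour inverter, which illustrates the same numerical Laplace inversion on a transform whose inverse is known in closed form.

    # Numerical Laplace inversion by Talbot's method, via mpmath's built-in inverter,
    # checked against a transform with a known analytical inverse: F(p) = 1/(p+1),
    # whose inverse is exp(-t). This illustrates the method, not the report's program.
    import mpmath as mp

    mp.mp.dps = 25                                  # working precision (decimal digits)

    def F(p):
        return 1 / (p + 1)                          # Laplace transform of exp(-t)

    for t in (0.5, 1.0, 2.0):
        numeric = mp.invertlaplace(F, t, method='talbot')
        exact = mp.e**(-t)
        print(t, numeric, float(abs(numeric - exact)))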

  8. The Observation of Bahasa Indonesia Official Computer Terms Implementation in Scientific Publication

    Science.gov (United States)

    Gunawan, D.; Amalia, A.; Lydia, M. S.; Muthaqin, M. I.

    2018-03-01

    The government of the Republic of Indonesia issued a regulation to substitute the computer terms in foreign languages that had been used earlier with official computer terms in Bahasa Indonesia. This regulation was stipulated in Presidential Decree No. 2 of 2001 concerning the introduction of official computer terms in Bahasa Indonesia (known as Senarai Padanan Istilah/SPI). After sixteen years, the people of Indonesia, particularly academics, should have implemented the official computer terms in their official publications. This observation is conducted to discover how the official computer terms are used in scientific publications written in Bahasa Indonesia. The data sources used in this observation are publications by academics, particularly in the computer science field. The method used in the observation is divided into four stages. The first stage is metadata harvesting by using the Open Archive Initiative - Protocol for Metadata Harvesting (OAI-PMH). The second is converting the harvested documents (in PDF format) to plain text. The third stage is text preprocessing as the preparation for string matching. The final stage is searching for the official computer terms, based on the 629 SPI terms, using the Boyer-Moore algorithm. We observed that there are 240,781 foreign computer terms in 1,156 scientific publications from six universities. This result shows that foreign computer terms are still widely used by academics.
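
    As a small illustration of the final stage described above, the sketch below scans preprocessed plain text for terms using a bad-character Boyer-Moore search; the two-entry term list is an illustrative stand-in for the 629 SPI pairs used in the study.

    # Sketch of the final observation stage: count term occurrences in preprocessed
    # plain text with a Boyer-Moore search using the bad-character rule only.

    def boyer_moore_count(text: str, pattern: str) -> int:
        """Count occurrences of pattern in text."""
        last = {ch: i for i, ch in enumerate(pattern)}       # rightmost position of each char
        m, n = len(pattern), len(text)
        count, s = 0, 0
        while s <= n - m:
            j = m - 1
            while j >= 0 and pattern[j] == text[s + j]:      # compare right to left
                j -= 1
            if j < 0:                                        # full match at shift s
                count += 1
                s += 1
            else:                                            # bad-character shift, at least 1
                s += max(1, j - last.get(text[s + j], -1))
        return count

    if __name__ == "__main__":
        text = "unduh berkas lalu unggah kembali; jangan lupa tetikus dan papan ketik"
        for term in ("unduh", "tetikus"):                    # illustrative Bahasa Indonesia terms
            print(term, boyer_moore_count(text.lower(), term))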

  9. Patent law for computer scientists steps to protect computer-implemented inventions

    CERN Document Server

    Closa, Daniel; Giemsa, Falk; Machek, Jörg

    2010-01-01

    Written from over 70 years of experience, this overview explains patent laws across Europe, the US and Japan, and teaches readers how to think from a patent examiner's perspective. Over 10 detailed case studies are presented from different computer science applications.

  10. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    Directory of Open Access Journals (Sweden)

    Watthanai Pinthong

    2016-07-01

    Full Text Available Development of high-throughput technologies, such as Next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirement. Consequently, a massive amount of experiment data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC is required for the analysis and interpretation. However, the HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without a need to purchase and maintain one such as cloud computing services and grid computing system. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC as a job distributor and data manager combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tools (BLAST to the BOINC system. Sequencing results from Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software.

  11. Research in advanced formal theorem-proving techniques. [design and implementation of computer languages

    Science.gov (United States)

    Raphael, B.; Fikes, R.; Waldinger, R.

    1973-01-01

    The results of a project aimed at the design and implementation of computer languages to aid in expressing problem-solving procedures in several areas of artificial intelligence, including automatic programming, theorem proving, and robot planning, are summarised. The principal results of the project were the design and implementation of two complete systems, QA4 and QLISP, and their preliminary experimental use. The various applications of both QA4 and QLISP are given.

  12. Computer Assisted Implementation of the 1999 WHO/ISH Hypertension Guidelines

    Czech Academy of Sciences Publication Activity Database

    Peleška, Jan; Zvára Jr., Karel; Tomečková, Marie; Zvárová, Jana

    20 Suppl. 4, - (2002), s. 86 ISSN 0263-6352. [Scientific Meeting of the International Society of Hypertension /19./, Europeean Meeting on Hypertension /12./. 23.06.2002-27.06.2002, Prague] R&D Projects: GA MŠk LN00B107 Grant - others:MGT(XE) EC 4.FP Keywords : guidelines for hypertension * implementation * computer assisted implementation Subject RIV: BA - General Mathematics

  13. Computational implementation of the multi-mechanism deformation coupled fracture model for salt

    International Nuclear Information System (INIS)

    Koteras, J.R.; Munson, D.E.

    1996-01-01

    The Multi-Mechanism Deformation (M-D) model for creep in rock salt has been used in three-dimensional computations for the Waste Isolation Pilot Plant (WIPP), a potential waste repository. These computational studies are relied upon to make key predictions about long-term behavior of the repository. Recently, the M-D model was extended to include creep-induced damage. The extended model, the Multi-Mechanism Deformation Coupled Fracture (MDCF) model, is considerably more complicated than the M-D model and required a different technology from that of the M-D model for a computational implementation

  14. Short-term effects of implemented high intensity shoulder elevation during computer work

    DEFF Research Database (Denmark)

    Larsen, Mette K.; Samani, Afshin; Madeleine, Pascal

    2009-01-01

    BACKGROUND: Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction ... computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a pause with preceding high intensity contraction requires further investigation before high intensity shoulder elevations can ...

  15. Computational implementation of a systems prioritization methodology for the Waste Isolation Pilot Plant: A preliminary example

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States). Dept. of Mathematics; Anderson, D.R. [Sandia National Labs., Albuquerque, NM (United States). WIPP Performance Assessments Departments; Baker, B.L. [Technadyne Engineering Consultants, Albuquerque, NM (United States)] [and others

    1996-04-01

    A systems prioritization methodology (SPM) is under development to provide guidance to the US DOE on experimental programs and design modifications to be supported in the development of a successful licensing application for the Waste Isolation Pilot Plant (WIPP) for the geologic disposal of transuranic (TRU) waste. The purpose of the SPM is to determine the probabilities that the implementation of different combinations of experimental programs and design modifications, referred to as activity sets, will lead to compliance. Appropriate tradeoffs between compliance probability, implementation cost and implementation time can then be made in the selection of the activity set to be supported in the development of a licensing application. Descriptions are given for the conceptual structure of the SPM and the manner in which this structure determines the computational implementation of an example SPM application. Due to the sophisticated structure of the SPM and the computational demands of many of its components, the overall computational structure must be organized carefully to provide the compliance probabilities for the large number of activity sets under consideration at an acceptable computational cost. Conceptually, the determination of each compliance probability is equivalent to a large numerical integration problem. 96 refs., 31 figs., 36 tabs.

  16. Computational implementation of a systems prioritization methodology for the Waste Isolation Pilot Plant: A preliminary example

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-04-01

    A systems prioritization methodology (SPM) is under development to provide guidance to the US DOE on experimental programs and design modifications to be supported in the development of a successful licensing application for the Waste Isolation Pilot Plant (WIPP) for the geologic disposal of transuranic (TRU) waste. The purpose of the SPM is to determine the probabilities that the implementation of different combinations of experimental programs and design modifications, referred to as activity sets, will lead to compliance. Appropriate tradeoffs between compliance probability, implementation cost and implementation time can then be made in the selection of the activity set to be supported in the development of a licensing application. Descriptions are given for the conceptual structure of the SPM and the manner in which this structure determines the computational implementation of an example SPM application. Due to the sophisticated structure of the SPM and the computational demands of many of its components, the overall computational structure must be organized carefully to provide the compliance probabilities for the large number of activity sets under consideration at an acceptable computational cost. Conceptually, the determination of each compliance probability is equivalent to a large numerical integration problem. 96 refs., 31 figs., 36 tabs
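
    The WIPP calculations themselves are far beyond a short example, but the closing remark that each compliance probability amounts to a numerical integration can be illustrated with a toy Monte Carlo estimate: sample the uncertain inputs, evaluate a performance function, and count the fraction of samples that meet a limit. The performance function, distributions, and limit below are illustrative assumptions only.

    # Toy illustration of "compliance probability as a numerical integration problem":
    # sample uncertain inputs, evaluate a (here, trivially cheap) performance model,
    # and estimate the probability that the computed release stays below a limit.
    import random

    def release(permeability, solubility, intrusion_rate):
        """Stand-in performance model: a monotone combination of uncertain inputs."""
        return 1e3 * permeability * solubility * (1.0 + 5.0 * intrusion_rate)

    def compliance_probability(limit=1.0, n_samples=100_000, seed=42):
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_samples):
            perm = rng.lognormvariate(mu=-7.0, sigma=1.0)     # uncertain parameters
            sol = rng.uniform(0.1, 2.0)
            intr = rng.expovariate(10.0)
            if release(perm, sol, intr) <= limit:
                hits += 1
        return hits / n_samples

    if __name__ == "__main__":
        print(f"estimated compliance probability: {compliance_probability():.3f}")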

  17. Implementation of generalized measurements with minimal disturbance on a quantum computer

    International Nuclear Information System (INIS)

    Decker, T.; Grassl, M.

    2006-01-01

    We consider the problem of efficiently implementing a generalized measurement on a quantum computer. Using methods from representation theory, we exploit symmetries of the states we want to identify or, respectively, symmetries of the measurement operators. In order to allow the information to be extracted sequentially, the disturbance of the quantum state due to the measurement should be minimal. (Abstract Copyright [2006], Wiley Periodicals, Inc.)

  18. How to Implement Rigorous Computer Science Education in K-12 Schools? Some Answers and Many Questions

    Science.gov (United States)

    Hubwieser, Peter; Armoni, Michal; Giannakos, Michail N.

    2015-01-01

    Aiming to collect various concepts, approaches, and strategies for improving computer science education in K-12 schools, we edited this second special issue of the "ACM TOCE" journal. Our intention was to collect a set of case studies from different countries that would describe all relevant aspects of specific implementations of…

  19. Designing reversible arithmetic, logic circuit to implement micro-operation in quantum computation

    International Nuclear Information System (INIS)

    Kalita, Gunajit; Saikia, Navajit

    2016-01-01

    Future computing is desired to be more powerful with lower power consumption. That is why quantum computing has been a key area of research for quite some time and is getting more and more attention. Quantum logic being reversible, a significant number of contributions on reversible logic has been reported in recent times. Reversible circuits are essential parts of quantum computers, and hence their designs are of great importance. In this paper, designs of reversible circuits are proposed using a recently proposed reversible gate for arithmetic and logic operations to implement various micro-operations (simple add and subtract, add with carry, subtract with borrow, transfer, incrementing, decrementing, etc., and logic operations like XOR, XNOR, complementing, etc.) in a reversible computer such as a quantum computer. The two new reversible designs proposed here for the half adder and full adder are also used in the presented reversible circuits to implement various micro-operations. The quantum costs of these designs are comparable. Many of the implemented micro-operations have not been seen in the previous literature. The performances of the proposed circuits are compared with existing designs wherever available. (paper)
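
    The new reversible gate of the paper is not specified in the abstract. To show the style of micro-operation design it refers to, the sketch below classically simulates the standard reversible full adder built from CNOT and Toffoli gates and verifies it over all inputs.

    # Classical simulation of a standard reversible full adder built from CNOT and
    # Toffoli gates (not the new gate proposed in the paper). Register: [a, b, cin, 0];
    # afterwards the third bit holds the sum and the fourth the carry, and no
    # information is discarded.

    def cnot(bits, c, t):
        bits[t] ^= bits[c]

    def toffoli(bits, c1, c2, t):
        bits[t] ^= bits[c1] & bits[c2]

    def reversible_full_adder(a, b, cin):
        bits = [a, b, cin, 0]
        toffoli(bits, 0, 1, 3)   # ancilla ^= a AND b
        cnot(bits, 0, 1)         # b <- a XOR b
        toffoli(bits, 1, 2, 3)   # ancilla ^= (a XOR b) AND cin
        cnot(bits, 1, 2)         # cin <- a XOR b XOR cin  (the sum)
        return bits              # [a, a^b, sum, carry]

    if __name__ == "__main__":
        for a in (0, 1):
            for b in (0, 1):
                for cin in (0, 1):
                    _, _, s, c = reversible_full_adder(a, b, cin)
                    assert s == (a ^ b ^ cin) and c == (a & b) | (a & cin) | (b & cin)
        print("reversible full adder verified for all 8 inputs")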

  20. Capabilities and Advantages of Cloud Computing in the Implementation of Electronic Health Record.

    Science.gov (United States)

    Ahmadi, Maryam; Aslani, Nasim

    2018-01-01

    With regard to the high cost of the Electronic Health Record (EHR), in recent years the use of new technologies, in particular cloud computing, has increased. The purpose of this study was to systematically review the studies conducted in the field of cloud computing. The present study was a systematic review conducted in 2017. The search was performed in the Scopus, Web of Science, IEEE, PubMed and Google Scholar databases using combinations of keywords. From the 431 articles retrieved at first, 27 articles were selected for review after applying the inclusion and exclusion criteria. Data gathering was done with a self-made checklist and the data were analyzed by the content analysis method. The findings of this study showed that cloud computing is a very widespread technology. It covers domains such as cost, security and privacy, scalability, mutual performance and interoperability, implementation platform and independence of cloud computing, search and exploration capability, error reduction and quality improvement, structure, flexibility, and sharing ability, and it can be effective for the electronic health record. According to the findings of the present study, the capabilities of cloud computing are useful in implementing EHR in a variety of contexts. It also provides wide opportunities for managers, analysts and providers of health information systems. Considering the advantages and domains of cloud computing in the establishment of EHR, it is recommended to use this technology.

  1. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  2. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    Science.gov (United States)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.

  3. A Comparison of Sequential and GPU Implementations of Iterative Methods to Compute Reachability Probabilities

    Directory of Open Access Journals (Sweden)

    Elise Cormie-Bowins

    2012-10-01

    Full Text Available We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA so that they can be run on a NVIDIA graphics processing unit (GPU. From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
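
    As a small illustration of the sequential baseline discussed above, the sketch below applies the Jacobi iteration to a made-up system Ax = b with the I minus P structure that such reachability computations produce; the CUDA versions are not reproduced.

    # Sequential Jacobi iteration for Ax = b, the kind of system the reachability
    # computation reduces to. The small diagonally dominant example is made up.
    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=10_000):
        D = np.diag(A)
        R = A - np.diagflat(D)                   # off-diagonal part
        x = np.zeros_like(b)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D              # simultaneous update of all components
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new
            x = x_new
        return x

    if __name__ == "__main__":
        A = np.array([[ 1.0, -0.3,  0.0],
                      [-0.5,  1.0, -0.2],
                      [ 0.0, -0.4,  1.0]])
        b = np.array([0.6, 0.3, 0.5])
        x = jacobi(A, b)
        print(x, np.allclose(A @ x, b))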

  4. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  5. Speed challenge: a case for hardware implementation in soft-computing

    Science.gov (United States)

    Daud, T.; Stoica, A.; Duong, T.; Keymeulen, D.; Zebulum, R.; Thomas, T.; Thakoor, A.

    2000-01-01

    For over a decade, JPL has been actively involved in soft computing research on theory, architecture, applications, and electronics hardware. The driving force in all our research activities, in addition to the potential enabling technology promise, has been creation of a niche that imparts orders of magnitude speed advantage by implementation in parallel processing hardware with algorithms made especially suitable for hardware implementation. We review our work on neural networks, fuzzy logic, and evolvable hardware with selected application examples requiring real time response capabilities.

  6. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG algorithm. The new method replaces the arctangent with the slope computation and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs, by considerably reducing the area (thus increasing the level of parallelism, while maintaining very close classification accuracy compared to the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
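
    The exact bin layout and fixed-point details of the paper are not given in the abstract. The core idea of avoiding the arctangent can be sketched by comparing the gradient against precomputed bin-boundary slopes via cross-multiplication; the check below confirms that this reproduces arctangent-based binning for the common nine-bin unsigned HOG configuration, which is an assumption here rather than the paper's exact configuration.

    # Orientation binning without an arctangent: instead of computing atan2(gy, gx),
    # compare the gradient against precomputed bin boundaries via cross-multiplication
    # (gy*cos(theta) >= gx*sin(theta)), which maps well to fixed-point hardware.
    import math, random

    N_BINS = 9
    BOUNDARIES = [(math.sin(math.radians(20 * k)), math.cos(math.radians(20 * k)))
                  for k in range(1, N_BINS)]                 # 20, 40, ..., 160 degrees

    def bin_by_slope(gx, gy):
        if gy < 0 or (gy == 0 and gx < 0):                   # fold angle into [0, 180)
            gx, gy = -gx, -gy
        return sum(1 for s, c in BOUNDARIES if gy * c - gx * s >= 0)

    def bin_by_arctan(gx, gy):
        angle = math.degrees(math.atan2(gy, gx)) % 180.0
        return int(angle // 20) % N_BINS

    if __name__ == "__main__":
        random.seed(0)
        for _ in range(10_000):
            gx, gy = random.randint(-100, 100), random.randint(-100, 100)
            if gx == 0 and gy == 0:
                continue
            assert bin_by_slope(gx, gy) == bin_by_arctan(gx, gy)
        print("slope-based binning matches arctan-based binning on random gradients")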

  7. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    Science.gov (United States)

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.

  8. Implementation of SSYST-1 on the GRS computer and first verification calculations

    International Nuclear Information System (INIS)

    Schubert, J.D.; Ullrich, R.

    1981-09-01

    The program system SSYST-1, being developed in Karlsruhe, has been implemented on the AMDAHL computer together with special modules for eccentric stress and probabilistic analysis. First computations for the REBEKA-3 experiment and other test examples, done to verify the new implementation, showed satisfactory results, especially a good correspondence with measurements for the instant of bursting, the bursting temperature, and the temperature difference around the periphery. Initial difficulties arose from using the model for circumferentially varying stress and temperature analyses. The reason was that this module is intended for the program SSYST-2, and thus its use in SSYST-1 led to interface problems which, however, are now resolved. (orig.) [de

  9. Computational Fluid Dynamics Simulation of Combustion Instability in Solid Rocket Motor : Implementation of Pressure Coupled Response Function

    OpenAIRE

    S. Saha; D. Chakraborty

    2016-01-01

    Combustion instability in solid propellant rocket motor is numerically simulated by implementing propellant response function with quasi steady homogeneous one dimensional formulation. The convolution integral of propellant response with pressure history is implemented through a user defined function in commercial computational fluid dynamics software. The methodology is validated against literature reported motor test and other simulation results. Computed amplitude of pressure fluctuations ...

  10. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  11. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes...

  12. Design and implementation of the one-step MSD adder of optical computer.

    Science.gov (United States)

    Song, Kai; Yan, Liping

    2012-03-01

    On the basis of the symmetric encoding algorithm for the modified signed-digit (MSD) number representation, a 7*7 truth table that can be realized with optical methods was developed. Based on this truth table, the optical path structures and circuit implementations of the one-step MSD adder of the ternary optical computer (TOC) were designed. Experiments show that the scheme is correct, feasible, and efficient. © 2012 Optical Society of America

  13. IMPLEMENTING THE COMPUTER-BASED NATIONAL EXAMINATION IN INDONESIAN SCHOOLS: THE CHALLENGES AND STRATEGIES

    Directory of Open Access Journals (Sweden)

    Heri Retnawati

    2017-12-01

    In line with technological development, the computer-based national examination (CBNE) has become an urgent matter as its implementation faces various challenges, especially in developing countries. Strategies in implementing CBNE are thus needed to face the challenges. The aim of this research was to analyse the challenges and strategies of Indonesian schools in implementing CBNE. This research was qualitative phenomenological in nature. The data were collected through a questionnaire and a focus group discussion. The research participants were teachers who were test supervisors and technicians at junior high schools and senior high schools (i.e. Level 1 and 2) and vocational high schools implementing CBNE in Yogyakarta, Indonesia. The data were analysed using the Bogdan and Biklen model. The results indicate that (1) in implementing CBNE, the schools should initially make efforts to provide the electronic equipment supporting it; (2) the implementation of CBNE is challenged by problems concerning the Internet and the electricity supply; (3) the test supervisors have to learn their duties by themselves and (4) the students are not yet familiar with the beneficial use of information technology. To deal with such challenges, the schools employed strategies by making efforts to provide the standard electronic equipment through collaboration with the students’ parents and improving the curriculum content by adding information technology as a school subject.

  14. Implementation and adaption of the Computer Code ECOSYS/EXCEL for Austria as OECOSYS/EXCEL

    International Nuclear Information System (INIS)

    Hick, H.; Suda, M.; Mueck, K.

    1998-03-01

    During 1989, under contract to the Austrian Chamber of the Federal Chancellor, department VII, the radioecological forecast model OECOSYS was implemented by the Austrian Research Centre Seibersdorf on a VAX computer using VAX Fortran. OECOSYS allows the prediction of the consequences of a large-scale contamination event. During 1992, under contract to the Austrian Federal Ministry of Health, Sports and Consumer Protection, department III, OECOSYS - in the version of 1989 - was implemented on PCs in Seibersdorf and at the Ministry using OS/2 and Microsoft Fortran. In March 1993, the Ministry ordered an update, which had become necessary, together with the evaluation of two exercise scenarios. Since that time the prognosis model with its auxiliary program and communication facilities has been kept on stand-by, and yearly exercises are performed to maintain its readiness. The current report describes the implementation and adaptation to Austrian conditions of the newly available EXCEL version of the German ECOSYS prognosis model as OECOSYS. (author)

  15. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    Directory of Open Access Journals (Sweden)

    Ronnie Cheung

    2011-06-01

    We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer literacy projects. The completed assignments, projects, and self-reflection reports demonstrate that the students were able to achieve the learning outcomes of a computer literacy course in multimedia development. The students were able to assess the effectiveness of a variety of media through the development of media presentations in a web-based, social-networking environment. In the collaborative and social-networking environment, students were able to collaborate and communicate with their team members to solve problems, resolve conflicts, make decisions, and work as a team to complete tasks. Our experience has shown that social networking environments are effective for computer literacy education, and the development of the new media is emerging as the core knowledge for computer literacy education.

  16. Implementation of a solution Cloud Computing with MapReduce model

    International Nuclear Information System (INIS)

    Baya, Chalabi

    2014-01-01

    In recent years, large scale computer systems have emerged to meet the demands of high storage, supercomputing, and applications using very large data sets. The emergence of Cloud Computing offers the potential for analysis and processing of large data sets. MapReduce is the most popular programming model used to support the development of such applications. It was initially designed by Google for large-scale processing in its datacenters, to provide Web search services with rapid response and high availability. In this paper we test the K-means clustering algorithm in a Cloud Computing environment. This algorithm is implemented on MapReduce. It has been chosen for its characteristics, which are representative of many iterative data analysis algorithms. We then modify the CloudSim framework to simulate the MapReduce execution of K-means clustering on different Cloud Computing configurations, depending on their size and the characteristics of the target platforms. The experiments show that the implementation of K-means clustering gives good results, especially for large data sets, and that the Cloud infrastructure has an influence on these results
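
    One K-means iteration maps naturally onto MapReduce: the map step assigns each point to its nearest centroid, and the reduce step averages the points assigned to each centroid. The single-machine Python sketch below illustrates that decomposition only; it is not the paper's MapReduce/CloudSim setup, and the array shapes and example data are assumptions.

        import numpy as np

        def kmeans_map(points, centroids):
            # Map step: emit (nearest-centroid index, point) pairs.
            for p in points:
                idx = int(np.argmin(((centroids - p) ** 2).sum(axis=1)))
                yield idx, p

        def kmeans_reduce(pairs, centroids):
            # Reduce step: average the points assigned to each centroid.
            k, dim = centroids.shape
            sums, counts = np.zeros((k, dim)), np.zeros(k)
            for idx, p in pairs:
                sums[idx] += p
                counts[idx] += 1
            new = centroids.copy()              # empty clusters keep their old centroid
            nonempty = counts > 0
            new[nonempty] = sums[nonempty] / counts[nonempty][:, None]
            return new

        points = np.random.rand(1000, 2)
        centroids = points[np.random.choice(len(points), 3, replace=False)]
        for _ in range(20):
            centroids = kmeans_reduce(kmeans_map(points, centroids), centroids)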

  17. Parallel Implementation of Triangular Cellular Automata for Computing Two-Dimensional Elastodynamic Response on Arbitrary Domains

    Science.gov (United States)

    Leamy, Michael J.; Springer, Adam C.

    In this research we report the parallel implementation of a Cellular Automata-based simulation tool for computing elastodynamic response on complex, two-dimensional domains. Elastodynamic simulation using Cellular Automata (CA) has recently been presented as an alternative, inherently object-oriented technique for accurately and efficiently computing linear and nonlinear wave propagation in arbitrarily-shaped geometries. The local, autonomous nature of the method should lead to straightforward and efficient parallelization. We address this notion on symmetric multiprocessor (SMP) hardware using a Java-based object-oriented CA code implementing triangular state machines (i.e., automata) and the MPI bindings written in Java (MPJ Express). We use MPJ Express to reconfigure our existing CA code to distribute a domain's automata to the cores present on a dual quad-core shared-memory system (eight total processors). We note that this message-passing parallelization strategy is directly applicable to cluster computing, which will be the focus of follow-on research. Results on the shared-memory platform indicate nearly ideal, linear speed-up. We conclude that the CA-based elastodynamic simulator is easily configured to run in parallel, and yields excellent speed-up on SMP hardware.

  18. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector products in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up a calculation process that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on graphics processing units (GPUs).

  19. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode and codes are optimized for single core computing only. However, the increasing complexity in the power grid models requires more intensive computation. The traditional simulation tools will soon not be able to meet the grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large size state estimation problems within one second and achieve a near-linear speedup of 9,800 with 10,000 cores for contingency analysis application. The performance evaluation is presented to show its effectiveness.

  20. An efficient hysteresis modeling methodology and its implementation in field computation applications

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)

    2017-07-15

    Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing the ensemble of triangular sub-regions hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended for 3D tetra-hedra sub-volumes. - Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNN) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach on discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.

  1. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road

    Science.gov (United States)

    Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka

    2015-01-01

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on “on-demand payment” for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible. PMID:26230400

  2. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.

    Directory of Open Access Journals (Sweden)

    Iñaki Bildosola

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible.

  3. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.

    Science.gov (United States)

    Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka

    2015-01-01

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible.

  4. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  5. A C++11 implementation of arbitrary-rank tensors for high-performance computing

    Science.gov (United States)

    Aragón, Alejandro M.

    2014-11-01

    This article discusses an efficient implementation of tensors of arbitrary rank by using some of the idioms introduced by the recently published C++ ISO Standard (C++11). With the aim of providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from which vectors, matrices, and even higher-order tensors can be created. An expression template facility is also built around the array class template to provide convenient mathematical syntax. As a result, by using templates, an extra high-level layer is added to the C++ language when dealing with algebraic objects and their operations, without compromising performance. The implementation is tested running on both CPU and GPU.

  6. COMPUTER EVALUATION OF SKILLS FORMATION QUALITY IN THE IMPLEMENTATION OF COMPETENCE-BASED APPROACH TO LEARNING

    Directory of Open Access Journals (Sweden)

    Vitalia A. Zhuravleva

    2014-01-01

    The article deals with the problem of effectively organizing skills formation as an important part of the competence-based approach in education, implemented via the educational standards of the new generation. The solution of the problem suggests using computer tools to assess the quality of skills formation, based on the proposed model of the problem. This paper proposes an approach to creating a model for assessing the level of skills formation in knowledge management systems, based on mathematical modelling methods. Attention is paid to the evaluation strategy and the assessment technology, which is based on the use of the rules of fuzzy mathematics. An algorithmic implementation of the proposed model for evaluating the quality of skills development is shown as well.
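
    A very small sketch of the fuzzy-rule idea is given below: triangular membership functions grade a raw test score into linguistic levels, and a weighted average defuzzifies them into a skill-formation level. The membership bounds, rule outputs, and the single-score input are invented for illustration and are not the article's actual model.

        def tri(x, a, b, c):
            # Triangular membership function with support [a, c] and peak at b.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def skill_formation_level(score):
            # Map a 0-100 test score to a fuzzy skill-formation level in [0, 1].
            low    = tri(score, -1, 0, 50)
            medium = tri(score, 30, 55, 80)
            high   = tri(score, 60, 100, 101)
            rules = {0.2: low, 0.6: medium, 0.95: high}   # assumed rule outputs
            total = sum(rules.values())
            return sum(level * mu for level, mu in rules.items()) / total if total else 0.0

        print(skill_formation_level(72))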

  7. Implementing of AMPX-II system for a univac computer neutron cross-section libraries

    International Nuclear Information System (INIS)

    Sancho, J.; Verdu, G.; Serradell, V.

    1984-01-01

    The AMPX-II system, developed at ORNL, consists of a modular set of computer programs for the generation and handling of several nuclear data libraries. The processing starts from the ENDF/B library. In this paper we refer mainly to the modules related to neutron cross-section libraries: master, working and weighted. These modules have recently been implemented on a UNIVAC 1100/60 computer at the Universidad Politecnica de Valencia (Spain). In order to run the programs on that machine it has been necessary to introduce a number of modifications into their programming structure. The main difficulties found in this work and the need for verification of the new versions are also pointed out. We also refer to the results obtained from the execution of a set of small sample problems. (author)

  8. Computer-implemented method and apparatus for autonomous position determination using magnetic field data

    Science.gov (United States)

    Ketchum, Eleanor A. (Inventor)

    2000-01-01

    A computer-implemented method and apparatus for determining the position of a vehicle to within 100 km autonomously from magnetic field measurements and attitude data, without a priori knowledge of position. Two candidate position solutions for each measurement of magnetic field data are deterministically calculated from an inverted dipole solution by a program-controlled processor solving the inverted first-order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and a vehicle distance from the center of the earth. Correction schemes such as successive substitutions and a Newton-Raphson method are applied to each dipole solution. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths is computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.
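
    The final path-selection step lends itself to a short sketch: for each of the two candidate position histories, differentiate to obtain velocities, evaluate the specific orbital energy v^2/2 - mu/r along the path, and keep the path whose energy varies least. The Python below only illustrates that step; the finite-difference velocity, the use of peak-to-peak variation as the "total energy difference", and the constants are assumptions.

        import numpy as np

        MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

        def select_path(candidate_paths, dt):
            # candidate_paths: list of (N, 3) arrays of positions in meters, sampled every dt seconds.
            best, best_score = None, np.inf
            for r in candidate_paths:
                v = np.gradient(r, dt, axis=0)        # finite-difference velocity
                energy = 0.5 * (v ** 2).sum(axis=1) - MU_EARTH / np.linalg.norm(r, axis=1)
                score = np.ptp(energy)                # energy variation along the path
                if score < best_score:
                    best, best_score = r, score
            return best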

  9. A Dynamic Object Behavior Model and Implementation Based on Computational Reflection

    Institute of Scientific and Technical Information of China (English)

    HE Cheng-wan; HE Fei; HE Ke-qing

    2005-01-01

    A dynamic object behavior model based on computational reflection is proposed. This model consists of a function level and a meta level; the meta objects in the meta level manage the base objects and behaviors in the function level, including dynamic binding and unbinding of base objects and behaviors. We implement this model with the RoleJava language, our own linguistic extension of the Java language. Meta objects are generated automatically at compile time, which makes the reflection mechanism transparent to programmers. Finally, an example applying this model to a banking system is presented.

  10. EXPERIMENTAL AND THEORETICAL FOUNDATIONS AND PRACTICAL IMPLEMENTATION OF TECHNOLOGY BRAIN-COMPUTER INTERFACE

    Directory of Open Access Journals (Sweden)

    A. Ya. Kaplan

    2013-01-01

    Brain-computer interface (BCI) technology allows a person to learn how to control external devices via the voluntary regulation of their own EEG, directly from the brain and without involving nerves and muscles in the process. Initially, the main goal of BCI was to replace or restore motor function for people disabled by neuromuscular disorders. Today, the scope of BCI design has expanded significantly, increasingly capturing different aspects of the life of healthy people. This article discusses the theoretical, experimental and technological basis of BCI development and systematizes the critical fields in which these technologies are being put into real use.

  11. Sensory System for Implementing a Human-Computer Interface Based on Electrooculography

    Directory of Open Access Journals (Sweden)

    Sergio Ortega

    2010-12-01

    This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes.

  12. A Computationally Efficient and Robust Implementation of the Continuous-Discrete Extended Kalman Filter

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Thomsen, Per Grove; Madsen, Henrik

    2007-01-01

    We present a novel numerically robust and computationally efficient extended Kalman filter for state estimation in nonlinear continuous-discrete stochastic systems. The resulting differential equations for the mean-covariance evolution of the nonlinear stochastic continuous-discrete time systems are solved in an implementation that is more than two orders of magnitude faster than a conventional implementation. This is of significance in nonlinear model predictive control applications, statistical process monitoring as well as grey-box modelling of systems described by stochastic differential equations.
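
    For orientation, a generic continuous-discrete EKF step is sketched below in Python: between measurements the mean and covariance are propagated by integrating dx/dt = f(x) and dP/dt = A P + P A' + Q, followed by a standard discrete measurement update. This is a plain textbook formulation, not the authors' numerically robust and efficient implementation; the model functions, Jacobians, and noise matrices are user-supplied assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        def ekf_cd_step(x, P, f, F_jac, Q, h, H_jac, R, y, dt):
            n = x.size
            def ode(t, z):
                xk, Pk = z[:n], z[n:].reshape(n, n)
                A = F_jac(xk)
                return np.concatenate([f(xk), (A @ Pk + Pk @ A.T + Q).ravel()])
            # Time update: integrate mean and covariance over the sample interval.
            z_end = solve_ivp(ode, (0.0, dt), np.concatenate([x, P.ravel()])).y[:, -1]
            x_pred, P_pred = z_end[:n], z_end[n:].reshape(n, n)
            # Measurement update (standard discrete EKF correction).
            H = H_jac(x_pred)
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (y - h(x_pred))
            P_new = (np.eye(n) - K @ H) @ P_pred
            return x_new, P_new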

  13. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    Energy Technology Data Exchange (ETDEWEB)

    Ghrayeb, S. Z. [Dept. of Mechanical and Nuclear Engineering, Pennsylvania State Univ., 230 Reber Building, Univ. Park, PA 16802 (United States); Ouisloumen, M. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States); Ougouag, A. M. [Idaho National Laboratory, MS-3860, PO Box 1625, Idaho Falls, ID 83415 (United States); Ivanov, K. N.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  14. An embedded implementation based on adaptive filter bank for brain-computer interface systems.

    Science.gov (United States)

    Belwafi, Kais; Romain, Olivier; Gannouni, Sofien; Ghaffari, Fakhreddine; Djemal, Ridha; Ouni, Bouraoui

    2018-07-15

    Brain-computer interface (BCI) is a new communication pathway for users with neurological deficiencies. The implementation of a BCI system requires complex electroencephalography (EEG) signal processing including filtering, feature extraction and classification algorithms. Most current BCI systems are implemented on personal computers. Therefore, there is a great interest in implementing BCI on embedded platforms to meet system specifications in terms of time response, cost effectiveness, power consumption, and accuracy. This article presents an embedded-BCI (EBCI) system based on a Stratix-IV field programmable gate array. The proposed system relies on the weighted overlap-add (WOLA) algorithm to perform dynamic filtering of EEG-signals by analyzing the event-related desynchronization/synchronization (ERD/ERS). The EEG-signals are classified, using the linear discriminant analysis algorithm, based on their spatial features. The proposed system performs fast classification within a time delay of 0.430 s/trial, achieving an average accuracy of 76.80% according to an offline approach and 80.25% using our own recording. The estimated power consumption of the prototype is approximately 0.7 W. Results show that the proposed EBCI system reduces the overall classification error rate for the three datasets of the BCI-competition by 5% compared to other similar implementations. Moreover, experiments show that the proposed system maintains a high accuracy rate with a short processing time, a low power consumption, and a low cost. Performing dynamic filtering of EEG-signals using WOLA increases the recognition rate of ERD/ERS patterns of motor imagery brain activity. This approach allows the development of a complete prototype of an EBCI system that achieves excellent accuracy rates. Copyright © 2018 Elsevier B.V. All rights reserved.
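
    The classification stage alone can be sketched compactly offline: band-pass filter each trial, take the log band power per channel as ERD/ERS-style features, and classify with linear discriminant analysis. The Python sketch below uses SciPy and scikit-learn; the frequency bands, sampling rate, data shapes, and variable names are assumptions, and the paper's WOLA filter bank and FPGA implementation are not reproduced here.

        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        FS = 250  # sampling rate in Hz (assumed)

        def band_power_features(trials, bands=((8, 12), (16, 24))):
            # trials: array (n_trials, n_channels, n_samples) -> log band-power features.
            feats = []
            for lo, hi in bands:
                b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
                filtered = filtfilt(b, a, trials, axis=-1)
                feats.append(np.log(np.var(filtered, axis=-1)))
            return np.concatenate(feats, axis=1)

        # Assuming X_train/X_test of shape (n_trials, n_channels, n_samples) and labels y_train:
        # clf = LinearDiscriminantAnalysis().fit(band_power_features(X_train), y_train)
        # y_pred = clf.predict(band_power_features(X_test))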

  15. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected over a local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters and connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to demonstrate the proposed concept. The simulation results show that the software framework can increase the speedup of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing simulations in multi-scale structural analysis.

  16. Implementation of distributed computing system for emergency response and contaminant spill monitoring

    International Nuclear Information System (INIS)

    Ojo, T.O.; Sterling, M.C.Jr.; Bonner, J.S.; Fuller, C.B.; Kelly, F.; Page, C.A.

    2003-01-01

    The availability and use of real-time environmental data greatly enhances emergency response and spill monitoring in coastal and near shore environments. The data would include surface currents, wind speed, wind direction, and temperature. Model predictions (fate and transport) or forensics can also be included. In order to achieve an integrated system suitable for application in spill or emergency response situations, a link is required because this information exists on many different computing platforms. When real-time measurements are needed to monitor a spill, the use of a wide array of sensors and ship-based post-processing methods help reduce the latency in data transfer between field sampling stations and the Incident Command Centre. The common thread linking all these modules is the Transmission Control Protocol/Internet Protocol (TCP/IP), and the result is an integrated distributed computing system (DCS). The in-situ sensors are linked to an onboard computer through the use of a ship-based local area network (LAN) using a submersible device server. The onboard computer serves as both the data post-processor and communications server. It links the field sampling station with other modules, and is responsible for transferring data to the Incident Command Centre. This link is facilitated by a wide area network (WAN) based on wireless broadband communications facilities. This paper described the implementation of the DCS. The test results for the communications link and system readiness were also included. 6 refs., 2 tabs., 3 figs

  17. Method of computer generation and projection recording of microholograms for holographic memory systems: mathematical modelling and experimental implementation

    International Nuclear Information System (INIS)

    Betin, A Yu; Bobrinev, V I; Evtikhiev, N N; Zherdev, A Yu; Zlokazov, E Yu; Lushnikov, D S; Markin, V V; Odinokov, S B; Starikov, S N; Starikov, R S

    2013-01-01

    A method of computer generation and projection recording of microholograms for holographic memory systems is presented; the results of mathematical modelling and experimental implementation of the method are demonstrated. (holographic memory)

  18. Framework and implementation for improving physics essential skills via computer-based practice: Vector math

    Science.gov (United States)

    Mikula, Brendon D.; Heckler, Andrew F.

    2017-06-01

    We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with a careful identification of target skills and the study of specific student difficulties with these skills. It then employs computer-based instruction, immediate feedback, mastery grading, and well-researched principles from cognitive psychology such as interleaved training sequences and distributed practice. We implemented this with more than 1500 students over 2 semesters. Students completed the mastery practice for an average of about 13 min/week, for a total of about 2-3 h for the whole semester. Results reveal large (>1 SD) pretest to post-test gains in accuracy in vector skills, even compared to a control group, and these gains were retained at least 2 months after practice. We also find evidence of improved fluency, student satisfaction, and that awarding regular course credit results in higher participation and higher learning gains than awarding extra credit. In all, we find that simple computer-based mastery practice is an effective and efficient way to improve a set of basic and essential skills for introductory physics.

  19. TOWARDS IMPLEMENTATION OF THE FOG COMPUTING CONCEPT INTO THE GEOSPATIAL DATA INFRASTRUCTURES

    Directory of Open Access Journals (Sweden)

    E. A. Panidi

    2016-01-01

    Information technologies, and Global Network technologies in particular, are developing very quickly. Consequently, the problem of incorporating such general-purpose technologies into information systems that operate with geospatial data remains relevant. The paper discusses the implementation feasibility of a number of new approaches and concepts that address the problems of spatial data publishing and management on the Global Network. A brief review describes some contemporary concepts and technologies used for distributed data storage and management, which provide combined use of server-side and client-side resources. In particular, the concepts of Cloud Computing, Fog Computing, and the Internet of Things are mentioned, together with the Java Web Start, WebRTC and WebTorrent technologies. The author's experience is described briefly, covering a number of projects devoted to the development of portable solutions for geospatial data and GIS software publication on the Global Network.

  20. Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw

    Science.gov (United States)

    Angelakis, D.; Zoumis, S.; Asvestas, P.

    2017-11-01

    The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides the capability of simultaneous recording of 14 EEG channels, as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application, implemented under the Processing integrated development environment. The application acquires the data from the headset and issues suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals to a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.
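
    The PC-to-board link can be illustrated with a few lines of Python (the original application is written in Processing): a decoded mental command is translated into a one-byte serial message that the Arduino sketch would map to a servo angle. The port name, baud rate, and command protocol below are assumptions.

        import time
        import serial  # pyserial

        PORT, BAUD = "/dev/ttyACM0", 9600            # assumed port and baud rate

        def send_claw_command(ser, mental_command):
            # Map a decoded command ("open"/"close"/anything else) to a one-byte message.
            code = {"open": b"O", "close": b"C"}.get(mental_command, b"N")  # N = neutral
            ser.write(code)

        if __name__ == "__main__":
            with serial.Serial(PORT, BAUD, timeout=1) as ser:
                time.sleep(2)                        # give the Arduino time to reset
                send_claw_command(ser, "open")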

  1. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    Science.gov (United States)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit-serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  2. A META-MODELLING SERVICE PARADIGM FOR CLOUD COMPUTING AND ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    F. Cheng

    2012-01-01

    ENGLISH ABSTRACT: Service integrators seek opportunities to align the way they manage resources in the service supply chain. Many business organisations can operate new, more flexible business processes that harness the value of a service approach from the customer’s perspective. As a relatively new concept, cloud computing and related technologies have rapidly gained momentum in the IT world. This article seeks to shed light on service supply chain issues associated with cloud computing by examining several interrelated questions: service supply chain architecture from a service perspective; the basic clouds of the service supply chain; managerial insights into these clouds; and the commercial value of implementing cloud computing. In particular, to show how those services can be used, and involved in their utilisation processes, a hypothetical meta-modelling service of cloud computing is proposed. Moreover, the paper defines the managed cloud architecture for a service vendor or service integrator in the cloud computing infrastructure in the service supply chain: IT services, business services, business processes, which create atomic and composite software services that are used to perform business processes with business service choreographies.

    AFRIKAANSE OPSOMMING (translated): Service integrators are looking for opportunities to align the way they manage resources in the service supply chain. Many organisations can use new, more flexible business processes that harness the value of a service approach from the client's point of view. As a relatively new concept, cloud computing and related technology have rapidly gained momentum in the IT world. The article attempts to shed light on service supply chain issues related to cloud computing by examining several related questions: service supply chain architecture from a service point of view; the basic clouds of the service supply chain; management insights into such clouds; and the commercial value of the implementation of

  3. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    Energy Technology Data Exchange (ETDEWEB)

    Pieper, Andreas [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Kreutzer, Moritz [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany); Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Galgon, Martin [Bergische Universität Wuppertal (Germany); Fehske, Holger [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Hager, Georg [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany); Lang, Bruno [Bergische Universität Wuppertal (Germany); Wellein, Gerhard [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)

    2016-11-15

    We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
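
    The core filtering step can be sketched in Python with SciPy sparse matrices: expand the indicator of the target window [lambda_lo, lambda_hi] in Chebyshev polynomials (with Jackson damping) and apply it to a vector through the three-term recurrence, which needs only matrix-vector products and therefore no matrix inversion. This is a generic kernel-polynomial-style sketch, not the authors' optimized parallel implementation; the damping kernel, polynomial degree, and the requirement that the bounds enclose the spectrum are assumptions.

        import numpy as np
        import scipy.sparse as sp

        def cheb_filter_apply(H, v, lam_lo, lam_hi, bounds, degree=200):
            # Apply a damped Chebyshev approximation of the indicator of [lam_lo, lam_hi] to v.
            emin, emax = bounds                       # must enclose the spectrum of H
            c, e = (emax + emin) / 2.0, (emax - emin) / 2.0
            a, b = (lam_lo - c) / e, (lam_hi - c) / e
            ta, tb = np.arccos(b), np.arccos(a)       # note: arccos is decreasing
            k = np.arange(degree + 1)
            mu = np.empty(degree + 1)                 # Chebyshev moments of the window
            mu[0] = (tb - ta) / np.pi
            mu[1:] = 2.0 * (np.sin(k[1:] * tb) - np.sin(k[1:] * ta)) / (k[1:] * np.pi)
            g = ((degree - k + 1) * np.cos(np.pi * k / (degree + 1))
                 + np.sin(np.pi * k / (degree + 1)) / np.tan(np.pi / (degree + 1))) / (degree + 1)
            coeff = mu * g                            # Jackson-damped coefficients
            Hs = (H - c * sp.identity(H.shape[0])) / e
            w_prev, w = v, Hs @ v                     # T_0(Hs) v and T_1(Hs) v
            out = coeff[0] * w_prev + coeff[1] * w
            for j in range(2, degree + 1):            # three-term Chebyshev recurrence
                w_prev, w = w, 2.0 * (Hs @ w) - w_prev
                out = out + coeff[j] * w
            return out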

  4. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    International Nuclear Information System (INIS)

    McCoy, M; Kusnezov, D; Bikkel, T; Hopson, J

    2007-01-01

    that was very successful in delivering an initial capability to one that is integrated and focused on requirements driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities

  5. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    from one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: (1) Robust Tools - Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements; (2) Prediction through Simulation - Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile; and (3) Balanced Operational Infrastructure - Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  6. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Kissel, L

    2009-04-01

    was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: (1) Robust Tools - Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements; (2) Prediction through Simulation - Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile; and (3) Balanced Operational Infrastructure - Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  7. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  8. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    from one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1 - Robust Tools. Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2 - Prediction through Simulation. Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3 - Balanced Operational Infrastructure. Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  9. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2--Prediction through Simulation. Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  10. INTEGRATION OF ECONOMIC AND COMPUTER SKILLS AT IMPLEMENTATION OF STUDENTS PROJECT «BUSINESS PLAN PRODUCING IN MICROSOFT WORD»

    Directory of Open Access Journals (Sweden)

    Y.B. Samchinska

    2012-07-01

    In this article, the expedience of having students of economic specialities carry out a complex project in Informatics and Computer Science, namely creating a business plan using modern information technologies, is substantiated, and methodical recommendations for the implementation of this project are presented.

  11. Introductory Molecular Orbital Theory: An Honors General Chemistry Computational Lab as Implemented Using Three-Dimensional Modeling Software

    Science.gov (United States)

    Ruddick, Kristie R.; Parrill, Abby L.; Petersen, Richard L.

    2012-01-01

    In this study, a computational molecular orbital theory experiment was implemented in a first-semester honors general chemistry course. Students used the GAMESS (General Atomic and Molecular Electronic Structure System) quantum mechanical software (as implemented in ChemBio3D) to optimize the geometry for various small molecules. Extended Huckel…

  12. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    International Nuclear Information System (INIS)

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-01-01

    The smoothed particle hydrodynamics (SPH) method, which belongs to the class of meshfree particle methods (MPMs), has a wide range of applications from the micro-scale to the macro-scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation. Particle systems need a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Compared to the corresponding CPU implementation, our experimental results show that the new approach allows significant speedups of fluid simulation by handling the huge amount of computation in parallel on graphics hardware.

  13. E-pharmacovigilance: development and implementation of a computable knowledge base to identify adverse drug reactions.

    Science.gov (United States)

    Neubert, Antje; Dormann, Harald; Prokosch, Hans-Ulrich; Bürkle, Thomas; Rascher, Wolfgang; Sojer, Reinhold; Brune, Kay; Criegee-Rieck, Manfred

    2013-09-01

    Computer-assisted signal generation is an important issue for the prevention of adverse drug reactions (ADRs). However, due to poor standardization of patients' medical data and a lack of computable medical drug knowledge the specificity of computerized decision support systems for early ADR detection is too low and thus those systems are not yet implemented in daily clinical practice. We report on a method to formalize knowledge about ADRs based on the Summary of Product Characteristics (SmPCs) and linking them with structured patient data to generate safety signals automatically and with high sensitivity and specificity. A computable ADR knowledge base (ADR-KB) that inherently contains standardized concepts for ADRs (WHO-ART), drugs (ATC) and laboratory test results (LOINC) was built. The system was evaluated in study populations of paediatric and internal medicine inpatients. A total of 262 different ADR concepts related to laboratory findings were linked to 212 LOINC terms. The ADR knowledge base was retrospectively applied to a study population of 970 admissions (474 internal and 496 paediatric patients), who underwent intensive ADR surveillance. The specificity increased from 7% without ADR-KB up to 73% in internal patients and from 19.6% up to 91% in paediatric inpatients, respectively. This study shows that contextual linkage of patients' medication data with laboratory test results is a useful and reasonable instrument for computer-assisted ADR detection and a valuable step towards a systematic drug safety process. The system enables automated detection of ADRs during clinical practice with a quality close to intensive chart review. © 2013 The Authors. British Journal of Clinical Pharmacology © 2013 The British Pharmacological Society.
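
    The linkage idea can be illustrated with a toy rule table in Python: each rule connects a drug code and a laboratory test code to an ADR concept together with an abnormality predicate, and a patient's medication list and lab results are scanned against it. All identifiers, thresholds, and concept names below are placeholders, not entries from the actual knowledge base or the ATC/LOINC/WHO-ART terminologies.

        # Each rule: (drug code, lab code) -> (ADR concept, predicate on the lab value).
        ADR_KB = {
            ("ATC:EXAMPLE-1", "LOINC:EXAMPLE-A"): ("low platelet count concept", lambda v: v < 150),
            ("ATC:EXAMPLE-2", "LOINC:EXAMPLE-B"): ("elevated liver enzyme concept", lambda v: v > 3.0),
        }

        def generate_signals(medications, lab_results):
            # medications: set of drug codes; lab_results: dict mapping lab code -> value.
            signals = []
            for (drug, lab), (concept, is_abnormal) in ADR_KB.items():
                if drug in medications and lab in lab_results and is_abnormal(lab_results[lab]):
                    signals.append((drug, lab, concept))
            return signals

        print(generate_signals({"ATC:EXAMPLE-1"}, {"LOINC:EXAMPLE-A": 92}))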

  14. Implementation of a 3D plasma particle-in-cell code on a MIMD parallel computer

    International Nuclear Information System (INIS)

    Liewer, P.C.; Lyster, P.; Wang, J.

    1993-01-01

    A three-dimensional plasma particle-in-cell (PIC) code has been implemented on the Intel Delta MIMD parallel supercomputer using the General Concurrent PIC algorithm. The GCPIC algorithm uses a domain decomposition to divide the computation among the processors: A processor is assigned a subdomain and all the particles in it. Particles must be exchanged between processors as they move. Results are presented comparing the efficiency for 1-, 2- and 3-dimensional partitions of the three dimensional domain. This algorithm has been found to be very efficient even when a large fraction (e.g. 30%) of the particles must be exchanged at every time step. On the 512-node Intel Delta, up to 125 million particles have been pushed with an electrostatic push time of under 500 nsec/particle/time step
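
    One ingredient of such a domain decomposition, re-assigning particles to the rank that owns their new position after a push, is sketched below with mpi4py. For brevity it uses a generic all-to-all re-partition over a 1-D periodic slab decomposition rather than the neighbour-only exchange of the original GCPIC code, and the domain length, particle counts, and drift are assumptions.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        L = 1.0                                       # global domain length (assumed, periodic)
        dx = L / size                                 # width of each rank's slab

        def exchange_particles(x):
            # Route every particle to the rank that owns its (wrapped) position.
            x = np.mod(x, L)
            dest = np.minimum((x // dx).astype(int), size - 1)
            outgoing = [x[dest == r].tolist() for r in range(size)]
            incoming = comm.alltoall(outgoing)        # one list received from each rank
            return np.array([p for chunk in incoming for p in chunk])

        local_x = rank * dx + dx * np.random.rand(1000)   # particles inside the local slab
        local_x = exchange_particles(local_x + 0.01)      # small drift, then re-partition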

  15. Implementing finite state machines in a computer-based teaching system

    Science.gov (United States)

    Hacker, Charles H.; Sitte, Renate

    1999-09-01

    Finite state machines (FSMs) are models of functions commonly implemented in digital circuits, such as timers, remote controls, and vending machines. Teaching FSMs is core to the curriculum of many university digital electronics or discrete mathematics subjects. Students often have difficulties grasping the theoretical concepts in the design and analysis of FSMs. This prompted the author to develop an MS-Windows™-compatible software package, WinState, that provides a tutorial-style teaching aid for understanding the mechanisms of FSMs. The animated computer screen is ideal for visually conveying the required design and analysis procedures. WinState complements other software for combinatorial logic previously developed by the author, and enhances the existing teaching package by adding sequential logic circuits. WinState enables students to construct their own FSMs, which can be simulated to test the design for functionality and possible errors.
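
    WinState itself is not shown in the record; the following is a small, hypothetical Python sketch of the construct-and-simulate idea it teaches: an FSM encoded as a transition table (here a toy vending machine) that can be stepped through to check its behaviour.

```python
# A minimal sketch (not WinState itself) of construct-and-simulate for a
# Moore-style finite state machine: a toy vending machine that dispenses
# after 15 cents.  States, inputs, and outputs are illustrative.
TRANSITIONS = {
    ("S0",  "nickel"): "S5",  ("S0",  "dime"): "S10",
    ("S5",  "nickel"): "S10", ("S5",  "dime"): "S15",
    ("S10", "nickel"): "S15", ("S10", "dime"): "S15",
    ("S15", "nickel"): "S15", ("S15", "dime"): "S15",
}
OUTPUT = {"S0": "idle", "S5": "idle", "S10": "idle", "S15": "dispense"}

def simulate(inputs, state="S0"):
    """Run the machine over an input sequence, returning the (state, output) trace."""
    trace = [(state, OUTPUT[state])]
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
        trace.append((state, OUTPUT[state]))
    return trace

for step in simulate(["nickel", "nickel", "dime"]):
    print(step)   # ends in ('S15', 'dispense')
```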

  16. Development of Point Kernel Shielding Analysis Computer Program Implementing Recent Nuclear Data and Graphic User Interfaces

    International Nuclear Information System (INIS)

    Kang, Sang Ho; Lee, Seung Gi; Chung, Chan Young; Lee, Choon Sik; Lee, Jai Ki

    2001-01-01

    In order to comply with revised national regulations on radiological protection and to implement recent nuclear data and dose conversion factors, KOPEC developed a new point kernel gamma- and beta-ray shielding analysis computer program. This new code, named VisualShield, adopted mass attenuation coefficients and buildup factors from recent ANSI/ANS standards and flux-to-dose conversion factors from International Commission on Radiological Protection (ICRP) Publication 74 for estimation of the effective/equivalent dose recommended in ICRP 60. VisualShield utilizes graphical user interfaces and 3-D visualization of the geometric configuration for preparing input data sets and analyzing results, which leads users to error-free processing with visual effects. Code validation and data analysis were performed by comparing the results of various calculations to the outputs of previous programs such as MCNP 4B, ISOSHLD-II, and QAD-CGGP.
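
    To make the point-kernel idea concrete, here is a hedged sketch of the basic calculation such codes perform; the buildup-factor form and all numbers below are illustrative placeholders, whereas VisualShield interpolates tabulated ANSI/ANS data.

```python
import math

def point_kernel_dose_rate(source_strength, mu, thickness, distance, flux_to_dose):
    """Point-kernel estimate behind a slab shield.

    phi = S * B * exp(-mu*t) / (4*pi*r^2); dose rate = phi * conversion factor.
    The buildup factor B uses a crude linear form B = 1 + mu*t purely for
    illustration; production codes interpolate tabulated ANSI/ANS buildup
    factors instead.
    """
    mfp = mu * thickness                     # shield thickness in mean free paths
    buildup = 1.0 + mfp
    flux = source_strength * buildup * math.exp(-mfp) / (4.0 * math.pi * distance**2)
    return flux * flux_to_dose

# illustrative numbers only: 1e9 photons/s source, mu = 0.06 /cm, 20 cm shield, 100 cm away
print(point_kernel_dose_rate(1e9, 0.06, 20.0, 100.0, flux_to_dose=1.0e-6))
```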

  17. Implementation of a Curriculum-Integrated Computer Game for Introducing Scientific Argumentation

    Science.gov (United States)

    Wallon, Robert C.; Jasti, Chandana; Lauren, Hillary Z. G.; Hug, Barbara

    2017-11-01

    Argumentation has been emphasized in recent US science education reform efforts (NGSS Lead States 2013; NRC 2012), and while existing studies have investigated approaches to introducing and supporting argumentation (e.g., McNeill and Krajcik in Journal of Research in Science Teaching, 45(1), 53-78, 2008; Kang et al. in Science Education, 98(4), 674-704, 2014), few studies have investigated how game-based approaches may be used to introduce argumentation to students. In this paper, we report findings from a design-based study of a teacher's use of a computer game intended to introduce the claim, evidence, reasoning (CER) framework (McNeill and Krajcik 2012) for scientific argumentation. We studied the implementation of the game over two iterations of development in a high school biology teacher's classes. The results of this study include aspects of enactment of the activities and student argument scores. We found that the teacher used the game as part of explicit instruction of argumentation during both iterations, although the ways in which the game was used differed. Also, students' scores in the second iteration were significantly higher than in the first iteration. These findings support the notion that students can learn argumentation through a game, especially when used in conjunction with explicit instruction and support in student materials. These findings also highlight the importance of analyzing classroom implementation in studies of game-based learning.

  18. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)

    2015-09-15

    This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.

  19. Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology.

    Science.gov (United States)

    Limkin, E J; Sun, R; Dercle, L; Zacharaki, E I; Robert, C; Reuzé, S; Schernberg, A; Paragios, N; Deutsch, E; Ferté, C

    2017-06-01

    Medical image processing and analysis (also known as radiomics) is a rapidly growing discipline that maps digital medical images into quantitative data, with the end goal of generating imaging biomarkers as decision support tools for clinical practice. The use of imaging data from routine clinical work-up has tremendous potential in improving cancer care by heightening understanding of tumor biology and aiding in the implementation of precision medicine. As a noninvasive method of assessing the tumor and its microenvironment in their entirety, radiomics allows the evaluation and monitoring of tumor characteristics such as temporal and spatial heterogeneity. One can observe a rapid increase in the number of computational medical imaging publications, milestones that have highlighted the utility of imaging biomarkers in oncology. Nevertheless, the use of radiomics as a clinical biomarker still necessitates amelioration and standardization in order to achieve routine clinical adoption. This review addresses the critical issues to ensure the proper development of radiomics as a biomarker and to facilitate its implementation in clinical practice.

  20. THE CONCEPT OF THE EDUCATIONAL COMPUTER MATHEMATICS SYSTEM AND EXAMPLES OF ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    M. Lvov

    2014-11-01

    The article deals with the educational computer mathematics system developed at Kherson State University, which has resulted in more than 8 software tools commissioned by the Ministry of Education, Science, Youth and Sports of Ukraine. The exact and natural sciences stand out among the disciplines taught both in secondary schools and universities: they form fundamental scientific knowledge based on precise mathematical models and methods. The educational process for these courses should include not only lectures and seminars but also active forms of study: practical classes, laboratory work, practical training, etc. These peculiarities determine the specific intellectual and architectural properties of the information technologies developed for use in the educational process of these disciplines. In terms of the technologies used to implement their functionality, such systems are essentially educational computer algebra systems. The algebraic programming system APS, developed at the Institute of Cybernetics of the National Academy of Sciences of Ukraine under Academician O.A. Letychevskyi in the 1980s, was especially important for their development.

  1. Lessons Learned in Designing and Implementing a Computer-Adaptive Test for English

    Directory of Open Access Journals (Sweden)

    Jack Burston

    2014-09-01

    This paper describes the lessons learned in designing and implementing a computer-adaptive test (CAT) for English. The early identification of students with weak L2 English proficiency is of critical importance in university settings that have compulsory English language course graduation requirements. The most efficient means of diagnosing the L2 English ability of incoming students is a computer-based test, since such an evaluation can be administered quickly, corrected automatically, and the outcome known as soon as the test is completed. While the option of using a commercial CAT is available to institutions with the ability to pay substantial annual fees, or the means of passing these expenses on to their students, language instructors without these resources can only avail themselves of the advantages of CAT evaluation by creating their own tests. As is demonstrated by the E-CAT project described in this paper, this is a viable alternative even for those lacking any computer programming expertise. However, language teaching experience and testing expertise are critical to such an undertaking, which requires considerable effort and, above all, collaborative teamwork to succeed. A number of practical skills are also required. Firstly, the operation of a CAT authoring programme must be learned. Once this is done, test makers must master the art of creating a question database and assigning difficulty levels to test items. Lastly, if multimedia resources are to be exploited in a CAT, test creators need to be able to locate suitable copyright-free resources and re-edit them as needed.

  2. Implementation of a Thermodynamic Solver within a Computer Program for Calculating Fission-Product Release Fractions

    Science.gov (United States)

    Barber, Duncan Henry

    During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired, and gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented. Gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada (RMC). To overcome the limitations of the computers of that time, the implementation of the RMC model employed lookup tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria by Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0.
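
    The thesis code is not reproduced in the record; the following is a minimal sketch of Gibbs energy minimization under element-balance constraints for a toy three-species ideal-gas system, assuming SciPy's SLSQP solver. The species set and Gibbs energies are illustrative and unrelated to the actual RMC fuel model.

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 298.15
species = ["H2", "O2", "H2O"]
g0 = np.array([0.0, 0.0, -228_600.0])        # illustrative standard Gibbs energies, J/mol
A = np.array([[2, 0, 2],                     # H balance (atoms per molecule)
              [0, 2, 1]])                    # O balance
b = A @ np.array([2.0, 1.0, 0.0])            # element inventory from 2 mol H2 + 1 mol O2

def gibbs(n):
    """Total Gibbs energy of an ideal-gas mixture at 1 bar."""
    n = np.maximum(n, 1e-12)                 # keep the logarithms finite
    return np.sum(n * (g0 + R * T * np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([1.0, 0.5, 1.0]),
               bounds=[(1e-12, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
               method="SLSQP")
print(dict(zip(species, res.x)))             # essentially all H2O at this temperature
```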

  3. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and a full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution times of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the implementation of the pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the simplified unsymmetrical multi-hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed a modest improvement even though its computational complexity is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
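
    For reference, a plain NumPy sketch of the SAD full-grid-search block matching that the record accelerates on GPUs is given below; block size, search radius and the synthetic frames are illustrative.

```python
import numpy as np

def full_search_sad(ref_block, target, center, search_radius):
    """Full-grid-search block matching with the SAD criterion.

    ref_block    : (B, B) block from the reference frame
    target       : target frame (2-D array)
    center       : (row, col) of the block's original position
    search_radius: integer search range in pixels
    Returns the displacement (dy, dx) minimising the summed absolute difference.
    The GPU version in the record evaluates all candidate displacements in parallel.
    """
    B = ref_block.shape[0]
    best, best_disp = np.inf, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            r, c = center[0] + dy, center[1] + dx
            if r < 0 or c < 0 or r + B > target.shape[0] or c + B > target.shape[1]:
                continue
            sad = np.abs(target[r:r + B, c:c + B].astype(int) - ref_block.astype(int)).sum()
            if sad < best:
                best, best_disp = sad, (dy, dx)
    return best_disp

# toy usage: recover a known shift of an 8x8 block in a synthetic frame
frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
block = frame[20:28, 30:38]
print(full_search_sad(block, np.roll(frame, (2, -3), axis=(0, 1)), (20, 30), 8))  # (2, -3)
```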

  4. Examining Behavioral Consultation plus Computer-Based Implementation Planning on Teachers' Intervention Implementation in an Alternative School

    Science.gov (United States)

    Long, Anna C. J.; Sanetti, Lisa M. Hagermoser; Lark, Catherine R.; Connolly, Jennifer J. G.

    2018-01-01

    Students who demonstrate the most challenging behaviors are at risk of school failure and are often placed in alternative schools, in which a primary goal is remediating behavioral and academic concerns to facilitate students' return to their community school. Consistently implemented evidence-based classroom management is necessary toward this…

  5. Time complexity analysis for distributed memory computers: implementation of parallel conjugate gradient method

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Haan, M.J.; Hertzberger, L.O.; van Leeuwen, J.

    1991-01-01

    New developments in Computer Science, both hardware and software, offer researchers, such as physicists, unprecedented possibilities to solve their computationally intensive problems. However, full exploitation of, e.g., new massively parallel computers, parallel languages or runtime environments

  6. Socio-Technical Implementation: Socio-technical Systems in the Context of Ubiquitous Computing, Ambient Intelligence, Embodied Virtuality, and the Internet of Things

    NARCIS (Netherlands)

    Nijholt, Antinus; Whitworth, B.; de Moor, A.

    2009-01-01

    In which computer science world do we design and implement our socio-technical systems? About every five or ten years new computer and interaction paradigms are introduced. We had the mainframe computers, the various generations of computers, including the Japanese fifth generation computers, the

  7. Development of tight-binding based GW algorithm and its computational implementation for graphene

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Muhammad Aziz [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore); Naradipa, Muhammad Avicenna, E-mail: muhammad.avicenna11@ui.ac.id; Phan, Wileam Yonatan; Syahroni, Ahmad [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); Rusydi, Andrivo [NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore)

    2016-04-19

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as the optical properties may change depending on several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because they eventually play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We do this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is constructed through a tight-binding or similar model. This study includes the theoretical formulation of the Green’s function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self-energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.
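
    As a hedged illustration of the starting point of such a calculation, the sketch below sets up the standard nearest-neighbour tight-binding bands of graphene and a brute-force density of states; the GW/RPA machinery described in the record is not implemented, and the hopping value is a typical literature number, not one from the paper.

```python
import numpy as np

# Nearest-neighbour tight-binding pi-bands of graphene: E(k) = +/- t |f(k)| with
# f(k) = 1 + exp(i k.a1) + exp(i k.a2).  This bare Hamiltonian is what a
# TB-based GW calculation would dress with the RPA-screened interaction W.
t = 2.7                                        # hopping in eV (typical literature value)
a1 = np.array([1.5, np.sqrt(3) / 2])           # lattice vectors in units of the C-C distance
a2 = np.array([1.5, -np.sqrt(3) / 2])

def bands(kpts):
    f = 1 + np.exp(1j * kpts @ a1) + np.exp(1j * kpts @ a2)
    return t * np.abs(f)                       # upper band; the lower band is its negative

# sample the full reciprocal unit cell (enough for a DOS histogram)
b = 2 * np.pi * np.linalg.inv(np.column_stack([a1, a2]))   # rows are b1, b2
u = np.linspace(0, 1, 300, endpoint=False)
kx, ky = np.meshgrid(u, u)
kpts = np.column_stack([kx.ravel(), ky.ravel()]) @ b
e = bands(kpts)
dos, edges = np.histogram(np.concatenate([e, -e]), bins=200, density=True)
print("bandwidth:", e.max(), "eV  (should be 3t =", 3 * t, "eV)")
print("DOS histogram bins:", len(dos))
```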

  8. Development of tight-binding based GW algorithm and its computational implementation for graphene

    International Nuclear Information System (INIS)

    Majidi, Muhammad Aziz; Naradipa, Muhammad Avicenna; Phan, Wileam Yonatan; Syahroni, Ahmad; Rusydi, Andrivo

    2016-01-01

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as the optical properties may change depending on several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because they eventually play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We do this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is constructed through a tight-binding or similar model. This study includes the theoretical formulation of the Green’s function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self-energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.

  9. Implementation of relational data base management systems on micro-computers

    International Nuclear Information System (INIS)

    Huang, C.L.

    1982-01-01

    This dissertation describes an implementation of a relational database management system on a microcomputer. A specific floppy-disk-based hardware platform called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains sub-systems such as I/O, file management, virtual memory management, the query system, B-tree management, the scanner, the command interpreter, the expression compiler, garbage collection, linked-list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) it is highly modularized; (2) the system is physically segmented into 16 logically independent, overlayable segments, such that a minimal amount of memory is needed at execution time; (3) a virtual memory system is simulated that provides the system with seemingly unlimited memory space; (4) a language translator is applied to recognize user requests in the query language, and its code generator produces compact code for the execution of UPDATE, DELETE, and QUERY commands; (5) a complete set of basic functions needed for on-line database manipulation is provided through a friendly query interface; (6) dependency on the environment (both software and hardware) is eliminated as much as possible, so that it is easy to port the system to other computers; (7) each relation is simulated as a sequential file. The system is intended to be a highly efficient, single-user system suited for use by small or medium-sized organizations for, say, administrative purposes. Experiments show that quite satisfactory results have indeed been achieved.

  10. Computational design of RNA parts, devices, and transcripts with kinetic folding algorithms implemented on multiprocessor clusters.

    Science.gov (United States)

    Thimmaiah, Tim; Voje, William E; Carothers, James M

    2015-01-01

    With progress toward inexpensive, large-scale DNA assembly, the demand for simulation tools that allow the rapid construction of synthetic biological devices with predictable behaviors continues to increase. By combining engineered transcript components, such as ribosome binding sites, transcriptional terminators, ligand-binding aptamers, catalytic ribozymes, and aptamer-controlled ribozymes (aptazymes), gene expression in bacteria can be fine-tuned, with many corollaries and applications in yeast and mammalian cells. The successful design of genetic constructs that implement these kinds of RNA-based control mechanisms requires modeling and analyzing kinetically determined co-transcriptional folding pathways. Transcript design methods using stochastic kinetic folding simulations to search spacer sequence libraries for motifs enabling the assembly of RNA component parts into static ribozyme- and dynamic aptazyme-regulated expression devices with quantitatively predictable functions (rREDs and aREDs, respectively) have been described (Carothers et al., Science 334:1716-1719, 2011). Here, we provide a detailed practical procedure for computational transcript design by illustrating a high throughput, multiprocessor approach for evaluating spacer sequences and generating functional rREDs. This chapter is written as a tutorial, complete with pseudo-code and step-by-step instructions for setting up a computational cluster with an Amazon, Inc. web server and performing the large numbers of kinefold-based stochastic kinetic co-transcriptional folding simulations needed to design functional rREDs and aREDs. The method described here should be broadly applicable for designing and analyzing a variety of synthetic RNA parts, devices and transcripts.

  11. MODERN ADVANCES IMPLEMENTATION FOR A PASTROL VENTURE MODELS OF NOVEL CLOUD COMPUTING

    OpenAIRE

    Sandeep Kumar* Ankur Goel

    2018-01-01

    In this paper, innovations are expected to affect progress in the computing environment. A majority of enterprises are aiming to cut back their computing costs through virtualization, and this need for lowering computing costs has resulted in the innovation of cloud computing. Cloud computing offers better computing through improved utilization and reduced administration and infrastructure costs. Cloud computing is distributed around the world in distinct forms. This is the schema to emerge h...

  12. A SCILAB Program for Computing General-Relativistic Models of Rotating Neutron Stars by Implementing Hartle's Perturbation Method

    Science.gov (United States)

    Papasotiriou, P. J.; Geroyannis, V. S.

    We implement Hartle's perturbation method for the computation of relativistic rigidly rotating neutron star models. The program has been written in SCILAB (© INRIA ENPC), a matrix-oriented high-level programming language. The numerical method is described in great detail and is applied to many models in slow or fast rotation. We show that, although the method is perturbative, it gives accurate results for all practical purposes and should prove an efficient tool for computing rapidly rotating pulsars.

  13. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    Directory of Open Access Journals (Sweden)

    Ju-Chi Liu

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. The accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints.

  14. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    Science.gov (United States)

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. The accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints.
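
    A minimal sketch of the time-shift correlation idea follows: each epoch is correlated with a P300 template over a window of shifts, and the resulting correlation series would serve as classifier inputs. The template, shift range and noise level are invented for illustration.

```python
import numpy as np

def time_shift_correlation(epoch, template, max_shift):
    """Correlation of an EEG epoch with a P300 template at every time shift.

    Returns the series of correlation coefficients for shifts in
    [-max_shift, +max_shift]; in the record this series forms the input nodes
    of an ANN classifier.  Template and shift range here are illustrative.
    """
    series = []
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(epoch, s)
        series.append(np.corrcoef(shifted, template)[0, 1])
    return np.array(series)

# toy usage: a Gaussian "P300" bump whose latency jitters by 12 samples
t = np.arange(200)
template = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
epoch = np.roll(template, 12) + 0.3 * np.random.randn(200)
scores = time_shift_correlation(epoch, template, max_shift=25)
print("best shift:", np.argmax(scores) - 25)   # close to -12 (undoes the jitter)
```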

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data). In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  16. Implementation of Service Learning and Civic Engagement for Computer Information Systems Students through a Course Project at the Hashemite University

    Science.gov (United States)

    Al-Khasawneh, Ahmad; Hammad, Bashar K.

    2013-01-01

    Service learning methodologies provide information systems students with the opportunity to create and implement systems in real-world, public service-oriented social contexts. This paper presents a case study of integrating a service learning project into an undergraduate Computer Information Systems course titled "Information Systems"…

  17. Implementation of depression screening in antenatal clinics through tablet computers: results of a feasibility study.

    Science.gov (United States)

    Marcano-Belisario, José S; Gupta, Ajay K; O'Donoghue, John; Ramchandani, Paul; Morrison, Cecily; Car, Josip

    2017-05-10

    Mobile devices may facilitate depression screening in the waiting area of antenatal clinics. This can present implementation challenges, of which we focused on survey layout and technology deployment. We assessed the feasibility of using tablet computers to administer a socio-demographic survey, the Whooley questions and the Edinburgh Postnatal Depression Scale (EPDS) to 530 pregnant women attending National Health Service (NHS) antenatal clinics across England. We randomised participants to one of two layout versions of these surveys: (i) a scrolling layout where each survey was presented on a single screen; or (ii) a paging layout where only one question appeared on the screen at any given time. Overall, 85.10% of eligible pregnant women agreed to take part. Of these, 90.95% completed the study procedures. Approximately 23% of participants answered Yes to at least one Whooley question, and approximately 13% of them scored 10 points or more on the EPDS. We observed no association between survey layout and the responses given to the Whooley questions, the median EPDS scores, the number of participants at increased risk of self-harm, or the number of participants asking for technical assistance. However, we observed a difference in the number of participants at each EPDS scoring interval (p = 0.008); these intervals provide an indication of a woman's risk of depression. A scrolling layout resulted in faster completion times (median = 4 min 46 s) than a paging layout (median = 5 min 33 s) (p = 0.024). However, the clinical significance of this difference (47.5 s) is yet to be determined. Tablet computers can be used for depression screening in the waiting area of antenatal clinics. This requires careful consideration of clinical workflows and of technology-related issues such as connectivity and security. The association between survey layout and EPDS scoring intervals needs to be explored further to determine whether it corresponds to a genuine survey layout effect.

  18. Cost-effectiveness of implementing computed tomography screening for lung cancer in Taiwan.

    Science.gov (United States)

    Yang, Szu-Chun; Lai, Wu-Wei; Lin, Chien-Chung; Su, Wu-Chou; Ku, Li-Jung; Hwang, Jing-Shiang; Wang, Jung-Der

    2017-06-01

    A screening program for lung cancer requires more empirical evidence. Based on the experience of the National Lung Screening Trial (NLST), we developed a method to adjust for lead-time bias and quality-of-life changes when estimating the cost-effectiveness of implementing computed tomography (CT) screening in Taiwan. The target population was high-risk (≥30 pack-years) smokers between 55 and 75 years of age. From a nation-wide, 13-year follow-up cohort, we estimated quality-adjusted life expectancy (QALE), loss-of-QALE, and lifetime healthcare expenditures per case of lung cancer stratified by pathology and stage. Cumulative stage distributions for CT screening and no screening were assumed equal to those for CT screening and radiography screening in the NLST in order to estimate the savings of loss-of-QALE and the additional lifetime healthcare expenditures after CT screening. Costs attributable to screen-negative subjects, false-positive cases and radiation-induced lung cancer were included to obtain the incremental cost-effectiveness ratio from the public payer's perspective. The incremental costs were US$22,755 per person. After dividing this by the savings of loss-of-QALE (1.16 quality-adjusted life years (QALYs)), the incremental cost-effectiveness ratio was US$19,683 per QALY. This ratio would fall to US$10,947 per QALY if the stage distribution for CT screening were the same as that of screen-detected cancers in the NELSON trial. Low-dose CT screening for lung cancer among high-risk smokers would be cost-effective in Taiwan. As only about 5% of women in Taiwan are smokers, future research is necessary to identify the high-risk groups among non-smokers and increase the coverage.
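
    The headline figure is an incremental cost-effectiveness ratio; the short sketch below reproduces that arithmetic from the rounded values quoted in the abstract, so the result differs slightly from the published US$19,683/QALY.

```python
# ICER = incremental cost per person / savings in loss-of-QALE.
# Inputs are the rounded values quoted in the abstract.
incremental_cost = 22_755          # US$ per person screened
qale_loss_saved = 1.16             # QALYs saved per case of lung cancer (rounded)

icer = incremental_cost / qale_loss_saved
print(f"ICER ≈ US${icer:,.0f} per QALY gained")   # ≈ US$19,616 with the rounded inputs
```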

  19. Design, implementation and security of a typical educational laboratory computer network

    Directory of Open Access Journals (Sweden)

    Martin Pokorný

    2013-01-01

    A computer network used for laboratory training and for different types of network and security experiments represents a special environment where hazardous activities take place which must not affect any production system or network. It is common that students need administrator privileges in this case, which makes the overall security and maintenance of such a network a difficult task. We present our solution, which has proved its usability for more than three years. First of all, four user requirements on the laboratory network are defined (access to educational network devices, access to laboratory services, access to the Internet, and administrator privileges on the end hosts), and four essential security rules are stipulated (enforceable end-host security, controlled network access, a level of network access according to the user privilege level, and rules for hazardous experiments), which protect the rest of the laboratory infrastructure as well as the outer university network and the Internet. The main part of the paper is dedicated to the design and implementation of these usability and security rules. We present a physical diagram of a typical laboratory network based on multiple circuits connecting end hosts to different networks, and a layout of rack devices. After that, a topological diagram of the network is described, based on different VLANs and port-based access control using IEEE 802.1x/EAP-TLS/RADIUS authentication to achieve the defined levels of network access. In the second part of the paper, the latest innovation of our network is presented, covering a transition to system virtualization on the end-host devices; the inspiration came from a similar solution deployed at the Department of Telecommunications at Brno University of Technology. This improvement enables greater flexibility in end-host maintenance and simultaneous network access to the educational devices as well as to the Internet. In the end, a vision of a

  20. Computer Games in Pre-School Settings: Didactical Challenges when Commercial Educational Computer Games Are Implemented in Kindergartens

    Science.gov (United States)

    Vangsnes, Vigdis; Gram Okland, Nils Tore; Krumsvik, Rune

    2012-01-01

    This article focuses on the didactical implications when commercial educational computer games are used in Norwegian kindergartens by analysing the dramaturgy and the didactics of one particular game and the game in use in a pedagogical context. Our justification for analysing the game by using dramaturgic theory is that we consider the game to be…

  1. Implementing Computer Algebra Enabled Questions for the Assessment and Learning of Mathematics

    Science.gov (United States)

    Sangwin, Christopher J.; Naismith, Laura

    2008-01-01

    We present principles for the design of an online system to support computer-algebra-enabled questions for use within the teaching and learning of mathematics in higher education. The introduction of a computer algebra system (CAS) into a computer-aided assessment (CAA) system affords sophisticated response processing of student-provided answers…

  2. Implementations of the CC'01 Human-Computer Interaction Guidelines Using Bloom's Taxonomy

    Science.gov (United States)

    Manaris, Bill; Wainer, Michael; Kirkpatrick, Arthur E.; Stalvey, RoxAnn H.; Shannon, Christine; Leventhal, Laura; Barnes, Julie; Wright, John; Schafer, J. Ben; Sanders, Dean

    2007-01-01

    In today's technology-laden society, human-computer interaction (HCI) is an important knowledge area for computer scientists and software engineers. This paper surveys existing approaches to incorporating HCI into computer science (CS) and such related issues as the perceived gap between the interests of the HCI community and the needs of CS…

  3. Computational science and re-discovery: open-source implementation of ellipsoidal harmonics for problems in potential theory

    International Nuclear Information System (INIS)

    Bardhan, Jaydeep P; Knepley, Matthew G

    2012-01-01

    We present two open-source (BSD) implementations of ellipsoidal harmonic expansions for solving problems of potential theory using separation of variables. Ellipsoidal harmonics are used surprisingly infrequently, considering their substantial value for problems ranging in scale from molecules to the entire solar system. In this paper, we suggest two possible reasons for the paucity relative to spherical harmonics. The first is essentially historical—ellipsoidal harmonics developed during the late 19th century and early 20th, when it was found that only the lowest-order harmonics are expressible in closed form. Each higher-order term requires the solution of an eigenvalue problem, and tedious manual computation seems to have discouraged applications and theoretical studies. The second explanation is practical: even with modern computers and accurate eigenvalue algorithms, expansions in ellipsoidal harmonics are significantly more challenging to compute than those in Cartesian or spherical coordinates. The present implementations reduce the 'barrier to entry' by providing an easy and free way for the community to begin using ellipsoidal harmonics in actual research. We demonstrate our implementation using the specific and physiologically crucial problem of how charged proteins interact with their environment, and ask: what other analytical tools await re-discovery in an era of inexpensive computation?

  4. Multiple implementation of a reactor protection code in PHI2, PASCAL, and IFTRAN on the SIEMENS-330 computer

    International Nuclear Information System (INIS)

    Gmeiner, L.; Lemperle, W.; Voges, U.

    1978-01-01

    In safety-related computer applications, as in the case of the reactor protection system considered here, multi-computer systems are usually necessary for reasons of reliability and availability. The hardware structure of the protection system and the software requirements derived from it are explained. In order to study the effects of diversified programming of the three computers, the protection codes were implemented in the languages IFTRAN, PASCAL, and PHI2. According to the experience gained, diversified programming seems to be a proper means to prevent identical programming errors in all three computers on the one hand and to detect ambiguities in the specification on the other. Throughout the experiment the errors occurring were recorded in detail and are currently being evaluated. (orig./WB)

  5. Implementing an interval computation library for OCaml on x86/amd64 architectures

    OpenAIRE

    Alliot , Jean-Marc; Gotteland , Jean-Baptiste; Vanaret , Charlie; Durand , Nicolas; Gianazza , David

    2012-01-01

    In this paper, we present two implementations of interval arithmetic for OCaml on x86/amd64 architectures. The first implementation is a binding to the classical MPFI/MPFR library. It provides access to multi-precision floating-point arithmetic and multi-precision floating-point interval arithmetic. The second implementation has been natively written in assembly language for low-level functions and in OCaml for higher-level functions. It has proven as fast as classical C...

  6. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    Science.gov (United States)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple, low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation on a GPU is a non-trivial task that requires a thorough refactoring of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology the CPU code virtually only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data have to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors and less dependency on the underlying architecture and future evolution of GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC and in two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. Although OpenACC cannot match the performance of a CUDA-optimized implementation (×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with far simpler code and less implementation effort.
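
    For context, a compact NumPy sketch of the D8 flow-accumulation step follows; it is a generic textbook formulation on a synthetic DEM, not the CPU, OpenACC or CUDA code evaluated in the paper.

```python
import numpy as np

# D8 idea from O'Callaghan and Mark: each cell drains to its steepest-descent
# neighbour, and flow accumulation counts how many upstream cells drain through
# each cell.  Processing cells from highest to lowest elevation makes a single
# pass sufficient; the DEM here is synthetic.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_accumulation(dem):
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)              # every cell contributes itself
    order = np.argsort(dem, axis=None)[::-1]          # highest elevation first
    for flat in order:
        r, c = divmod(int(flat), cols)
        best, target = 0.0, None
        for dr, dc in NEIGHBOURS:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if drop > best:
                    best, target = drop, (rr, cc)
        if target is not None:                        # pits and edge minima keep their total
            acc[target] += acc[r, c]
    return acc

dem = np.add.outer(np.arange(6), np.arange(6)).astype(float)[::-1]  # tilted plane
print(d8_flow_accumulation(dem))
```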

  7. Performance comparison between Java and JNI for optimal implementation of computational micro-kernels

    OpenAIRE

    Halli , Nassim; Charles , Henri-Pierre; Méhaut , Jean-François

    2015-01-01

    General-purpose CPUs used in high-performance computing (HPC) support a vector instruction set and an out-of-order engine dedicated to increasing instruction-level parallelism. Hence, related optimizations are currently critical for improving the performance of applications requiring numerical computation. Moreover, the use of a Java runtime environment such as the HotSpot Java Virtual Machine (JVM) in high-performance computing is a promising alternative. It benefits ...

  8. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    Science.gov (United States)

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up tables to perform the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals using a Xilinx Zynq-7000 FPGA board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
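
    A hedged software sketch of the reduced-resolution DCT feature extraction is given below; the hardware dual look-up-table trick is not modelled here, and the window size and coefficient count are illustrative.

```python
import numpy as np
from scipy.fft import dct

def reduced_resolution_dct_features(ecog_window, n_coeffs=8):
    """Low-dimensional features from an ECoG window via a truncated DCT.

    ecog_window: (channels, samples) array.  Keeping only the first few DCT
    coefficients per channel is the 'reduced-resolution' idea; the record's
    hardware replaces the multiplications with dual look-up tables, which is
    not modelled here.  n_coeffs is an illustrative choice.
    """
    coeffs = dct(ecog_window, type=2, norm="ortho", axis=1)
    return coeffs[:, :n_coeffs].ravel()

# toy usage: 16 channels, 1 s at 500 Hz
window = np.random.randn(16, 500)
print(reduced_resolution_dct_features(window).shape)   # (128,)
```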

  9. Models and methods for design and implementation of computer based control and monitoring systems for production cells

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk

    This dissertation is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer ... through the implementation of two cell control systems for robot welding cells in production at Odense Steel Shipyard. It is concluded that cell control technology provides for increased performance in production systems, and that the Cell Control Engineering concept reduces the effort for providing high ... quality and high functionality cell control solutions for the industry.

  10. The investigation and implementation of real-time face pose and direction estimation on mobile computing devices

    Science.gov (United States)

    Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae

    2012-04-01

    Mobile computing devices have many limitations, such as a relatively small user interface and slow computing speed. Augmented reality applications typically require face pose estimation, which can also serve as an HCI and entertainment tool. A real-time implementation of head pose estimation on relatively resource-limited mobile platforms must face several constraints while retaining sufficient estimation accuracy. The proposed face pose estimation method meets this objective. Experimental results on a test Android mobile device show that it delivers satisfactory performance in real time and with good accuracy.

  11. Evaluating the Implementation of International Computing Curricular in African Universities: A Design-Reality Gap Approach

    Science.gov (United States)

    Dasuki, Salihu Ibrahim; Ogedebe, Peter; Kanya, Rislana Abdulazeez; Ndume, Hauwa; Makinde, Julius

    2015-01-01

    Efforts are being made by universities in developing countries to ensure that their graduates are not left behind in the competitive global information society, and thus many have adopted international computing curricula for their computing degree programs. However, adopting these international curricula seems to be very challenging for developing countries…

  12. Longitudinal Study of Factors Impacting the Implementation of Notebook Computer Based CAD Instruction

    Science.gov (United States)

    Goosen, Richard F.

    2009-01-01

    This study provides information for higher education leaders who have conducted or are considering conducting Computer-Aided Design (CAD) instruction using student-owned notebook computers. Survey data were collected during the first 8 years of a pilot program requiring engineering technology students at a four-year public university to acquire a notebook…

  13. Defragging Computer/Videogame Implementation and Assessment in the Social Studies

    Science.gov (United States)

    McBride, Holly

    2014-01-01

    Students in this post-industrial technological age require opportunities for the acquisition of new skills, especially in the marketplace of innovation. A pedagogical strategy that is becoming more and more popular within social studies classrooms is the use of computer and video games as enhancements to everyday lesson plans. Computer/video games…

  14. From Archi Torture to Architecture: Undergraduate Students Design and Implement Computers Using the Multimedia Logic Emulator

    Science.gov (United States)

    Stanley, Timothy D.; Wong, Lap Kei; Prigmore, Daniel; Benson, Justin; Fishler, Nathan; Fife, Leslie; Colton, Don

    2007-01-01

    Students learn better when they both hear and do. In computer architecture courses "doing" can be difficult in small schools without hardware laboratories hosted by computer engineering, electrical engineering, or similar departments. Software solutions exist. Our success with George Mills' Multimedia Logic (MML) is the focus of this paper. MML…

  15. Successful Implementation of a Computer-Supported Collaborative Learning System in Teaching E-Commerce

    Science.gov (United States)

    Ngai, E. W. T.; Lam, S. S.; Poon, J. K. L.

    2013-01-01

    This paper describes the successful application of a computer-supported collaborative learning system in teaching e-commerce. The authors created a teaching and learning environment for 39 local secondary schools to introduce e-commerce using a computer-supported collaborative learning system. This system is designed to equip students with…

  16. iSERVO: Implementing the International Solid Earth Research Virtual Observatory by Integrating Computational Grid and Geographical Information Web Services

    Science.gov (United States)

    Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry

    2006-12-01

    We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.

  17. Implementing a computer-assisted telephone interview (CATI) system to increase colorectal cancer screening: a process evaluation.

    Science.gov (United States)

    White, Mary Jo; Stark, Jennifer R; Luckmann, Roger; Rosal, Milagros C; Clemow, Lynn; Costanza, Mary E

    2006-06-01

    Computer-assisted telephone interviewing (CATI) systems used by telephone counselors (TCs) may be an efficient mechanism for counseling patients on cancer and recommended preventive screening tests, extending a primary care provider's reach to his or her patients. This paper reports on the implementation process of such a CATI system for promoting colorectal cancer (CRC) screening. The process evaluation assessed three components of the intervention: message production, program implementation and audience reception. Of 1181 potentially eligible patients, 1025 (87%) were reached by the TCs and 725 of those patients (71%) were eligible to receive counseling. Five hundred eighty-two (80%) patients agreed to counseling. It is feasible to design and use CATI systems for prevention counseling of patients in primary care practices. CATI systems have the potential to be used as a referral service by primary care providers and health care organizations for patient education.

  18. Development and implementation of a low cost micro computer system for LANDSAT analysis and geographic data base applications

    Science.gov (United States)

    Faust, N.; Jordon, L.

    1981-01-01

    Since the implementation of the GRID and IMGRID computer programs for multivariate spatial analysis in the early 1970s, geographic data analysis has moved from large computers to minicomputers and now to microcomputers, with a radical reduction in the costs associated with planning analyses. Programs designed to process LANDSAT data for use as one element in a geographic data base were employed once NIMGRID (new IMGRID), a raster-oriented geographic information system, was implemented on the microcomputer. Programs for training-field selection, supervised and unsupervised classification, and image enhancement were added. Enhancements to the color graphics capabilities of the microsystem allow display of three channels of LANDSAT data in color-infrared format. The basic microcomputer hardware needed to run NIMGRID and most LANDSAT analyses is listed, as well as the software available for LANDSAT processing.

  19. Implementation of the equivalence theory inside the computational chain DRAGON/DONJON-NDF

    International Nuclear Information System (INIS)

    Dufour, P.

    2005-01-01

    The work accomplished within the scope of this master's project consists in introducing the equivalence theory into the computational chain DRAGON/DONJON-NDF. This theory takes into account the possible discontinuity of the homogeneous flux at the surfaces of problems that involve a homogenization procedure. To do so, the theory introduces new factors called discontinuity factors, which in principle yield more exact solutions. Because we use the cell code DRAGON to generate all our homogeneous parameters, we also used DRAGON to compute the heterogeneous surface fluxes, which are essential for obtaining the discontinuity factors. The project has been divided into two parts. The first part consists in computing the heterogeneous surface fluxes with the cell code DRAGON. In the second part of the project we performed reactor computations with the code DONJON-NDF (over a CANDU-6 geometry) with discontinuity factors and compared the results thus obtained with those computed without discontinuity factors.

  20. Computationally efficient implementation of sarse-tap FIR adaptive filters with tap-position control on intel IA-32 processors

    OpenAIRE

    Hirano, Akihiro; Nakayama, Kenji

    2008-01-01

    This paper presents a computationally efficient implementation of sparse-tap FIR adaptive filters with tap-position control on Intel IA-32 processors with single-instruction multiple-data (SIMD) capability. In order to overcome the random-order memory access which prevents vectorization, block-based processing and a re-ordering buffer are introduced. Dynamic register allocation and the use of memory-to-register operations help maximize the loop-unrolling level. Up to 66 percent speedup ...

  1. Incremental cost of department-wide implementation of a picture archiving and communication system and computed radiography.

    Science.gov (United States)

    Pratt, H M; Langlotz, C P; Feingold, E R; Schwartz, J S; Kundel, H L

    1998-01-01

    To determine the incremental cash flows associated with department-wide implementation of a picture archiving and communication system (PACS) and computed radiography (CR) at a large academic medical center. The authors determined all capital and operational costs associated with PACS implementation during an 8-year time horizon. Economic effects were identified, adjusted for time value, and used to calculate net present values (NPVs) for each section of the department of radiology and for the department as a whole. The chest-bone section used the most resources. Changes in cost assumptions for the chest-bone section had a dominant effect on the department-wide NPV. The base-case NPV (i.e., that determined by using the initial assumptions) was negative, indicating that additional net costs are incurred by the radiology department from PACS implementation. PACS and CR provide cost savings only when a 12-year hardware life span is assumed, when CR equipment is removed from the analysis, or when digitized long-term archives are compressed at a rate of 10:1. Full PACS-CR implementation would not provide cost savings for a large, subspecialized department. However, institutions that are committed to CR implementation (for whom CR implementation would represent a sunk cost) or institutions that are able to archive images by using image compression will experience cost savings from PACS.

  2. Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons

    Directory of Open Access Journals (Sweden)

    Ernestina Martel

    2018-06-01

    Dimensionality reduction represents a critical preprocessing step to increase the efficiency and the performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms, such as Principal Component Analysis (PCA), suffer from their computationally demanding nature, making it advisable to implement them on high-performance computer architectures for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely, an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms and hence reducing the time required to process a given hyperspectral image. Moreover, the results achieved with different hyperspectral images have been compared with those obtained with a recently published field programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis highlighting the pros and cons of each option.
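
    Because the record describes PCA only at a high level, a minimal single-threaded NumPy sketch of the underlying computation may help for orientation. It is illustrative only: the image size and band count below are made up, and the GPU, manycore, and FPGA parallelizations discussed in the paper are not modeled.

    import numpy as np

    def pca_reduce(cube, n_components=3):
        # cube: (rows, cols, bands) hyperspectral image -> (rows, cols, n_components)
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(np.float64)
        X -= X.mean(axis=0)                        # band-wise mean removal
        cov = np.cov(X, rowvar=False)              # bands x bands covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
        top = eigvecs[:, ::-1][:, :n_components]   # leading principal directions
        return (X @ top).reshape(rows, cols, n_components)

    cube = np.random.rand(64, 64, 100)             # synthetic stand-in for a real scene
    print(pca_reduce(cube).shape)                  # (64, 64, 3)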

  3. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
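
    The decomposition claimed here (1D FFTs along one dimension, an all-to-all redistribution, then 1D FFTs along the next dimension) can be checked on a single machine, where the network exchange degenerates to a transpose. The NumPy sketch below is only an illustration of that decomposition, not the patented multi-node implementation.

    import numpy as np

    def fft2_by_decomposition(a):
        step1 = np.fft.fft(a, axis=1)              # 1D FFTs along the locally held dimension
        redistributed = step1.T.copy()             # stand-in for the "all-to-all" network exchange
        step2 = np.fft.fft(redistributed, axis=1)  # 1D FFTs along the second dimension
        return step2.T

    a = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)
    assert np.allclose(fft2_by_decomposition(a), np.fft.fft2(a))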

  4. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  5. Tensor Arithmetic, Geometric and Mathematic Principles of Fluid Mechanics in Implementation of Direct Computational Experiments

    Directory of Open Access Journals (Sweden)

    Bogdanov Alexander

    2016-01-01

    The architecture of a digital computing system determines the technical foundation of a unified mathematical language for exact arithmetic-logical description of phenomena and laws of continuum mechanics for applications in fluid mechanics and theoretical physics. The deep parallelization of the computing processes results in functional programming at a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for the true reproduction of the fundamental laws of physics and continuum mechanics.

  6. The Needs of Virtual Machines Implementation in Private Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Edy Kristianto

    2015-12-01

    The Internet of Things (IoT) has become a major goal of the development of information and communication technology. Cloud computing has a very important role in supporting the IoT, because cloud computing allows services to be provided in the form of infrastructure (IaaS), platform (PaaS), and software (SaaS) for its users. One of the fundamental services is infrastructure as a service (IaaS). This study analyzed the requirements, based on the NIST framework, for realizing infrastructure as a service in the form of virtual machines built in a cloud computing environment.

  7. The Pitzer-Lee-Kesler-Teja (PLKT) Strategy and Its Implementation by Meta-Computing Software

    Czech Academy of Sciences Publication Activity Database

    Smith, W. R.; Lísal, Martin; Missen, R. W.

    2001-01-01

    Roč. 35, č. 1 (2001), s. 68-73 ISSN 0009-2479 Institutional research plan: CEZ:AV0Z4072921 Keywords: The Pitzer-Lee-Kesler-Teja (PLKT) strategy * implementation Subject RIV: CF - Physical; Theoretical Chemistry

  8. Implementation of GAMMON - An efficient load balancing strategy for a local computer system

    Science.gov (United States)

    Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.

    1989-01-01

    GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
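
    The abstract does not give the protocol details, so the toy below is only a generic delay-based responder election in the spirit of a broadcast search, not the actual GAMMON algorithm: each host schedules its reply to a broadcast request with a delay proportional to its load, so the least-loaded host answers first and its reply suppresses the others. Host names and load values are invented.

    import heapq
    import random

    def find_min_loaded(loads, delay_per_unit_load=1.0):
        # Virtual reply schedule produced by the broadcast request.
        schedule = [(load * delay_per_unit_load, host) for host, load in loads.items()]
        heapq.heapify(schedule)
        reply_time, winner = heapq.heappop(schedule)   # earliest reply wins, the rest stay silent
        return winner, reply_time

    loads = {f"host{i}": random.uniform(0.0, 10.0) for i in range(8)}
    winner, t = find_min_loaded(loads)
    print(winner, winner == min(loads, key=loads.get))  # prints the winner and True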

  9. Implementation of computer-based patient records in primary care: the societal health economic effects.

    OpenAIRE

    Arias-Vimárlund, V.; Ljunggren, M.; Timpka, T.

    1996-01-01

    OBJECTIVE: Exploration of the societal health economic effects occurring during the first year after implementation of Computerised Patient Records (CPRs) at Primary Health Care (PHC) centres. DESIGN: Comparative case studies of practice processes and their consequences one year after CPR implementation, using the constant comparison method. Application of transaction-cost analyses at a societal level on the results. SETTING: Two urban PHC centres under a managed care contract in Ostergötland...

  10. Design and implementation of a medium speed communications interface and protocol for a low cost, refreshed display computer

    Science.gov (United States)

    Phyne, J. R.; Nelson, M. D.

    1975-01-01

    The design and implementation of hardware and software systems involved in using a 40,000 bit/second communication line as the connecting link between an IMLAC PDS 1-D display computer and a Univac 1108 computer system were described. The IMLAC consists of two independent processors sharing a common memory. The display processor generates the deflection and beam control currents as it interprets a program contained in the memory; the minicomputer has a general instruction set and is responsible for starting and stopping the display processor and for communicating with the outside world through the keyboard, teletype, light pen, and communication line. The processing time associated with each data byte was minimized by designing the input and output processes as finite state machines which automatically sequence from each state to the next. Several tests of the communication link and the IMLAC software were made using a special low capacity computer grade cable between the IMLAC and the Univac.

  11. Wearable computing from modeling to implementation of wearable systems based on body sensor networks

    CERN Document Server

    Fortino, Giancarlo; Galzarano, Stefano

    2018-01-01

    This book provides the most up-to-date research and development on wearable computing, wireless body sensor networks, wearable systems integrated with mobile computing, wireless networking and cloud computing. This book has a specific focus on advanced methods for programming Body Sensor Networks (BSNs) based on the reference SPINE project. It features an on-line website (http://spine.deis.unical.it) to support readers in developing their own BSN application/systems and covers new emerging topics on BSNs such as collaborative BSNs, BSN design methods, autonomic BSNs, integration of BSNs and pervasive environments, and integration of BSNs with cloud computing. The book provides a description of real BSN prototypes with the possibility to see on-line demos and download the software to test them on specific sensor platforms and includes case studies for more practical applications. * Provides a future roadmap by learning advanced technology and open research issues * Gathers the background knowledge to tackl...

  12. Computer simulation of processes and work implementation zones at Ukryttya object

    International Nuclear Information System (INIS)

    Klyuchnikov, A.A.; Rud'ko, V.M.; Batij, V.G.; Pavlovskij, L.I.; Podbereznyj, S.S.

    2004-01-01

    The need for wide application of computer graphics during the conversion of the Ukryttya object into an ecologically safe system is substantiated, and examples are given of its use in designing the project for stabilization of the Ukryttya object's building structures

  13. Two-Language, Two-Paradigm Introductory Computing Curriculum Model and its Implementation

    OpenAIRE

    Zanev, Vladimir; Radenski, Atanas

    2011-01-01

    This paper analyzes difficulties with the introduction of object-oriented concepts in introductory computing education and then proposes a two-language, two-paradigm curriculum model that alleviates such difficulties. Our two-language, two-paradigm curriculum model begins with teaching imperative programming using Python programming language, continues with teaching object-oriented computing using Java, and concludes with teaching object-oriented data structures with Java.

  14. A Feasibility Study of Implementing a Bring-Your-Own-Computing-Device Policy

    Science.gov (United States)

    2013-12-01

    telecom charges is applicable to a corporate environment that allows for telecommuting or where employees require data access to their devices while...do not want to try to control their students’ computers, but the focus of BYOD in education is generally on educational outcomes (Sweeney, 2012). C...of the computer system, while application software is responsible for controlling the specific command tasks. Therefore, the relationship between

  15. The Implementation of Computer Platform for Foundries Cooperating in a Supply Chain

    Directory of Open Access Journals (Sweden)

    Wilk-Kołodziejczyk D.

    2014-08-01

    This article presents a practical solution in the form of implementation of agent-based platform for the management of contracts in a network of foundries. The described implementation is a continuation of earlier scientific work in the field of design and theoretical system specification for cooperating companies [1]. The implementation addresses key design assumptions - the system is implemented using multi-agent technology, which offers the possibility of decentralisation and distributed processing of specified contracts and tenders. The implemented system enables the joint management of orders for a network of small and medium-sized metallurgical plants, while providing them with greater competitiveness and the ability to carry out large procurements. The article presents the functional aspects of the system - the user interface and the principle of operation of individual agents that represent businesses seeking potential suppliers or recipients of services and products. Additionally, the system is equipped with a bi-directional agent translating standards based on ontologies, which aims to automate the decision-making process during tender specifications as a response to the request.

  16. On the implementation of the Ford-Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Algorithms for optimization in networks and directed graphs find broad application in solving practical tasks. However, along with the large-scale introduction of information technologies into human activity, requirements on input data volumes and solution retrieval rates are increasing. Although a large number of algorithms for various models of computers and computing systems have by now been studied and implemented, solving key optimization problems at realistic task dimensions remains difficult. In this regard, the search for new and more efficient computing structures, as well as updates of known algorithms, is of great current interest. The work considers an implementation of an algorithm for finding the maximum flow in a directed graph on the Multiple Instruction and Single Data (MISD) computer system developed at BMSTU. A key feature of this architecture is deep hardware support for operations over sets and data structures. Storage and access functions are realized in a specialized structure-processing processor (SP), which is capable of performing at the hardware level such operations as add, delete, search, intersect, complement, merge, and others. The advantage of such a system is the possibility of executing the parts of a computing task that access sets and data structures in parallel with the arithmetic and logical processing of information. Previous works presented the general principles of organizing the computing process and features of programs implemented in the MISD system, described the structure and principles of functioning of the structure-processing processor, showed the general principles of solving graph tasks in such a system, and experimentally studied the efficiency of the resulting algorithms. This work gives the command formats of the SP processor, offers a technique for updating the algorithms realized in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
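
    For readers unfamiliar with the underlying algorithm, a plain sequential Ford-Fulkerson sketch (using breadth-first augmenting paths, i.e. the Edmonds-Karp variant) is shown below; the MISD structure processor and its set operations, which are the subject of the record, are not modeled here, and the small capacity matrix is invented for illustration.

    from collections import deque

    def max_flow(capacity, source, sink):
        n = len(capacity)
        residual = [row[:] for row in capacity]
        flow = 0
        while True:
            # Breadth-first search for an augmenting path in the residual graph.
            parent = [-1] * n
            parent[source] = source
            queue = deque([source])
            while queue and parent[sink] == -1:
                u = queue.popleft()
                for v in range(n):
                    if parent[v] == -1 and residual[u][v] > 0:
                        parent[v] = u
                        queue.append(v)
            if parent[sink] == -1:
                return flow                       # no augmenting path is left
            # Find the bottleneck capacity along the path, then push flow along it.
            bottleneck, v = float("inf"), sink
            while v != source:
                bottleneck = min(bottleneck, residual[parent[v]][v])
                v = parent[v]
            v = sink
            while v != source:
                residual[parent[v]][v] -= bottleneck
                residual[v][parent[v]] += bottleneck
                v = parent[v]
            flow += bottleneck

    capacity = [[0, 3, 2, 0],
                [0, 0, 1, 2],
                [0, 0, 0, 3],
                [0, 0, 0, 0]]
    print(max_flow(capacity, 0, 3))               # 5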

  17. Quantum computation with classical light: Implementation of the Deutsch–Jozsa algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Garcia, Benjamin [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); McLaren, Melanie [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Goyal, Sandeep K. [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); Institute of Quantum Science and Technology, University of Calgary, Alberta T2N 1N4 (Canada); Hernandez-Aranda, Raul I. [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); Forbes, Andrew [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Konrad, Thomas, E-mail: konradt@ukzn.ac.za [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); National Institute of Theoretical Physics, Durban Node, Private Bag X54001, Durban 4000 (South Africa)

    2016-05-20

    Highlights: • An implementation of the Deutsch–Jozsa algorithm using classical optics is proposed. • Constant and certain balanced functions can be encoded and distinguished efficiently. • The encoding and the detection process does not require to access single path qubits. • While the scheme might be scalable in principle, it might not be in practice. • We suggest a generalisation of the Deutsch–Jozsa algorithm and its implementation. - Abstract: We propose an optical implementation of the Deutsch–Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.

  18. Quantum computation with classical light: Implementation of the Deutsch–Jozsa algorithm

    International Nuclear Information System (INIS)

    Perez-Garcia, Benjamin; McLaren, Melanie; Goyal, Sandeep K.; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas

    2016-01-01

    Highlights: • An implementation of the Deutsch–Jozsa algorithm using classical optics is proposed. • Constant and certain balanced functions can be encoded and distinguished efficiently. • The encoding and the detection process does not require to access single path qubits. • While the scheme might be scalable in principle, it might not be in practice. • We suggest a generalisation of the Deutsch–Jozsa algorithm and its implementation. - Abstract: We propose an optical implementation of the Deutsch–Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.
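
    Independent of the optical implementation proposed in these two records, the decision made by the algorithm can be reproduced with a tiny state-vector calculation: after the first Hadamard layer, a phase oracle, and the final Hadamard layer, the amplitude of the all-zero outcome is +/-1 for a constant function and exactly 0 for a balanced one. The sketch below assumes the standard phase-oracle form of the algorithm and uses invented example functions.

    import numpy as np

    def deutsch_jozsa_is_constant(f, n):
        xs = np.arange(2 ** n)
        phases = np.array([(-1) ** f(x) for x in xs], dtype=float)
        amp_zero = phases.sum() / (2 ** n)        # amplitude of |0...0> after the final H^n
        return np.isclose(abs(amp_zero), 1.0)     # True -> constant, False -> balanced

    n = 4
    constant_f = lambda x: 1                      # a constant function
    balanced_f = lambda x: bin(x).count("1") % 2  # parity, a balanced function
    print(deutsch_jozsa_is_constant(constant_f, n))  # True
    print(deutsch_jozsa_is_constant(balanced_f, n))  # False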

  19. Approach to Computer Implementation of Mathematical Model of 3-Phase Induction Motor

    Science.gov (United States)

    Pustovetov, M. Yu

    2018-03-01

    This article discusses the development of a computer model of an induction motor based on the mathematical model in a three-phase stator reference frame. It uses an approach that allows two methods to be combined during preparation of the computer model: visual circuit programming (in the form of electrical schematics) and logical programming (in the form of block diagrams). The approach enables easy integration of the induction motor model as part of more complex models of electrical complexes and systems. The developed computer model gives the user access to the beginning and the end of the winding of each of the three phases of the stator and rotor. This property is particularly important when considering asymmetric modes of operation or when the motor is powered by special semiconductor converter circuitry.

  20. The Implementation of Blended Learning Using Android-Based Tutorial Video in Computer Programming Course II

    Science.gov (United States)

    Huda, C.; Hudha, M. N.; Ain, N.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

    The computer programming course is theoretical. Sufficient practice is necessary to facilitate conceptual understanding and to encourage creativity in designing computer programs and animations. The development of a tutorial video for Android-based blended learning is needed as a guide for students. Using Android-based instructional material, students can learn independently anywhere and anytime. The tutorial video can facilitate students’ understanding of the concepts, materials, and procedures of program and animation making in detail. This study employed a Research and Development method adapting Thiagarajan’s 4D model. The developed Android-based instructional material and tutorial video were validated by experts in instructional media and experts in physics education. The expert validation results showed that the Android-based material was comprehensive and very feasible. The tutorial video was deemed feasible as it received an average score of 92.9%. It was also revealed that students’ conceptual understanding, skills, and creativity in designing computer programs and animations improved significantly.

  1. The design, marketing, and implementation of online continuing education about computers and nursing informatics.

    Science.gov (United States)

    Sweeney, Nancy M; Saarmann, Lembi; Seidman, Robert; Flagg, Joan

    2006-01-01

    Asynchronous online tutorials using PowerPoint slides with accompanying audio to teach practicing nurses about computers and nursing informatics were designed for this project, which awarded free continuing education units to completers. Participants had control over the advancement of slides, with the ability to repeat when desired. Graphics were kept to a minimum; thus, the program ran smoothly on computers using dial-up modems. The tutorials were marketed in live meetings and through e-mail messages on nursing listservs. Findings include that the enrollment process must be automated and instantaneous, the program must work from every type of computer and Internet connection, marketing should be live and electronic, and workshops should be offered to familiarize nurses with the online learning system.

  2. The Geospatial Data Cloud: An Implementation of Applying Cloud Computing in Geosciences

    Directory of Open Access Journals (Sweden)

    Xuezhi Wang

    2014-11-01

    The rapid growth in the volume of remote sensing data and its increasing computational requirements bring huge challenges for researchers as traditional systems cannot adequately satisfy the huge demand for service. Cloud computing has the advantage of high scalability and reliability, which can provide firm technical support. This paper proposes a highly scalable geospatial cloud platform named the Geospatial Data Cloud, which is constructed based on cloud computing. The architecture of the platform is first introduced, and then two subsystems, the cloud-based data management platform and the cloud-based data processing platform, are described.  ––– This paper was presented at the First Scientific Data Conference on Scientific Research, Big Data, and Data Science, organized by CODATA-China and held in Beijing on 24-25 February, 2014.

  3. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    Science.gov (United States)

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculations components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in speedup of several folds. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential, and electric field distributions are calculated for the bovine mitochondrial supercomplex illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.

  4. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  5. The European Patent Office and its handling of Computer Implemented Inventions

    CERN Multimedia

    CERN. Geneva; Weber, Georg

    2014-01-01

    Georg Weber joined the EPO in 1988 and has been a director for more than 10 years. He started his career in the office as a patent examiner and worked in different technical areas of chemistry and mechanics. Birger Koblitz is a patent examiner at the EPO in Munich in the technical field of computer security. Before joining the office in 2009, he earned a PhD in Experimental Particle Physics from the University of Hamburg, and worked at CERN in the IT department supporting the experiments in their Grid Computing activities...

  6. Toward Implementing Computer-Assisted Foreign Language Assessment in the Official Spanish University Entrance Examination

    Science.gov (United States)

    Sanz, Ana Gimeno; Pavón, Ana Sevilla

    2015-01-01

    In 2008 the Spanish Government announced the inclusion of an oral section in the foreign language exam of the National University Entrance Examination during the year 2012 (Royal Decree 1892/2008, of 14 November 2008, Ministerio de Educación, Gobierno de España, 2008). Still awaiting the implementation of these changes, and in an attempt to offer…

  7. Implementing a low-latency parallel graphic equalizer with heterogeneous computing

    NARCIS (Netherlands)

    Norilo, Vesa; Verstraelen, Martinus Johannes Wilhelmina; Valimaki, Vesa; Svensson, Peter; Kristiansen, Ulf

    2015-01-01

    This paper describes the implementation of a recently introduced parallel graphic equalizer (PGE) in a heterogeneous way. The control and audio signal processing parts of the PGE are distributed to a PC and to a signal processor, of WaveCore architecture, respectively. This arrangement is

  8. Implementation of Constrained DFT for Computing Charge Transfer Rates within the Projector Augmented Wave Method

    DEFF Research Database (Denmark)

    Melander, Marko; Jónsson, Elvar Örn; Mortensen, Jens Jørgen

    2016-01-01

    molecules to periodic systems in one-, two-, or three-dimensions. As such, this implementation is relevant for a wide variety of applications. We also present how to extract the electronic coupling element and reorganization energy from the resulting diabatic cDFT-PAW wave functions for the parametrization...

  9. Design, Implementation, and Characterization of a Dedicated Breast Computed Mammotomography System for Enhanced Lesion Imaging

    National Research Council Canada - National Science Library

    McKinley, Randolph L

    2006-01-01

    .... Half cone-beam orbits have been implemented and investigated and have indicated they are feasible for a wide range of breast sizes. Future studies will focus on characterizing the system in terms of dose efficiency, contrast sensitivity, and evaluation for a range of breast sizes and compositions. Patient bed optimization will also be investigated.

  10. Implementation of active electrodes on a brain-computer interface and its application as P300 speller

    International Nuclear Information System (INIS)

    Aguero Rojas, Eliecer

    2013-01-01

    A brain-computer interface has been implemented using open hardware called Modular EEG, created by The OpenEEG Project and distributed by the company Olimex Ltd. That hardware was modified to use active electrodes, instead of passive electrodes, for acquiring electroencephalographic signals. The application given to the interface is a P300 speller, for which the open software BCI2000, which has the necessary configuration for the application, was used. The P300 speller used a protocol in each session so that the method could be standardized across different users. Evaluating the results with three neuropsychological tests was among the objectives; however, this was not achieved because of the limited time available for the project implementation. A brain-computer interface with passive electrodes, implemented in the same way as the BCI with active electrodes, was also used and worked better than the interface with active electrodes. One of the major advantages observed for passive electrodes over active ones is their size: the passive electrodes are smaller and therefore easier to place while avoiding the user's hair, which otherwise increases the noise in the signal. (author)

  11. A Multistep Maturity Model for the Implementation of Electronic and Computable Diagnostic Clinical Prediction Rules (eCPRs).

    Science.gov (United States)

    Corrigan, Derek; McDonnell, Ronan; Zarabzadeh, Atieh; Fahey, Tom

    2015-01-01

    The use of Clinical Prediction Rules (CPRs) has been advocated as one way of implementing actionable evidence-based rules in clinical practice. The current highly manual nature of deriving CPRs makes them difficult to use and maintain. Addressing the known limitations of CPRs requires implementing more flexible and dynamic models of CPR development. We describe the application of Information and Communication Technology (ICT) to provide a platform for the derivation and dissemination of CPRs derived through analysis and continual learning from electronic patient data. We propose a multistep maturity model for constructing electronic and computable CPRs (eCPRs). The model has six levels - from the lowest level of CPR maturity (literature-based CPRs) to a fully electronic and computable service-oriented model of CPRs that are sensitive to specific demographic patient populations. We describe examples of implementations of the core model components - focusing on CPR representation, interoperability, electronic dissemination, CPR learning, and user interface requirements. The traditional focus on derivation and narrow validation of CPRs has severely limited their wider acceptance. The evolution and maturity model described here outlines a progression toward eCPRs consistent with the vision of a learning health system (LHS) - using central repositories of CPR knowledge, accessible open standards, and generalizable models to avoid repetition of previous work. This is useful for developing more ambitious strategies to address limitations of the traditional CPR development life cycle. The model described here is a starting point for promoting discussion about what a more dynamic CPR development process should look like.

  12. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
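
    As a rough illustration of the workflow described above, and not the authors' implementation, the sketch below fits a meta-model to a limited number of runs of a stand-in "expensive" model, estimates a first-order sensitivity index for one input from the surrogate, and bootstraps over the training runs to attach a confidence interval; the test function, sample sizes, and surrogate choice are all invented.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    def expensive_model(X):
        # Cheap stand-in for a computationally demanding simulation code.
        return np.sin(X[:, 0]) + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

    def first_order_index(surrogate, i, n_outer=100, n_inner=200):
        # Estimate Var(E[Y | X_i]) / Var(Y) by Monte Carlo on the surrogate.
        cond_means = np.empty(n_outer)
        for k, v in enumerate(rng.uniform(-1, 1, n_outer)):
            Xs = rng.uniform(-1, 1, (n_inner, 3))
            Xs[:, i] = v
            cond_means[k] = surrogate.predict(Xs).mean()
        total_var = surrogate.predict(rng.uniform(-1, 1, (5000, 3))).var()
        return cond_means.var() / total_var

    X_train = rng.uniform(-1, 1, (300, 3))        # the limited "expensive" design of experiments
    y_train = expensive_model(X_train)

    estimates = []
    for _ in range(20):                           # bootstrap over the training runs
        idx = rng.integers(0, len(X_train), len(X_train))
        surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
        surrogate.fit(X_train[idx], y_train[idx])
        estimates.append(first_order_index(surrogate, i=1))
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"index for input 1: {np.mean(estimates):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")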

  13. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    Science.gov (United States)

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  14. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    Science.gov (United States)

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  15. Research and realization implementation of monitor technology on illegal external link of classified computer

    Science.gov (United States)

    Zhang, Hong

    2017-06-01

    In recent years, with the continuous development and application of network technology, network security has gradually entered people's field of vision. Unauthorized external connections from hosts on internal networks are an important source of network security threats. At present, most organizations pay a certain degree of attention to network security and have taken many measures to prevent network security problems, such as physically isolating the internal network and installing firewalls at the exit. However, these measures to improve network security are often undermined by human behavior that does not comply with safety rules. For example, a host accessing the Internet through wireless access or a dual network card inadvertently forms a two-way connection between the external network and the computer [1]. As a result, important documents and confidential information can leak even when the user is completely unaware. Monitoring technology for illegal external connections of classified computers can largely prevent such violations by monitoring the behavior of the offending connection. In this paper, we mainly research and discuss monitoring technology for classified computers.

  16. Design and implementation of the modified signed digit multiplication routine on a ternary optical computer.

    Science.gov (United States)

    Xu, Qun; Wang, Xianchao; Xu, Chao

    2017-06-01

    Multiplication with traditional electronic computers faces low calculation accuracy and long computation delays. To overcome these problems, the modified signed digit (MSD) multiplication routine is established based on the MSD system and the carry-free adder. Its parallel algorithm and optimization techniques are also studied in detail. With the help of a ternary optical computer's characteristics, the structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with the expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and fewer calculating delays.

  17. Using Maple to Implement eLearning Integrated with Computer Aided Assessment

    Science.gov (United States)

    Blyth, Bill; Labovic, Aleksandra

    2009-01-01

    Advanced mathematics courses have been developed and refined by the first author, using an action research methodology, for more than a decade. These courses use the computer algebra system (CAS) Maple in an "immersion mode" where all presentations and student work are done using Maple. Assignments and examinations are Maple files downloaded from…

  18. Implementing a Computer Program that Captures Students' Work on Customizable, Periodic-System Data Assignments

    Science.gov (United States)

    Wiediger, Susan D.

    2009-01-01

    The periodic table and the periodic system are central to chemistry and thus to many introductory chemistry courses. A number of existing activities use various data sets to model the development process for the periodic table. This paper describes an image arrangement computer program developed to mimic a paper-based card sorting periodic table…

  19. Implementation of computer codes for performance assessment of the Republic repository of LLW/ILW Mochovce

    International Nuclear Information System (INIS)

    Hanusik, V.; Kopcani, I.; Gedeon, M.

    2000-01-01

    This paper describes selection and adaptation of computer codes required to assess the effects of radionuclide release from Mochovce Radioactive Waste Disposal Facility. The paper also demonstrates how these codes can be integrated into performance assessment methodology. The considered codes include DUST-MS for source term release, MODFLOW for ground-water flow and BS for transport through biosphere and dose assessment. (author)

  20. Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-09-11

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC’s business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive

  1. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Phillips, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wampler, Cheryl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meisner, Robert [National Nuclear Security Administration (NNSA), Washington, DC (United States)

    2010-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality, and scientific details); to quantify critical margins and uncertainties; and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  2. The Definition and Implementation of a Computer Programming Language Based on Constraints.

    Science.gov (United States)

    1980-08-01

    though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say...and detecting and resolving conflicts, just as LISP provides certain services such as automatic storage management, which records given data in a...defined- it permits the statement of equalities and some simple arithmetic relationships. An implementation representation is chosen, and LISP code for a

  3. Research and implementation of PC data synchronous backup based on cloud computing

    Directory of Open Access Journals (Sweden)

    JIANG Lan

    2013-02-01

    A kind of anti-saturated digital PI regulator is designed and implemented based on DSP. This PI regulator was applied to the system design of voltage and current double-loop control in a BUCK converter, and related experimental research was carried out on a 5.5 kW prototype machine. Experimental results show that the converter has good static and dynamic performance, and the validity of the PI regulator design is verified.

  4. Using the nursing process to implement a Y2K computer application.

    Science.gov (United States)

    Hobbs, C F; Hardinge, T T

    2000-01-01

    Because of the coming year 2000, the need was assessed to upgrade the order entry system at many hospitals. At Somerset Medical Center, a training team divided the transition into phases and used a modified version of the nursing process to implement the new program. The entire process required fewer than 6 months and was relatively problem-free. This successful transition was aided by the nursing process, training team, and innovative educational techniques.

  5. Evolutionary optimization of neural networks with heterogeneous computation: study and implementation

    OpenAIRE

    FE, JORGE DEOLINDO; Aliaga Varea, Ramón José; Gadea Gironés, Rafael

    2015-01-01

    In the optimization of artificial neural networks (ANNs) via evolutionary algorithms and the implementation of the necessary training for the objective function, there is often a trade-off between efficiency and flexibility. Pure software solutions on general-purpose processors tend to be slow because they do not take advantage of the inherent parallelism, whereas hardware realizations usually rely on optimizations that reduce the range of applicable network topologies, or they...

  6. Improving the accessibility at home: implementation of a domotic application using a p300-based brain computer interface system

    Directory of Open Access Journals (Sweden)

    Rebeca Corralejo Palacios

    2012-05-01

    The aim of this study was to develop a Brain Computer Interface (BCI) application to control domotic devices usually present at home. Previous studies have shown that people with severe disabilities, both physical and cognitive, do not achieve high accuracy using motor imagery-based BCIs. To overcome this limitation, we propose the implementation of a BCI application using P300 evoked potentials, because neither extensive training nor an extremely high concentration level is required for this kind of BCI. The implemented BCI application allows the control of several devices such as a TV, DVD player, mini Hi-Fi system, multimedia hard drive, telephone, heater, fan and lights. Our aim is that potential users, i.e. people with severe disabilities, are able to achieve high accuracy. Therefore, this domotic BCI application is useful to increase

  7. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hendrickson, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  8. Implementation of a pressurized water reactor simulator for teaching on a mini-computer

    International Nuclear Information System (INIS)

    Tallec, Michele.

    1982-06-01

    This paper presents the design of a pressurized water reactor power plant simulator using a mini-computer. This simulator is oriented towards teaching. It operates real-time simulations, and many parameters can be changed by the student during execution of the digital code. First, a state-variable model of the dynamic behavior of the plant is derived from the physical laws. The second part presents the problems associated with the use of a mini-computer for the resolution of a large differential system, notably the problems of memory-space availability, execution time, and numerical integration. Finally, it contains the description of the control deck layout used to interact with the digital code, and of the conditions that can be changed during an execution

  9. Implementation of computer learning cases into the curriculum of internal medicine

    Directory of Open Access Journals (Sweden)

    Fischer, Martin R.

    2005-01-01

    Computer-based interactive clinical cases were introduced in 1999 to improve problem-solving abilities in undergraduate education in internal medicine at the University of Munich; the content of the online cases was matched with the main lecture. Course credits were given for the successful processing of four cases; an additional eight cases were offered to the students for voluntary use. Only the required cases were used substantially (by between 89% and 95% of all students), whereas a minority of students (between 5% and 11%) used the cases voluntarily. In spite of this predominantly extrinsic motivation, most students expressed a high level of intrinsic motivation and rated their self-reported learning success as high. The difficulty of the cases was rated as appropriate, which was supported by quantitative data on the correctness of students' answers. In summary, the integration of computer-based cases into a face-to-face learning curriculum should be coupled with the course assessment framework.

  10. On a concept of computer game implementation based on a temporal logic

    Science.gov (United States)

    Szymańska, Emilia; Adamek, Marek J.; Mulawka, Jan J.

    2017-08-01

    Time is a concept which underlies all of contemporary civilization. It was therefore necessary to create mathematical tools that allow complex time dependencies to be described in a precise way. One such tool is temporal logic. Its definition, description, and characteristics will be presented in this publication. The authors will then discuss the usefulness of this tool in the context of creating storylines in computer games such as those of the RPG genre.

  11. Implementation of Private Cloud Computing Using Integration of JavaScript and Python

    Directory of Open Access Journals (Sweden)

    2010-09-01

    This paper deals with the design and deployment of a novel library class in Python, enabling the use of JavaScript functionalities in application programming and the leveraging of this library in development for third-generation technologies such as private cloud computing. The integration of these two prevalent languages provides a new level of compliance which helps in developing an understanding between web programming and application programming. An inter-browser functionality wrapping has been developed, which enables users to have a JavaScript experience in Python interfaces directly, without having to depend on external programs. The functionality of this concept lies in the fact that applications written in JavaScript and accessed in the browser now have the capability of interacting with each other on a common platform with the help of a Python wrapper. The idea is demonstrated by integrating with the now ubiquitous cloud computing concept. With the help of examples, we have showcased the same and explained how the XOCOM library can be a stepping stone to a flexible cloud computing environment.

  12. Implementation and display of Computer Aided Design (CAD) models in Monte Carlo radiation transport and shielding applications

    International Nuclear Information System (INIS)

    Burns, T.J.

    1994-01-01

    An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed

  13. An Implementation of Parallel and Networked Computing Schemes for the Real-Time Image Reconstruction Based on Electrical Tomography

    International Nuclear Information System (INIS)

    Park, Sook Hee

    2001-02-01

    This thesis implements and analyzes parallel and networked computing libraries based on multiprocessor computer architectures as well as networked computers, aiming at improving the computation speed of an ET (Electrical Tomography) system, which requires enormous CPU time to reconstruct the unknown internal state of the target object. As an instance of typical tomography technology, ET partitions the cross-section of the target object into tiny elements and calculates their resistivity from signal values measured at the boundary electrodes surrounding the surface of the object after injecting a predetermined current pattern through the object. The number of elements is determined considering the trade-off between the accuracy of the reconstructed image and the computation time. As the elements become finer, the number of elements increases and the system can obtain a better image. However, the reconstruction time increases polynomially with the number of partitioned elements, since the procedure consists of a number of time-consuming matrix operations such as multiplication, inverse, pseudo-inverse, Jacobian and so on. Consequently, the demand for improving computation speed via multiple processors grows. Moreover, currently available PCs can be equipped with up to 4 CPUs connected to shared memory, while some operating systems enable an application process to benefit from such a computer by allocating threaded jobs to each CPU, resulting in concurrent processing. In addition, a networked computing or cluster computing environment is commonly available to almost every computer that supports a communication protocol and is connected to a local or global network. After partitioning the given job (numerical operation), each CPU or computer calculates its partial result independently, and the results are merged via common memory to produce the final result. It is desirable to adopt the commonly used library such as Matlab to
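
    The record above describes splitting heavy matrix work across CPUs or networked computers and merging the partial results. The sketch below is only a generic illustration of that partition-and-merge idea, using Python's multiprocessing module to split the rows of a matrix-vector product (a stand-in for the Jacobian-type operations mentioned); it is not the thesis code, and the matrix sizes are arbitrary.

```python
# Illustrative sketch only: splitting a large matrix-vector product
# (a stand-in for the Jacobian-type operations in ET reconstruction)
# across worker processes, then merging the partial results.
import numpy as np
from multiprocessing import Pool

def partial_matvec(args):
    block, x = args              # one row-block of A and the full vector x
    return block @ x             # partial result for those rows

def parallel_matvec(A, x, n_workers=4):
    blocks = np.array_split(A, n_workers, axis=0)
    with Pool(n_workers) as pool:
        parts = pool.map(partial_matvec, [(b, x) for b in blocks])
    return np.concatenate(parts)

if __name__ == "__main__":
    A = np.random.rand(2000, 500)   # hypothetical sensitivity matrix
    x = np.random.rand(500)
    y = parallel_matvec(A, x)
    assert np.allclose(y, A @ x)    # matches the serial result
```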

  14. Research and implementation of PC data synchronous backup based on cloud computing

    OpenAIRE

    WU Yu; CHEN Junhua

    2013-01-01

    In order to better ensure data security, data integrity, and facilitate remote management, this paper has designed and implemented a system model for PC data synchronous backup from the view of the local database and personal data. It focuses on the data backup and uses SQL Azure (a cloud database management system) and Visual Studio (a development platform tool). Also the system is released and deployed on the Windows Azure Platform with a unique web portal. Experimental tests show that compar...

  15. A Methodology for Decision Support for Implementation of Cloud Computing IT Services

    Directory of Open Access Journals (Sweden)

    Adela Tušanová

    2014-07-01

    Full Text Available The paper deals with the decision of small and medium-sized software companies on the transition to the SaaS model. The goal of the research is to design a comprehensive methodology to support decision making based on actual data of the company itself. Based on a careful analysis, a taxonomy of costs, revenue streams and decision-making criteria is proposed in the paper. On the basis of multi-criteria decision-making methods, each alternative is evaluated and the alternative with the highest score is identified as the most appropriate. The proposed methodology is implemented as a web application and verified through case studies.
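
    The weighted scoring step described above can be illustrated with a minimal sketch; the criteria, weights and alternatives below are hypothetical placeholders, not the taxonomy proposed in the paper.

```python
# Hypothetical sketch of the weighted-sum step in multi-criteria decision
# making: score each alternative and pick the highest. Criteria names and
# weights are illustrative, not taken from the paper.
weights = {"migration_cost": -0.3, "recurring_revenue": 0.4,
           "time_to_market": -0.1, "scalability": 0.2}

alternatives = {
    "stay_on_premise": {"migration_cost": 0.0, "recurring_revenue": 0.2,
                        "time_to_market": 0.5, "scalability": 0.3},
    "move_to_saas":    {"migration_cost": 0.6, "recurring_revenue": 0.9,
                        "time_to_market": 0.2, "scalability": 0.9},
}

def score(criteria):
    # Weighted sum of normalized criterion values (negative weights = costs).
    return sum(weights[c] * v for c, v in criteria.items())

best = max(alternatives, key=lambda a: score(alternatives[a]))
print(best, round(score(alternatives[best]), 3))
```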

  16. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    Science.gov (United States)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  17. Computer simulation and implementation of defected ground structure on a microstrip antenna

    Science.gov (United States)

    Adrian, H.; Rambe, A. H.; Suherman

    2018-03-01

    Defected Ground Structure (DGS) is a method of etching away part of the antenna ground plane in order to shape the ground field as desired. This paper reports the impact of the method on microstrip antennas working at 1800 and 2400 MHz. These frequencies are important because many radio network applications, such as mobile phones and wireless devices, work on these channels. The assessment was performed by simulating and fabricating the evaluated antennas. Both the simulation data and the measurements on the fabricated antennas show that DGS successfully improves antenna performance, increasing bandwidth by up to 19%, reducing return loss by up to 109% and increasing gain by up to 33%.

  18. Smart learning objects for smart education in computer science: theory, methodology and robot-based implementation

    CERN Document Server

    Stuikys, Vytautas

    2015-01-01

    This monograph presents the challenges, vision and context to design smart learning objects (SLOs) through Computer Science (CS) education modelling and feature model transformations. It presents the latest research on the meta-programming-based generative learning objects (the latter with advanced features are treated as SLOs) and the use of educational robots in teaching CS topics. The introduced methodology includes the overall processes to develop SLO and smart educational environment (SEE) and integrates both into the real education setting to provide teaching in CS using constructivist a

  19. LUDEP 1.0, a personal computer program to implement the new ICRP respiratory tract model

    Energy Technology Data Exchange (ETDEWEB)

    Jarvis, N.S.; Birchall, A. (National Radiological Protection Board, Chilton (United Kingdom))

    1994-01-01

    The International Commission on Radiological Protection has recently approved a new model of the human respiratory tract. This model has been designed to represent realistically the deposition and biokinetic behaviour of inhaled radionuclides, and to calculate doses to the respiratory tract. In order to examine the practical application and radiological implications of the new model, a Personal Computer program has been developed. LUDEP 1.0 is a user-friendly program for the IBM-compatible PC which enables the user to calculate doses to the respiratory tract and to other organs. (author).

  20. Design and Implementation of 3 Axis CNC Router for Computer Aided Manufacturing Courses

    Directory of Open Access Journals (Sweden)

    Aktan Mehmet Emin

    2016-01-01

    Full Text Available In this paper, the aim is to present the mechanical design of a 3-axis Computer Numerical Control (CNC) router with linear joints, the production of the electronic control interface cards and drivers, and the manufacturing of the CNC router system, which is a combination of mechanics and electronics. At the same time, an interface program has been prepared to control the router via USB. The router was developed for educational purposes. In some vocational schools and universities, Computer Aided Manufacturing (CAM) courses are taught in a rather theoretical way. This situation causes ineffective and short-lived learning. Moreover, students at schools that do have the opportunity to use such systems can face various dangerous accidents, because they are encountering the machines for the first time. For the first steps of CNC education, using smaller and less dangerous systems is easier. A new-concept CNC machine and a user interface suitable and useful for education have been completely designed and realized during this study. To test the hypothesis that the machine benefits educational practice, a traditional teaching method enhanced with the designed machine was applied to CAM course students for a semester. At the end of the semester, the students taught with the new method were more successful, by 27.36 percent, in terms of both verbal comprehension and exam grades.

  1. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    International Nuclear Information System (INIS)

    Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.

    2011-01-01

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
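
    For readers unfamiliar with the short-range kernels being accelerated, the following is a plain NumPy reference sketch of a cut-off Lennard-Jones pairwise force evaluation. It is deliberately simple (O(N^2), no neighbour lists or periodic boundaries) and is not the LAMMPS or Geryon implementation.

```python
# Illustrative CPU reference for a cut-off Lennard-Jones short-range force
# kernel -- the kind of computation the paper offloads to GPUs. Not the
# LAMMPS/Geryon code: O(N^2), no neighbour lists, no periodic boundaries.
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0, rcut=2.5):
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]                 # displacements to atoms j > i
        r2 = np.einsum("ij,ij->i", d, d)         # squared distances
        mask = r2 < rcut * rcut
        inv_r2 = sigma * sigma / r2[mask]
        inv_r6 = inv_r2 ** 3
        # Pairwise force magnitude over r; applied with Newton's third law.
        coef = 24.0 * epsilon * inv_r6 * (2.0 * inv_r6 - 1.0) / r2[mask]
        fij = coef[:, None] * d[mask]
        forces[i] -= fij.sum(axis=0)
        forces[i + 1:][mask] += fij
    return forces

pos = np.random.rand(64, 3) * 5.0                # toy configuration
f = lj_forces(pos)
print(f.shape, np.abs(f.sum(axis=0)).max())      # net force ~ 0 by symmetry
```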

  2. Research and implementation of PC data synchronous backup based on cloud computing

    Directory of Open Access Journals (Sweden)

    WU Yu

    2013-08-01

    Full Text Available In order to better ensure data security, data integrity, and facilitate remote management, this paper has designed and implemented a system model for PC data synchronous backup from the view of the local database and personal data. It focuses on the data backup and uses SQL Azure (a cloud database management system) and Visual Studio (a development platform tool). Also the system is released and deployed on the Windows Azure Platform with a unique web portal. Experimental tests show that compared to other data backup methods in a non-cloud environment, the system has certain advantages and research value on mobility, interoperability and data management.

  3. Implementation of advanced finite element technology in structural analysis computer codes

    International Nuclear Information System (INIS)

    Kohli, T.D.; Wiley, J.W.; Koss, P.W.

    1975-01-01

    Advances in finite element technology over the last several years have been rapid and have largely outstripped the ability of general purpose programs in the public domain to assimilate them. As a result, it has become the burden of the structural analyst to incorporate these advances himself. This paper discusses the implementation and extension of specific technological advances in Bechtel structural analysis programs. In general these advances belong in two categories: (1) the finite elements themselves and (2) equation solution algorithms. Improvements in the finite elements involve increased accuracy of the elements and extension of their applicability to various specialized modelling situations. Improvements in solution algorithms have been almost exclusively aimed at expanding problem solving capacity. (Auth.)

  4. Implementing a strand of a scalable fault-tolerant quantum computing fabric.

    Science.gov (United States)

    Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M

    2014-06-24

    With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.

  5. Computational procedures for probing interactions in OLS and logistic regression: SPSS and SAS implementations.

    Science.gov (United States)

    Hayes, Andrew F; Matthes, Jörg

    2009-08-01

    Researchers often hypothesize moderated effects, in which the effect of an independent variable on an outcome variable depends on the value of a moderator variable. Such an effect reveals itself statistically as an interaction between the independent and moderator variables in a model of the outcome variable. When an interaction is found, it is important to probe the interaction, for theories and hypotheses often predict not just interaction but a specific pattern of effects of the focal independent variable as a function of the moderator. This article describes the familiar pick-a-point approach and the much less familiar Johnson-Neyman technique for probing interactions in linear models and introduces macros for SPSS and SAS to simplify the computations and facilitate the probing of interactions in ordinary least squares and logistic regression. A script version of the SPSS macro is also available for users who prefer a point-and-click user interface rather than command syntax.
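
    A minimal sketch of the pick-a-point idea on synthetic data is shown below; it fits y ~ x*m by ordinary least squares and reports the conditional ("simple") slope of x with its standard error at three moderator values. It is implemented with plain NumPy rather than the authors' SPSS/SAS macros, and all variable names and data are invented for the example.

```python
# Minimal sketch of the pick-a-point idea on synthetic data (not the
# authors' SPSS/SAS macros): fit y = b0 + b1*x + b2*m + b3*x*m by OLS and
# report the simple slope of x, b1 + b3*m0, with its standard error at
# chosen moderator values m0.
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
m = rng.normal(size=n)
y = 0.5 * x + 0.3 * m + 0.4 * x * m + rng.normal(size=n)

X = np.column_stack([np.ones(n), x, m, x * m])          # design matrix
beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = rss[0] / (n - X.shape[1])                      # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)                   # covariance of beta

for m0 in (m.mean() - m.std(), m.mean(), m.mean() + m.std()):
    slope = beta[1] + beta[3] * m0
    se = np.sqrt(cov[1, 1] + m0**2 * cov[3, 3] + 2 * m0 * cov[1, 3])
    print(f"m = {m0:+.2f}: simple slope = {slope:.3f} (SE {se:.3f})")
```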

  6. INTERACTIONS: DESIGN, IMPLEMENTATION AND EVALUATION OF A COMPUTATIONAL TOOL FOR TEACHING INTERMOLECULAR FORCES IN HIGHER EDUCATION

    Directory of Open Access Journals (Sweden)

    Francisco Geraldo Barbosa

    2015-12-01

    Full Text Available Intermolecular forces are a useful concept that can explain the attraction between particles of matter as well as numerous phenomena in our lives such as viscosity, solubility, drug interactions, and the dyeing of fibers. However, studies show that students have difficulty understanding this important concept, which has led us to develop free educational software in English and Portuguese. The software can be used interactively by teachers and students, thus facilitating better understanding. Professors and students, both graduate and undergraduate, were questioned about the software's quality, intuitiveness of use, ease of navigation, and pedagogical application using a Likert scale. The results led to the conclusion that the developed computer application can be characterized as an auxiliary tool to assist teachers in their lectures and students in their learning process of intermolecular forces.

  7. A Soft Computing Approach to Crack Detection and Impact Source Identification with Field-Programmable Gate Array Implementation

    Directory of Open Access Journals (Sweden)

    Arati M. Dixit

    2013-01-01

    Full Text Available The real-time nondestructive testing (NDT) for crack detection and impact source identification (CDISI) has attracted researchers from diverse areas. This is apparent from the current work in the literature. CDISI has usually been performed by visual assessment of waveforms generated by a standard data acquisition system. In this paper we suggest an automation of CDISI for metal armor plates using a soft computing approach, developing a fuzzy inference system to deal effectively with this problem. It is also advantageous to develop a chip that can contribute towards real-time CDISI. The objective of this paper is to report on efforts to develop an automated CDISI procedure and to formulate a technique such that the proposed method can be easily implemented on a chip. The CDISI fuzzy inference system is developed using MATLAB's fuzzy logic toolbox. A VLSI circuit for CDISI is developed on the basis of the fuzzy logic model using Verilog, a hardware description language (HDL). The Xilinx ISE WebPACK9.1i is used for design, synthesis, implementation, and verification. The CDISI field-programmable gate array (FPGA) implementation is done using Xilinx's Spartan 3 FPGA. SynaptiCAD's Verilog Simulators—VeriLogger PRO and ModelSim—are used as the software simulation and debug environment.
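
    To make the fuzzy-inference idea concrete, here is a toy Mamdani-style sketch with triangular memberships and two hand-written rules. The feature names, membership parameters and rules are hypothetical and are not taken from the paper's MATLAB or FPGA design.

```python
# Toy Mamdani-style fuzzy inference sketch (hypothetical memberships and
# rules, not the paper's design): derive "crack" support from two
# illustrative waveform features.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(amplitude, duration):
    # Fuzzify the (hypothetical) features.
    amp_high = tri(amplitude, 0.4, 1.0, 1.6)
    amp_low  = tri(amplitude, -0.6, 0.0, 0.6)
    dur_long = tri(duration, 0.4, 1.0, 1.6)

    # Rules: high amplitude AND long duration -> crack; low amplitude -> no crack.
    crack    = min(amp_high, dur_long)
    no_crack = amp_low

    # Crisp output as a weighted average of the rule consequents (1 and 0).
    total = crack + no_crack
    return 0.5 if total == 0 else crack / total

print(infer(amplitude=0.9, duration=0.8))   # close to 1 -> likely crack
print(infer(amplitude=0.1, duration=0.2))   # close to 0 -> likely no crack
```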

  8. The Identification, Implementation, and Evaluation of Critical User Interface Design Features of Computer-Assisted Instruction Programs in Mathematics for Students with Learning Disabilities

    Science.gov (United States)

    Seo, You-Jin; Woo, Honguk

    2010-01-01

    Critical user interface design features of computer-assisted instruction programs in mathematics for students with learning disabilities and corresponding implementation guidelines were identified in this study. Based on the identified features and guidelines, a multimedia computer-assisted instruction program, "Math Explorer", which delivers…

  9. Case-oriented computer-based-training in radiology: concept, implementation and evaluation

    Science.gov (United States)

    Dugas, Martin; Trumm, Christoph; Stäbler, Axel; Pander, Ernst; Hundt, Walter; Scheidler, Jurgen; Brüning, Roland; Helmberger, Thomas; Waggershauser, Tobias; Matzko, Matthias; Reiser, Maximillian

    2001-01-01

    Background Providing high-quality clinical cases is important for teaching radiology. We developed, implemented and evaluated a program for a university hospital to support this task. Methods The system was built with Intranet technology and connected to the Picture Archiving and Communications System (PACS). It contains cases for every user group from students to attendants and is structured according to the ACR-code (American College of Radiology) [2]. Each department member was given an individual account, could gather his teaching cases and put the completed cases into the common database. Results During 18 months 583 cases containing 4136 images involving all radiological techniques were compiled and 350 cases put into the common case repository. Workflow integration as well as individual interest influenced the personal efforts to participate but an increasing number of cases and minor modifications of the program improved user acceptance continuously. 101 students went through an evaluation which showed a high level of acceptance and a special interest in elaborate documentation. Conclusion Electronic access to reference cases for all department members anytime anywhere is feasible. Critical success factors are workflow integration, reliability, efficient retrieval strategies and incentives for case authoring. PMID:11686856

  10. Case-oriented computer-based-training in radiology: concept, implementation and evaluation

    Directory of Open Access Journals (Sweden)

    Helmberger Thomas

    2001-10-01

    Full Text Available Abstract Background Providing high-quality clinical cases is important for teaching radiology. We developed, implemented and evaluated a program for a university hospital to support this task. Methods The system was built with Intranet technology and connected to the Picture Archiving and Communications System (PACS). It contains cases for every user group from students to attendants and is structured according to the ACR-code (American College of Radiology) [2]. Each department member was given an individual account, could gather his teaching cases and put the completed cases into the common database. Results During 18 months 583 cases containing 4136 images involving all radiological techniques were compiled and 350 cases put into the common case repository. Workflow integration as well as individual interest influenced the personal efforts to participate but an increasing number of cases and minor modifications of the program improved user acceptance continuously. 101 students went through an evaluation which showed a high level of acceptance and a special interest in elaborate documentation. Conclusion Electronic access to reference cases for all department members anytime anywhere is feasible. Critical success factors are workflow integration, reliability, efficient retrieval strategies and incentives for case authoring.

  11. Implementation of combined SVM-algorithm and computer-aided perception feedback for pulmonary nodule detection

    Science.gov (United States)

    Pietrzyk, Mariusz W.; Rannou, Didier; Brennan, Patrick C.

    2012-02-01

    This pilot study examines the effect of a novel decision support system on medical image interpretation. The system is based on combining image spatial-frequency properties and eye-tracking data in order to recognize over-calling and under-calling errors. Thus, before it can be implemented as a detection-aid scheme, training is required during which an SVM-based algorithm learns to recognize FP among all reported outcomes and FN among all unreported regions with prolonged dwell. Eight radiologists inspected 50 PA chest radiographs with the specific task of identifying lung nodules. Twenty-five cases contained CT-proven subtle malignant lesions (5-20 mm), but prevalence was not known by the subjects, who took part in two sequential reading sessions, without and then with feedback from the support system. MCMR ROC DBM and JAFROC analyses were conducted and demonstrated significantly higher scores following feedback, with p values of 0.04 and 0.03 respectively, highlighting significant improvements in radiology performance once feedback was used. This positive effect on radiologists' performance might have important implications for future CAD-system development.

  12. Design and implementation of a multi-functional x-ray computed tomography system

    Science.gov (United States)

    Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin; Zhang, Xiang; Deng, Lin; Chen, Siyu; Jin, Zhao; Li, Zengguang

    2015-10-01

    A powerful volume X-ray tomography system has been designed and constructed to provide a universal tool for the three-dimensional nondestructive testing and investigation of industrial components, automotive, electronics and aerospace components, new materials, etc. The combined system is equipped with two commercial X-ray sources sharing one flat-panel detector of 400 mm × 400 mm. The standard-focus 450 kV high-energy X-ray source is optimized for complex and high-density components such as castings, engine blocks and turbine blades, while the microfocus 225 kV X-ray source meets the demands of micro-resolution characterization applications. Thus the system's penetration capability allows it to scan large objects of up to 200 mm of dense material, and its resolution capability can meet the demands of 20 μm microstructure inspection. A high-precision 6-axis manipulator system is fitted, capable of an offset scanning mode for large field-of-view requirements. All the components are housed in a room shielded with barium sulphate cement. On the other hand, the presented system expands the scope of applications to areas such as dual-energy research and testing. In this paper, the design and implementation of this flexible system are described, as well as preliminary tomographic imaging results for an automobile engine block.

  13. Design and implementation of a computer based site operations log for the ARM Program

    International Nuclear Information System (INIS)

    Tichler, J.L.; Bernstein, H.J.; Bobrowski, S.F.; Melton, R.B.; Campbell, A.P.; Edwards, D.M.; Kanciruk, P.; Singley, P.T.

    1992-01-01

    The Atmospheric Radiation Measurement (ARM) Program is a Department of Energy (DOE) research effort to reduce the uncertainties found in general circulation and other models due to the effects of clouds and solar radiation. ARM will provide an experimental testbed for the study of important atmospheric effects, particularly cloud and radiative processes, and for testing parameterizations of these processes for use in atmospheric models. The design of the testbed, known as the Clouds and Radiation Testbed (CART), calls for five long-term field data collection sites. The first site, located in the Southern Great Plains (SGP) in Lamont, OK, began operation in the spring of 1992. The CART Data Environment (CDE) is the element of the testbed which acquires the basic observations from the instruments and processes them to meet the ARM requirements. A formal design was used to develop a description of the logical requirements for the CDE. This paper discusses the design and prototype implementation of a part of the CDE known as the site operations log, which records metadata defining the environment within which the data produced by the instruments are collected

  14. A comprehensive approach for computation and implementation of efficient electricity transmission network charges

    Energy Technology Data Exchange (ETDEWEB)

    Olmos, Luis; Perez-Arriaga, Ignacio J. [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, Alberto Aguilera, 23, 28015 Madrid (Spain)

    2009-12-15

    This paper presents a comprehensive design of electricity transmission charges that are meant to recover regulated network costs. In addition, these charges must be able to meet a set of inter-related objectives. Most importantly, they should encourage potential network users to internalize transmission costs in their location decisions, while interfering as little as possible with the short-term behaviour of the agents in the power system, since this should be left to regulatory instruments in the operation time range. The paper also addresses all those implementation issues that are essential for the sound design of a system of transmission network charges: stability and predictability of the charges; a fair and efficient split between generation and demand charges; temporary measures to account for the low loading of most new lines; the number and definition of the scenarios to be employed for the calculation; and the format of the final charges to be adopted: capacity, energy or per-customer charges. The application of the proposed method is illustrated with a realistic numerical example that is based on a single scenario of the 2006 winter peak in the Spanish power system. (author)

  15. Implementation of a computer-controlled monitoring system at the Princeton AVF Cyclotron

    International Nuclear Information System (INIS)

    Moore, W.H.

    1984-01-01

    Stability in the parameters of the beams from cyclotrons is often crucial to the experiments laboratories perform. For example, when running a high-resolution experiment with Princeton's QDDD Spectrograph, there are 42 magnetic elements between the ion source and the detector. Instability or drift in any of these elements can easily nullify the sophisticated dispersion matching and kinematic correction that make such experiments possible with machines. At the Princeton Cyclotron they have purchased a commercial computer-controlled measurement system and interfaced it to 20 elements of their beamline. While this project is still far from complete, the authors have satisfied two of the conditions that must be met for such a system to be useful. These are, firstly, that measurements can be made under the conditions of a working laboratory to 1 part in 100,000, and secondly that the results can be presented in a form useful both to the experimenter concerned with the quality of his data and to the technical staff who must maintain and develop the equipment

  16. Parallel multigrid methods: implementation on message-passing computers and applications to fluid dynamics. A draft

    International Nuclear Information System (INIS)

    Solchenbach, K.; Thole, C.A.; Trottenberg, U.

    1987-01-01

    For a wide class of problems in scientific computing, in particular for partial differential equations, the multigrid principle has proved to yield highly efficient numerical methods. However, the principle has to be applied carefully: if the multigrid components are not chosen adequately with respect to the given problem, the efficiency may be much smaller than possible. This has been demonstrated for many practical problems. Unfortunately, the general theories on multigrid convergence do not give much help in constructing really efficient multigrid algorithms. Although some progress has been made in bridging the gap between theory and practice during the last few years, there are still several theoretical approaches which are misleading rather than helpful with respect to the objective of real efficiency. The research into finding highly efficient algorithms for non-model applications therefore is still a sophisticated mixture of theoretical considerations, a transfer of experience from model to real-life problems and systematic experimental work. The emphasis of the practical research activity today lies - among others - in the following fields: finding efficient multigrid components for really complex problems; combining the multigrid approach with advanced discretization techniques; and constructing highly parallel multigrid algorithms. In this paper, we want to deal mainly with the last topic

  17. Implementation of internet training on posture reform of computer users in Iran.

    Science.gov (United States)

    Keykhaie, Zohreh; Zareban, Iraj; Shahrakipoor, Mahnaz; Hormozi, Maryam; Sharifi-Rad, Javad; Masoudi, Gholamreza; Rahimi, Fatemeh

    2014-12-01

    Musculoskeletal disorders are common problems among computer (PC) users. Training in posture reform plays a significant role in the prevention of the emergence, progression and complications of these disorders. The present research was performed to study the effect of Internet training on the posture reform of PC users working in two Iranian universities, Sistan and Baluchestan University and the Islamic Azad University of Zahedan, in 2014. The study was a quasi-experimental intervention with a control group, conducted on 160 PC users split into an intervention group (80 people) and a control group (80 people). A training PowerPoint was sent to the intervention group through the Internet and a post-test was given to them after 45 days. The statistical software SPSS 19 and the statistical tests of Kolmogorov, t-test, Fisher's exact test and correlation coefficient were used for data analysis. After the training, the mean scores of knowledge, attitude, performance and self-efficacy in the intervention group were 24.21 ± 1.34, 38.36 ± 2.89, 7.59 ± 1.16, and 45.06 ± 4.11, respectively (P Internet had a significant impact on the posture reform of the PC users. According to the findings, there was a significant relationship between the scores of self-efficacy and performance after training. Therefore, based on the findings of the study, it is suggested that Internet training with a self-efficacy-increasing approach in successive periods can be effective in reforming the postures of PC users.

  18. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    Directory of Open Access Journals (Sweden)

    Alexander I. Zhdanov

    2016-12-01

    Full Text Available This article focuses on a modification of the block variant of the Gauss–Seidel method for normal systems of equations, which is a sufficiently effective method for solving generally overdetermined systems of linear algebraic equations of high dimensionality. The main disadvantage of methods based on normal systems of equations is that the condition number of the normal system is equal to the square of the condition number of the original problem. This fact has a negative impact on the rate of convergence of iterative methods based on normal systems of equations. To increase the speed of convergence of such iterative methods when solving ill-conditioned problems, various preconditioners are currently used to reduce the condition number of the original system of equations. However, a universal preconditioner for all applications does not exist. One effective approach that improves the speed of convergence of the iterative Gauss–Seidel method for normal systems of equations is to use its block version. The disadvantage of the block Gauss–Seidel method for such systems is that a pseudoinverse matrix must be calculated at each iteration, and finding the pseudoinverse is a computationally demanding procedure. In this paper, we propose a procedure that replaces the computation of pseudo-solutions with the solution of normal systems of equations by the Cholesky method. The normal equations arising at each iteration of the Gauss–Seidel method have a relatively low dimension compared to the original system. The results of numerical experiments demonstrating the effectiveness of the proposed approach are given.
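
    The idea of replacing per-block pseudoinverses with Cholesky solves of small normal systems can be sketched as follows; this is an illustrative reading of the approach on random data, not the article's code, and the block size and sweep count are arbitrary.

```python
# Sketch of a block Gauss-Seidel sweep on the normal equations
# A^T A x = A^T b, with each block subproblem solved via Cholesky
# instead of a pseudoinverse. Illustrative only.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def block_gauss_seidel(A, b, block_size=50, sweeps=200):
    n = A.shape[1]
    x = np.zeros(n)
    blocks = [slice(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    # Pre-factor each diagonal block of the normal matrix A^T A.
    factors = [cho_factor(A[:, s].T @ A[:, s]) for s in blocks]
    Atb = A.T @ b
    for _ in range(sweeps):
        for s, f in zip(blocks, factors):
            # Residual of the normal equations restricted to this block.
            r = Atb[s] - A[:, s].T @ (A @ x)
            x[s] += cho_solve(f, r)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(400, 120))            # overdetermined system
b = rng.normal(size=400)
x = block_gauss_seidel(A, b)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.max(np.abs(x - x_ref)))           # small after enough sweeps
```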

  19. Time expenditure in computer aided time studies implemented for highly mechanized forest equipment

    Directory of Open Access Journals (Sweden)

    Elena Camelia Mușat

    2016-06-01

    Full Text Available Time studies are important tools used in forest operations research to produce empirical models or to comparatively assess the performance of two or more operational alternatives, with the general aim of predicting operational behaviour, choosing the most adequate equipment or eliminating useless time. There is a long tradition of collecting the needed data in a traditional fashion, but this approach has its limitations, and it is likely that in the future the use of professional software will be extended in such work, as tools of this kind have already been implemented. However, little to no information is available on the performance of data-analysis tasks when purpose-built professional time-study software is used in such research, while the resources needed to conduct time studies, including the time itself, may be quite substantial. Our study aimed to model the relations between the variation of the time needed to analyze video-recorded time-study data and the variation of some measured independent variables for a complex organization of a work cycle. The results of our study indicate that the number of work elements separated within a work cycle, the delay-free cycle time and the software functionalities used during data analysis significantly affected the time expenditure needed to analyze the data (α=0.01, p<0.01). Under the conditions of this study, where the average duration of a work cycle was about 48 seconds and the number of separated work elements was about 14, the speed used to replay the video files significantly affected the mean time expenditure, which averaged about 273 seconds at half of the real speed and about 192 seconds at an analyzing speed equal to the real speed. We argue that different study designs as well as the parameters used within the software are likely to produce

  20. Clinical Implementation of Intrafraction Cone Beam Computed Tomography Imaging During Lung Tumor Stereotactic Ablative Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ruijiang; Han, Bin; Meng, Bowen [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California (United States); Maxim, Peter G.; Xing, Lei; Koong, Albert C. [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California (United States); Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California (United States); Diehn, Maximilian, E-mail: Diehn@Stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California (United States); Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California (United States); Institute for Stem Cell Biology and Regenerative Medicine, Stanford University School of Medicine, Stanford, California (United States); Loo, Billy W., E-mail: BWLoo@Stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California (United States); Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California (United States)

    2013-12-01

    Purpose: To develop and clinically evaluate a volumetric imaging technique for assessing intrafraction geometric and dosimetric accuracy of stereotactic ablative radiation therapy (SABR). Methods and Materials: Twenty patients received SABR for lung tumors using volumetric modulated arc therapy (VMAT). At the beginning of each fraction, pretreatment cone beam computed tomography (CBCT) was used to align the soft-tissue tumor position with that in the planning CT. Concurrent with dose delivery, we acquired fluoroscopic radiograph projections during VMAT using the Varian on-board imaging system. Those kilovoltage (kV) projections acquired during megavoltage (MV) beam-on were automatically extracted, and intrafraction CBCT images were reconstructed using the filtered backprojection technique. We determined the time-averaged target shift during VMAT by calculating the center of mass of the tumor target in the intrafraction CBCT relative to the planning CT. To estimate the dosimetric impact of the target shift during treatment, we recalculated the dose to the GTV after shifting the entire patient anatomy according to the time-averaged target shift determined earlier. Results: The mean target shift from intrafraction CBCT to planning CT was 1.6, 1.0, and 1.5 mm; the 95th percentile shift was 5.2, 3.1, 3.6 mm; and the maximum shift was 5.7, 3.6, and 4.9 mm along the anterior-posterior, left-right, and superior-inferior directions. Thus, the time-averaged intrafraction gross tumor volume (GTV) position was always within the planning target volume. We observed some degree of target blurring in the intrafraction CBCT, indicating imperfect breath-hold reproducibility or residual motion of the GTV during treatment. By our estimated dose recalculation, the GTV was consistently covered by the prescription dose (PD), that is, V100% above 0.97 for all patients, and minimum dose to GTV >100% PD for 18 patients and >95% PD for all patients. Conclusions: Intrafraction CBCT during VMAT can provide
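
    The time-averaged target shift reported above is essentially a centre-of-mass difference between the tumour mask in the intrafraction CBCT and in the planning CT. The following NumPy sketch illustrates that computation on toy masks; the arrays, voxel spacing and shift are hypothetical placeholders, not clinical data.

```python
# Illustrative sketch of the target-shift computation: the centre of mass of
# a binary tumour mask in the intrafraction CBCT minus that in the planning
# CT, converted to millimetres. Masks and voxel spacing are placeholders.
import numpy as np

def centre_of_mass_mm(mask, spacing_mm):
    """Centre of mass of a 3-D binary mask in physical (mm) coordinates."""
    idx = np.argwhere(mask)                      # (N, 3) voxel indices
    return idx.mean(axis=0) * np.asarray(spacing_mm)

spacing = (2.0, 1.0, 1.0)                        # (z, y, x) voxel size in mm

planning_mask = np.zeros((60, 128, 128), dtype=bool)
planning_mask[28:34, 60:70, 60:70] = True        # toy GTV in the planning CT

intrafraction_mask = np.zeros_like(planning_mask)
intrafraction_mask[29:35, 61:71, 62:72] = True   # same GTV, slightly shifted

shift_mm = (centre_of_mass_mm(intrafraction_mask, spacing)
            - centre_of_mass_mm(planning_mask, spacing))
print("target shift (z, y, x) in mm:", np.round(shift_mm, 2))
```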

  1. SIFT - Design and analysis of a fault-tolerant computer for aircraft control. [Software Implemented Fault Tolerant systems

    Science.gov (United States)

    Wensley, J. H.; Lamport, L.; Goldberg, J.; Green, M. W.; Levitt, K. N.; Melliar-Smith, P. M.; Shostak, R. E.; Weinstock, C. B.

    1978-01-01

    SIFT (Software Implemented Fault Tolerance) is an ultrareliable computer for critical aircraft control applications that achieves fault tolerance by the replication of tasks among processing units. The main processing units are off-the-shelf minicomputers, with standard microcomputers serving as the interface to the I/O system. Fault isolation is achieved by using a specially designed redundant bus system to interconnect the processing units. Error detection and analysis and system reconfiguration are performed by software. Iterative tasks are redundantly executed, and the results of each iteration are voted upon before being used. Thus, any single failure in a processing unit or bus can be tolerated with triplication of tasks, and subsequent failures can be tolerated after reconfiguration. Independent execution by separate processors means that the processors need only be loosely synchronized, and a novel fault-tolerant synchronization method is described.
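
    The voting step described above can be illustrated with a minimal sketch: each iteration of a critical task is executed on several processors and the replicated results are voted on before use. The representation of results below is hypothetical, not the SIFT data format.

```python
# Minimal sketch of majority voting over replicated task results; a single
# faulty value is masked by the other two replicas.
from collections import Counter

def vote(replicated_results):
    """Return the majority value, or None if no strict majority exists."""
    counts = Counter(replicated_results)
    value, count = counts.most_common(1)[0]
    return value if count > len(replicated_results) // 2 else None

# A transient fault in one processor is masked by the other two.
print(vote([42, 42, 41]))   # -> 42
print(vote([42, 40, 41]))   # -> None (disagreement; flag for reconfiguration)
```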

  2. Development and implementation of a critical pathway for prevention of adverse reactions to contrast media for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Keun Jo [Presbyterian Medical Center, Seoul (Korea, Republic of); Kweon, Dae Cheol; Kim, Myeong Goo [Seoul National University Hospital, Seoul (Korea, Republic of); Yoo, Beong Gyu [Wonkwang Health Science College, Iksan (Korea, Republic of)

    2007-03-15

    The purpose of this study is to develop a critical pathway (CP) for the prevention of adverse reactions to contrast media for computed tomography. The CP was developed and implemented by a multidisciplinary group at Seoul National University Hospital and applied to CT patients. Patients who underwent CT scanning were included in the CP group from March 2004. The satisfaction of the patients in the CP group was compared with that of the non-CP group. We also investigated the degree of satisfaction among the radiological technologists and nurses. The degree of patient satisfaction with the care process increased in terms of patient information (24%), prevention of adverse reactions to contrast media (19%), prior awareness of adverse reactions to contrast media (39%) and degree of information about adverse reactions to contrast media (19%). This CP program can be used as one of the patient care tools for reducing adverse reactions to contrast media and increasing the efficiency of the care process in CT examination settings.

  3. Development and implementation of a critical pathway for prevention of adverse reactions to contrast media for computed tomography

    International Nuclear Information System (INIS)

    Jang, Keun Jo; Kweon, Dae Cheol; Kim, Myeong Goo; Yoo, Beong Gyu

    2007-01-01

    The purpose of this study is to develop a critical pathway (CP) for the prevention of adverse reactions to contrast media for computed tomography. The CP was developed and implemented by a multidisciplinary group at Seoul National University Hospital and applied to CT patients. Patients who underwent CT scanning were included in the CP group from March 2004. The satisfaction of the patients in the CP group was compared with that of the non-CP group. We also investigated the degree of satisfaction among the radiological technologists and nurses. The degree of patient satisfaction with the care process increased in terms of patient information (24%), prevention of adverse reactions to contrast media (19%), prior awareness of adverse reactions to contrast media (39%) and degree of information about adverse reactions to contrast media (19%). This CP program can be used as one of the patient care tools for reducing adverse reactions to contrast media and increasing the efficiency of the care process in CT examination settings

  4. What is needed to implement a computer-assisted health risk assessment tool? An exploratory concept mapping study

    Directory of Open Access Journals (Sweden)

    Ahmad Farah

    2012-12-01

    Full Text Available Abstract Background Emerging eHealth tools could facilitate the delivery of comprehensive care in time-constrained clinical settings. One such tool is interactive computer-assisted health-risk assessments (HRA), which may improve provider-patient communication at the point of care, particularly for psychosocial health concerns, which remain under-detected in clinical encounters. The research team explored the perspectives of healthcare providers representing a variety of disciplines (physicians, nurses, social workers, allied staff) regarding the factors required for implementation of an interactive HRA on psychosocial health. Methods The research team employed a semi-qualitative participatory method known as Concept Mapping, which involved three distinct phases. First, in face-to-face and online brainstorming sessions, participants responded to an open-ended central question: “What factors should be in place within your clinical setting to support an effective computer-assisted screening tool for psychosocial risks?” The brainstormed items were consolidated by the research team. Then, in face-to-face and online sorting sessions, participants grouped the items thematically as ‘it made sense to them’. Participants also rated each item on a 5-point scale for its ‘importance’ and ‘action feasibility’ over the ensuing six month period. The sorted and rated data was analyzed using multidimensional scaling and hierarchical cluster analyses which produced visual maps. In the third and final phase, the face-to-face Interpretation sessions, the concept maps were discussed and illuminated by participants collectively. Results Overall, 54 providers participated (emergency care 48%; primary care 52%). Participants brainstormed 196 items thought to be necessary for the implementation of an interactive HRA emphasizing psychosocial health. These were consolidated by the research team into 85 items. After sorting and rating, cluster analysis

  5. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 2: Computational implementation and first results

    Science.gov (United States)

    Peruzza, Laura; Azzaro, Raffaele; Gee, Robin; D'Amico, Salvatore; Langer, Horst; Lombardo, Giuseppe; Pace, Bruno; Pagani, Marco; Panzera, Francesco; Ordaz, Mario; Suarez, Miguel Leonardo; Tusa, Giuseppina

    2017-11-01

    This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude-scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10 % probability of exceedance in 5 and 30 years, Poisson and time dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for the densely inhabited Etna's eastern flank, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources which control the hazard at long return periods. However, by focusing on the impact of M risk reduction.
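
    The hazard levels quoted above ("10 % probability of exceedance in 5 and 30 years") rest on the standard Poisson relation between exceedance probability, exposure time and annual rate. The short sketch below only illustrates that conversion; it is not part of the CRISIS or OpenQuake calculations, and the numbers are for illustration.

```python
# Sketch of the Poisson relation behind "10 % probability of exceedance in
# T years": P = 1 - exp(-rate * T). Given a target probability and exposure
# time, the annual exceedance rate and return period follow.
import math

def rate_from_poe(poe, t_years):
    """Annual exceedance rate implied by a Poisson probability of exceedance."""
    return -math.log(1.0 - poe) / t_years

for t in (5, 30):
    lam = rate_from_poe(0.10, t)
    print(f"10% in {t} yr -> rate = {lam:.4f}/yr, return period = {1/lam:.0f} yr")
```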

  6. Computing the Stackelberg/Nash equilibria using the extraproximal method: Convergence analysis and implementation details for Markov chains games

    Directory of Open Access Journals (Sweden)

    Trejo Kristal K.

    2015-06-01

    Full Text Available In this paper we present the extraproximal method for computing the Stackelberg/Nash equilibria in a class of ergodic controlled finite Markov chain games. We exemplify the original game formulation in terms of coupled nonlinear programming problems implementing the Lagrange principle. In addition, Tikhonov's regularization method is employed to ensure the convergence of the cost-functions to a Stackelberg/Nash equilibrium point. Then, we transform the problem into a system of equations in the proximal format. We present a two-step iterated procedure for solving the extraproximal method: (a) the first step (the extra-proximal step) consists of a “prediction” which calculates the preliminary position approximation to the equilibrium point, and (b) the second step is designed to find a “basic adjustment” of the previous prediction. The procedure is called the “extraproximal method” because of the use of an extrapolation. Each equation in this system is an optimization problem for which the necessary and sufficient condition for a minimum is solved using a quadratic programming method. This solution approach provides a drastically quicker rate of convergence to the equilibrium point. We present the analysis of the convergence as well as the rate of convergence of the method, which is one of the main results of this paper. Additionally, the extraproximal method is developed in terms of Markov chains for Stackelberg games. Our goal is to analyze completely a three-player Stackelberg game consisting of a leader and two followers. We provide all the details needed to implement the extraproximal method in an efficient and numerically stable way. For instance, a numerical technique is presented for computing the first step parameter (λ) of the extraproximal method. The usefulness of the approach is successfully demonstrated by a numerical example related to a pricing oligopoly model for airline companies.
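
    To illustrate the structure of the two-step "prediction then basic adjustment" iteration, the sketch below applies an extragradient-style scheme to a toy bilinear saddle-point problem (minimize over x, maximize over y of x*y). The step size and iteration count are arbitrary, and this is not the paper's Markov-chain game formulation.

```python
# Toy illustration of the two-step "prediction then basic adjustment"
# iteration (an extragradient-style scheme) on the bilinear saddle-point
# problem min_x max_y x*y. It only shows the structure of the extra step.
def extragradient(steps=500, lr=0.2):
    x, y = 1.0, 1.0
    for _ in range(steps):
        # Prediction (extra-proximal) step from the current point.
        gx, gy = y, x                          # gradients of x*y in x and y
        x_hat, y_hat = x - lr * gx, y + lr * gy
        # Basic adjustment: step again using gradients at the predicted point.
        x, y = x - lr * y_hat, y + lr * x_hat
    return x, y

print(extragradient())   # approaches the equilibrium (0, 0)
```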

  7. Computational methods and implementation of the 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction

    International Nuclear Information System (INIS)

    Aragones, J.M.; Ahnert, C.

    1995-01-01

    New computational methods have been developed in our 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction. They improve the accuracy and efficiency of the coupled neutronic-thermalhydraulic solution and extend its scope to provide, mainly, the calculation of: the fission reaction rates at the incore mini-detectors; the responses at the excore detectors (power range); the temperatures at the thermocouple locations; and the in-vessel distribution of the loop cold-leg inlet coolant conditions in the reflector and core channels, and to the hot-leg outlets per loop. The functional capabilities implemented in the extended SIMTRAN code for online utilization include: online surveillance, incore-excore calibration, evaluation of peak power factors and thermal margins, nominal update and cycle follow, prediction of maneuvers and diagnosis of fast transients and oscillations. The new code has been installed at the Vandellos-II PWR unit in Spain, since the startup of its cycle 7 in mid-June, 1994. The computational implementation has been performed on HP-700 workstations under the HP-UX Unix system, including the machine-man interfaces for online acquisition of measured data and interactive graphical utilization, in C and X11. The agreement of the simulated results with the measured data, during the startup tests and first months of actual operation, is well within the accuracy requirements. The performance and usefulness shown during the testing and demo phase, to be extended along this cycle, has proved that SIMTRAN and the man-machine graphic user interface have the qualities for a fast, accurate, user friendly, reliable, detailed and comprehensive online core surveillance and prediction

  8. Development and implementation of a low-cost phantom for quality control in cone beam computed tomography

    International Nuclear Information System (INIS)

    Batista, W. O.; Navarro, M. V. T.; Maia, A. F.

    2013-01-01

    A phantom for quality control in cone beam computed tomography (CBCT) scanners was designed and constructed, and a methodology for testing was developed. The phantom had a polymethyl methacrylate structure filled with water and plastic objects that allowed the assessment of parameters related to quality control. The phantom allowed the evaluation of essential parameters in CBCT as well as the evaluation of linear and angular dimensions. The plastics used in the phantom were chosen so that their density and linear attenuation coefficient were similar to those of human facial structures. Three types of CBCT equipment, with two different technological concepts, were evaluated. The results of the assessment of the accuracy of linear and angular dimensions agreed with the existing standards. However, other parameters such as computed tomography number accuracy, uniformity and high-contrast detail did not meet the tolerances established in current regulations or the manufacturer's specifications. The results demonstrate the importance of establishing specific protocols and phantoms, which meet the specificities of CBCT. The practicality of implementation, the quality control test results for the proposed phantom and the consistency of the results using different equipment demonstrate its adequacy. (authors)
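
    As an illustration of the kind of quality-control check such a phantom supports, the sketch below compares the mean value of a central region of interest with peripheral ones on a simulated water-equivalent slice. The image, ROI positions and tolerance are placeholders, not values from the paper or from any standard.

```python
# Hypothetical sketch of a uniformity / CT-number check: compare the mean
# value in a central region of interest (ROI) with peripheral ROIs on a
# simulated water-equivalent slice. Tolerance is illustrative only.
import numpy as np

def roi_mean(img, cy, cx, radius):
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    return img[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2].mean()

slice_hu = np.random.normal(0.0, 5.0, size=(256, 256))   # simulated water slice
centre = roi_mean(slice_hu, 128, 128, 15)
periphery = [roi_mean(slice_hu, cy, cx, 15)
             for cy, cx in [(40, 128), (216, 128), (128, 40), (128, 216)]]

uniformity = max(abs(p - centre) for p in periphery)
print(f"centre ROI = {centre:.1f} HU, max deviation = {uniformity:.1f} HU")
print("PASS" if uniformity <= 5.0 else "FAIL")   # illustrative tolerance
```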

  9. The Role of Energy Reservoirs in Distributed Computing: Manufacturing, Implementing, and Optimizing Energy Storage in Energy-Autonomous Sensor Nodes

    Science.gov (United States)

    Cowell, Martin Andrew

    spatially and temporally varying energy availability in order to understand sensor node reliability. Looking to the future, we see an opportunity for further research to implement machine learning algorithms to control the energy resources of distributed computing networks.

  10. Implementation of a Quadrature Mirror Filter Bank on an SRC Reconfigurable Computer for Real-Time Signal Processing

    National Research Council Canada - National Science Library

    Stoffell, Kevin M

    2006-01-01

    .... Performance and device utilization results between the Quadrature Mirror Filter Bank implemented in VHDL, design elements implemented in the C programming language, and calculations made using high...

  11. Implementation of a computer-aided detection tool for quantification of intracranial radiologic markers on brain CT images

    Science.gov (United States)

    Aghaei, Faranak; Ross, Stephen R.; Wang, Yunzhi; Wu, Dee H.; Cornwell, Benjamin O.; Ray, Bappaditya; Zheng, Bin

    2017-03-01

    Aneurysmal subarachnoid hemorrhage (aSAH) is a form of hemorrhagic stroke that affects middle-aged individuals and is associated with significant morbidity and/or mortality, especially in those presenting with higher clinical and radiologic grades at the time of admission. Previous studies suggested that blood extravasated after aneurysmal rupture was a potential clinical prognostic factor, but all such studies used qualitative scales to predict prognosis. The purpose of this study is to develop and test a new interactive computer-aided detection (CAD) tool to detect, segment and quantify brain hemorrhage and ventricular cerebrospinal fluid on non-contrast brain CT images. First, the CAD scheme segments the brain from the skull using a multilayer region-growing algorithm with adaptively adjusted thresholds. Second, it assigns pixels inside the segmented brain region to one of three classes, namely normal brain tissue, blood and fluid. Third, to avoid a "black-box" approach and to increase accuracy in quantifying these two image markers on CT images with large noise variation across cases, a graphical user interface (GUI) was implemented that allows users to visually examine segmentation results. If a user wishes to correct any errors (i.e., deleting clinically irrelevant blood or fluid regions, or filling in holes inside relevant blood or fluid regions), he or she can manually define the region and select a corresponding correction function; CAD will then automatically perform the correction and update the computed data. The new CAD tool is now being used in clinical and research settings to estimate various quantitative radiological parameters/markers, to determine the radiological severity of aSAH at presentation, and to correlate the estimates with various homeostatic/metabolic derangements and predict clinical outcome.
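
    The three-class assignment step described above (normal brain tissue, blood, fluid) can be sketched as a simple threshold rule applied inside an already-segmented brain mask. The Hounsfield-unit thresholds and voxel volume below are generic approximations chosen for illustration, not the tool's actual values.

```python
# Illustrative sketch of the three-class labelling step: assign voxels
# inside an already-segmented brain mask to fluid, normal tissue, or blood
# using Hounsfield-unit thresholds (generic approximations, not the tool's
# trained values).
import numpy as np

FLUID, TISSUE, BLOOD = 1, 2, 3

def classify_brain(ct_hu, brain_mask, fluid_max=15.0, blood_min=50.0):
    labels = np.zeros(ct_hu.shape, dtype=np.uint8)
    labels[brain_mask & (ct_hu <= fluid_max)] = FLUID
    labels[brain_mask & (ct_hu > fluid_max) & (ct_hu < blood_min)] = TISSUE
    labels[brain_mask & (ct_hu >= blood_min)] = BLOOD
    return labels

# Toy volume: mostly tissue (~35 HU), a pocket of fluid and a blood region.
ct = np.full((40, 64, 64), 35.0)
ct[10:15, 20:30, 20:30] = 8.0      # ventricular CSF
ct[25:30, 40:50, 40:50] = 65.0     # haemorrhage
mask = np.ones(ct.shape, dtype=bool)

labels = classify_brain(ct, mask)
voxel_ml = 0.5 * 0.5 * 1.0 / 1000.0              # hypothetical voxel volume in mL
print("blood volume ~", round((labels == BLOOD).sum() * voxel_ml, 2), "mL")
```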

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  14. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  16. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    Science.gov (United States)

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. Alongside easy usage, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input to the BCI system is P300 potentials, and a Region Based Paradigm (RBP) is used as the stimulus interface. Performance of the BCI system is evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server successfully enables internet-based wireless control of electrical home appliances through BCIs.
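
    To make the idea of a web interface between the BCI output and the environmental control module more concrete, here is a minimal desktop-Python sketch of such a server. The appliance names, URL scheme and port are invented for illustration, and a plain dictionary stands in for the relay or home-automation hardware; the actual embedded server described in this record and its API are not specified in the abstract.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Hypothetical appliance registry; a real embedded server would switch relays
        # or a home-automation bus instead of updating a dictionary.
        APPLIANCES = {"lamp": False, "tv": False, "fan": False}

        class ControlHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Expected paths such as /lamp/on or /tv/off, issued by the BCI module
                # or by a caregiver's browser.
                parts = [p for p in self.path.split("/") if p]
                if len(parts) == 2 and parts[0] in APPLIANCES and parts[1] in ("on", "off"):
                    APPLIANCES[parts[0]] = (parts[1] == "on")
                    body = f"{parts[0]} switched {parts[1]}".encode()
                    self.send_response(200)
                else:
                    body = b"unknown command"
                    self.send_response(404)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()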

  17. Towards the blackbox computation of magnetic exchange coupling parameters in polynuclear transition-metal complexes: theory, implementation, and application.

    Science.gov (United States)

    Phillips, Jordan J; Peralta, Juan E

    2013-05-07

    We present a method for calculating magnetic coupling parameters from a single spin-configuration via analytic derivatives of the electronic energy with respect to the local spin direction. This method does not introduce new approximations beyond those found in the Heisenberg-Dirac Hamiltonian and a standard Kohn-Sham Density Functional Theory calculation, and in the limit of an ideal Heisenberg system it reproduces the coupling as determined from spin-projected energy-differences. Our method employs a generalized perturbative approach to constrained density functional theory, where exact expressions for the energy to second order in the constraints are obtained by analytic derivatives from coupled-perturbed theory. When the relative angle between magnetization vectors of metal atoms enters as a constraint, this allows us to calculate all the magnetic exchange couplings of a system from derivatives with respect to local spin directions from the high-spin configuration. Because of the favorable computational scaling of our method with respect to the number of spin-centers, as compared to the broken-symmetry energy-differences approach, this opens the possibility for the blackbox exploration of magnetic properties in large polynuclear transition-metal complexes. In this work we outline the motivation, theory, and implementation of this method, and present results for several model systems and transition-metal complexes with a variety of density functional approximations and Hartree-Fock.
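
    For orientation, the Heisenberg-Dirac Hamiltonian referred to above is conventionally written in the form below; sign and normalisation conventions differ between papers, so this is the common textbook form rather than necessarily the exact convention used by the authors.

        $$ \hat{H} = -\sum_{i<j} J_{ij}\, \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j $$

    In the broken-symmetry energy-difference approach mentioned for comparison, each pairwise coupling is extracted from the energies of separate spin configurations, whereas the method above obtains all couplings from derivatives evaluated at a single (high-spin) configuration.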

  18. Implementation of Service Learning and Civic Engagement for Students of Computer Information Systems through a Course Project at the Hashemite University

    Science.gov (United States)

    Al-Khasawneh, Ahmad; Hammad, Bashar K.

    2015-01-01

    Service learning methodologies provide students of information systems with the opportunity to create and implement systems in real-world, public service-oriented social contexts. This paper presents a case study which involves integrating a service learning project into an undergraduate Computer Information Systems course entitled…

  19. Teacher Conceptions and Approaches Associated with an Immersive Instructional Implementation of Computer-Based Models and Assessment in a Secondary Chemistry Classroom

    Science.gov (United States)

    Waight, Noemi; Liu, Xiufeng; Gregorius, Roberto Ma.; Smith, Erica; Park, Mihwa

    2014-01-01

    This paper reports on a case study of an immersive and integrated multi-instructional approach (namely computer-based model introduction and connection with content; facilitation of individual student exploration guided by exploratory worksheet; use of associated differentiated labs and use of model-based assessments) in the implementation of…

  20. An Investigation of Psychological Typology as an Intervening Variable in the Implementation of a Computer Managed Instruction System. Technical Report No. 454.

    Science.gov (United States)

    Bozeman, William C.

    This study explores the relationships between psychological types of users as identified by the Myers-Briggs Type Indicator and factors associated with the implementation and utilization of the Wisconsin System for Instructional Management (WIS-SIM), a computer management information system designed to support management processes in…

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  3. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong

    2015-08-01

    Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ~33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits/min. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
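
    The information transfer rate quoted above is conventionally computed from the number of targets N, the classification accuracy P and the selection time T with the standard BCI formula sketched below. The accuracy value in the example call is an illustrative placeholder, while the 40 targets and the roughly 1.8 s per selection follow from the spelling rate reported in this record.

        import math

        def itr_bits_per_min(n_targets, accuracy, selection_time_s):
            """Standard ITR formula used in SSVEP-BCI studies, in bits per minute."""
            n, p, t = n_targets, accuracy, selection_time_s
            if p >= 1.0:
                bits = math.log2(n)
            elif p <= 0.0:
                bits = 0.0
            else:
                bits = (math.log2(n) + p * math.log2(p)
                        + (1 - p) * math.log2((1 - p) / (n - 1)))
            return 60.0 * bits / t

        # Illustrative call: 40 targets at ~33.3 selections/min (about 1.8 s each),
        # with a hypothetical accuracy of 0.90.
        # print(itr_bits_per_min(40, 0.90, 1.8))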

  4. Rayleigh’s quotient–based damage detection algorithm: Theoretical concepts, computational techniques, and field implementation strategies

    DEFF Research Database (Denmark)

    NJOMO WANDJI, Wilfried

    2017-01-01

    levels are targeted: existence, location, and severity. The proposed algorithm is analytically developed from the dynamics theory and the virtual energy principle. Some computational techniques are proposed for carrying out computations, including discretization, integration, derivation, and suitable...
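
    For reference, the Rayleigh quotient named in the title is, in its standard structural-dynamics form, the ratio below for a mode shape (or trial vector) \(\phi\) with stiffness matrix K and mass matrix M; the specific damage indicators built on it in the paper are not reproduced here.

        $$ R(\phi) = \frac{\phi^{\mathsf{T}} K \phi}{\phi^{\mathsf{T}} M \phi} \approx \omega^2 $$

    For a true mode shape the quotient equals the squared natural frequency, which is the usual motivation for damage indicators of this type.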

  5. Perceived Implementation Barriers of a One-to-One Computing Initiative in a Large Urban School District: A Qualitative Approach

    Science.gov (United States)

    Simmons, Brandon; Martin, Florence

    2016-01-01

    One-to-One Computing initiatives are K-12 Educational environments where student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well (Penuel, 2006). One-to-one computing has gained popularity in several schools and school districts across the world. However, there is limited research…

  6. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  13. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  14. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office [Figure 6: Transfers from all sites in the last 90 days.] For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  16. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  17. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites. [Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.] [Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.] [Figure 3: The volume of data moved between CMS sites in the last six months.] The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office [Figure 2: Number of events per month, for 2012.] Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  1. Effects of the implementation of the web-based patient support system on staff's attitudes towards computers and IT use: a randomised controlled trial.

    Science.gov (United States)

    Koivunen, Marita; Välimäki, Maritta; Patel, Anita; Knapp, Martin; Hätönen, Heli; Kuosmanen, Lauri; Pitkänen, Anneli; Anttila, Minna; Katajisto, Jouko

    2010-09-01

    Utilisation of information technology (IT) in the treatment of people with severe mental health problems is an unknown area in Europe. Using IT and guiding patients to relevant sources of health information require that nursing staff have positive attitudes toward computers and accept IT use as part of daily practice. The aim of the study was to assess the effects of implementing a web-based patient support system on staff's attitudes towards computers and IT use on psychiatric wards. One hundred and forty-nine nurses in two psychiatric hospitals in Finland were randomised to two groups to deliver patient education for patients with schizophrenia and psychosis, with either a web-based system (n = 76) or leaflets (n = 73). After baseline, nurses were followed up for 18 months after the introduction of the system. The primary outcome was nurses' motivation to utilise computers, and the secondary outcomes were nurses' beliefs in and satisfaction with computers, and computer and internet use. There were no statistically significant differences between the study groups in attitudes towards computers (motivation p = 0.936, beliefs p = 0.270, satisfaction p = 0.462) or internet use (p = 0.276). However, nurses' general computer use (p = 0.029) increased more in the leaflet group than in the IT intervention group. We can conclude that IT has promise as an alternative method in patient education, as implementing the web-based patient support system on a daily basis did not have a negative effect on nurses' attitudes towards IT. © 2010 The Authors. Journal compilation © 2010 Nordic College of Caring Science.

  2. An A.P.L. micro-programmed machine: implementation on a Multi-20 mini-computer, memory organization, micro-programming and flowcharts

    International Nuclear Information System (INIS)

    Granger, Jean-Louis

    1975-01-01

    This work presents an APL interpreter implemented on a MULTI 20 mini-computer. It includes a left-to-right syntax analyser and a recursive routine for generation and execution. This routine uses a beating method for array processing. Moreover, dynamic memory allocation is used during the execution of all APL statements. Execution of basic operations has been micro-programmed. The basic APL interpreter has a length of 10 K bytes and uses overlay methods. (author) [fr

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  4. Implementation of an electronic medical record system in previously computer-naïve primary care centres: a pilot study from Cyprus.

    Science.gov (United States)

    Samoutis, George; Soteriades, Elpidoforos S; Kounalakis, Dimitris K; Zachariadou, Theodora; Philalithis, Anastasios; Lionis, Christos

    2007-01-01

    The computer-based electronic medical record (EMR) is an essential new technology in health care, contributing to high-quality patient care and efficient patient management. The majority of southern European countries, however, have not yet implemented universal EMR systems and many efforts are still ongoing. We describe the development of an EMR system and its pilot implementation and evaluation in two previously computer-naïve public primary care centres in Cyprus. One urban and one rural primary care centre along with their personnel (physicians and nurses) were selected to participate. Both qualitative and quantitative evaluation tools were used during the implementation phase. Qualitative data analysis was based on the framework approach, whereas quantitative assessment was based on a nine-item questionnaire and EMR usage parameters. Two public primary care centres participated, and a total of ten health professionals served as EMR system evaluators. Physicians and nurses rated EMR relatively highly, while patients were the most enthusiastic supporters for the new information system. Major implementation impediments were the physicians' perceptions that EMR usage negatively affected their workflow, physicians' legal concerns, lack of incentives, system breakdowns, software design problems, transition difficulties and lack of familiarity with electronic equipment. The importance of combining qualitative and quantitative evaluation tools is highlighted. More efforts are needed for the universal adoption and routine use of EMR in the primary care system of Cyprus as several barriers to adoption exist; however, none is insurmountable. Computerised systems could improve efficiency and quality of care in Cyprus, benefiting the entire population.

  5. SUCCESS OF IMPLEMENTATION OF COMPUTER CRIME ACT (UU ITE NO.11 2008 (A Case Study in the Higher Education Institution in Indonesia

    Directory of Open Access Journals (Sweden)

    Rizki Yudhi Dewantara

    2017-06-01

    The computer crime rate grows rapidly along with the development of the digital world, which has touched almost all aspects of human life, and institutions of higher education cannot be separated from computer crime activities. The paper analyses the implementation of the Indonesian Computer Crime Act (UU ITE No. 11, 2008) in higher education institutions in Indonesia. It aims to investigate the level of computer crime that has occurred in the higher education environment and how successfully the act (UU ITE 11, 2008) has been applied to prevent such crime. The analysis uses descriptive statistics and binary logistic regression. The paper also describes the successful implementation of an Information System Security Policy (ISSP) as a computer crime prevention policy in higher education institutions in Indonesia. Regarding the act itself, the clarity of the objectives and purpose of UU ITE 11, 2008 was low; communication and socialization activities towards society, and especially towards higher education institutions, are still limited; and although a control process for UU ITE 11, 2008 is in place, it runs at a low level. Keywords: computer crime, computer crime act, public policy implementation. ABSTRAK (translated from Indonesian): Computer crime is growing rapidly in line with the development of the digital world, and higher education institutions cannot be separated from computer crime. This study analyses the success of implementing the computer crime act (UU ITE 11, 2008) in higher education institutions in Indonesia. It aims to determine the level of computer crime occurring in the higher education environment and how successfully the act prevents computer crime that may occur and handles crime that is taking place. In line with the research objectives, a quantitative approach was used with several statistical tests, including statistical analysis

  6. Developing computer systems to support emergency operations: Standardization efforts by the Department of Energy and implementation at the DOE Savannah River Site

    International Nuclear Information System (INIS)

    DeBusk, R.E.; Fulton, G.J.; O'Dell, J.J.

    1990-01-01

    This paper describes the development of standards for emergency operations computer systems for the US Department of Energy (DOE). The proposed DOE computer standards prescribe the power and simplicity necessary to meet the expanding needs of emergency managers. The standards include networked UNIX workstations based on the client-server model and software that presents information graphically using icons and windowing technology. The proposed DOE standards are based on those of the computer industry, although DOE is implementing the latest technology to ensure a solid base for future growth. A case study of how these proposed standards are being implemented is also presented. The Savannah River Site (SRS), a DOE facility near Aiken, South Carolina, is automating a manual information system proven over years of development. This system is generalized as a model that can apply to most, if not all, Emergency Operations Centers and can provide timely and validated information to emergency managers. By automating this proven system, the system is made easier to use. As experience in the case study demonstrates, computers are only an effective information tool when used as part of a proven process

  7. Feasibility Study and Cost Benefit Analysis of Thin-Client Computer System Implementation Onboard United States Navy Ships

    National Research Council Canada - National Science Library

    Arbulu, Timothy D; Vosberg, Brian J

    2007-01-01

    The purpose of this MBA project was to conduct a feasibility study and a cost benefit analysis of using thin-client computer systems instead of traditional networks onboard United States Navy ships...

  8. Designing and Implementing a Computational Methods Course for Upper-level Undergraduates and Postgraduates in Atmospheric and Oceanic Sciences

    Science.gov (United States)

    Nelson, E.; L'Ecuyer, T. S.; Douglas, A.; Hansen, Z.

    2017-12-01

    In the modern computing age, scientists must utilize a wide variety of skills to carry out scientific research. Programming, including a focus on collaborative development, has become more prevalent in both academic and professional career paths. Faculty in the Department of Atmospheric and Oceanic Sciences at the University of Wisconsin—Madison recognized this need and recently approved a new course offering for undergraduates and postgraduates in computational methods that was first held in Spring 2017. Three programming languages were covered in the inaugural course semester and development themes such as modularization, data wrangling, and conceptual code models were woven into all of the sections. In this presentation, we will share successes and challenges in developing a research project-focused computational course that leverages hands-on computer laboratory learning and open-sourced course content. Improvements and changes in future iterations of the course based on the first offering will also be discussed.

  9. Computation of Scattering from Bodies of Revolution Using an Entire-Domain Basis Implementation of the Moment Method

    National Research Council Canada - National Science Library

    Ford, Arthur

    1999-01-01

    Research into improved calibration targets for measurement of radar cross-section has created a need for the ability to accurately compute the scattering from perfectly conducting bodies of revolution...

  10. Proposing Hybrid Architecture to Implement Cloud Computing in Higher Education Institutions Using a Meta-synthesis Appro

    Directory of Open Access Journals (Sweden)

    hamid reza bazi

    2017-12-01

    Cloud computing is a new technology that considerably helps Higher Education Institutions (HEIs) to develop and create competitive advantage through inherent characteristics such as flexibility, scalability, accessibility, reliability, fault tolerance and economic efficiency. Due to the numerous advantages of cloud computing, and in order to take advantage of cloud computing infrastructure, the services of universities and HEIs need to migrate to the cloud. However, this transition involves many challenges, one of which is the lack or shortage of an appropriate architecture for migrating to the technology. Using a reliable migration architecture helps managers mitigate the risks of cloud computing technology, so organizations are always searching for a suitable cloud computing architecture. In previous studies, these important features have received little attention and have not been addressed in a comprehensive way. The aim of this study is to use a meta-synthesis method, for the first time, to analyze previously published studies and to suggest an appropriate hybrid cloud migration architecture (IUHEC). We reviewed many papers from relevant journals and conference proceedings, classified the concepts extracted from these papers into related categories and sub-categories, and then developed our proposed hybrid architecture based on these concepts and categories. The proposed architecture was validated by a panel of experts, and Lawshe’s model was used to determine content validity. Due to its innovative yet user-friendly nature, comprehensiveness, and high security, this architecture can help HEIs migrate effectively to a cloud computing environment.

  11. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
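
    The division of large 3D volumes into independently processed blocks, mentioned above as the basis of the GPU partitioning, can be sketched as follows. The block sizes, halo width and the process_on_gpu placeholder are illustrative assumptions; the paper's actual partitioning, imaging-point parallel strategy and CUDA-level optimisations are not detailed in the abstract.

        import numpy as np

        def split_volume(shape, block, halo):
            """Yield index slices covering a 3D volume, each padded with a halo so the
            blocks can be imaged independently and later stitched together."""
            nz, ny, nx = shape
            bz, by, bx = block
            for z0 in range(0, nz, bz):
                for y0 in range(0, ny, by):
                    for x0 in range(0, nx, bx):
                        yield (slice(max(z0 - halo, 0), min(z0 + bz + halo, nz)),
                               slice(max(y0 - halo, 0), min(y0 + by + halo, ny)),
                               slice(max(x0 - halo, 0), min(x0 + bx + halo, nx)))

        # Hypothetical usage: hand each block to one GPU worker.
        # volume = np.memmap("seismic.raw", dtype=np.float32, mode="r",
        #                    shape=(1024, 2048, 2048))
        # for sl in split_volume(volume.shape, block=(256, 512, 512), halo=32):
        #     process_on_gpu(volume[sl])   # placeholder for the migration kernel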

  12. DREAMS and IMAGE: A Model and Computer Implementation for Concurrent, Life-Cycle Design of Complex Systems

    Science.gov (United States)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.

  13. Implementation of the RS232 communication trainer using computers and the ATMEGA microcontroller for interface engineering Courses

    Science.gov (United States)

    Amelia, Afritha; Julham; Viyata Sundawa, Bakti; Pardede, Morlan; Sutrisno, Wiwinta; Rusdi, Muhammad

    2017-09-01

    RS232 serial communication is a communication system between a computer and a microcontroller. This communication is studied in the Department of Electrical Engineering and the Department of Computer Engineering and Informatics at Politeknik Negeri Medan. Until recently, a simulation application installed on the computer was used for the teaching and learning process; the drawback of this system is that it is not useful as a communication method between learner and trainer. Therefore, this study created a ten-stage method, divided into seven stages and three major phases, namely the analysis of potential problems and data collection, trainer design, and empirical testing and revision. After that, the trainer and module were tested in order to obtain feedback from the learners. The results showed that 70.10% of the feedback from the learner questionnaire rated the trainer as reasonable.
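
    A minimal sketch of the kind of PC-side serial exchange such a trainer exercises is shown below, assuming the pySerial package; the port name, baud rate and command strings are placeholders, since the trainer's actual protocol between the computer and the ATmega board is not given in this record.

        import serial  # pySerial, assumed to be installed on the lab computer

        # Port name, baud rate and message format are illustrative only.
        with serial.Serial("COM3", 9600, timeout=1) as link:
            link.write(b"LED ON\n")         # command sent to the microcontroller
            reply = link.readline()         # e.g. an acknowledgement string
            print("ATmega replied:", reply.decode(errors="replace").strip())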

  14. Implementation of Web-Based Education in Egypt through Cloud Computing Technologies and Its Effect on Higher Education

    Science.gov (United States)

    El-Seoud, M. Samir Abou; El-Sofany, Hosam F.; Taj-Eddin, Islam A. T. F.; Nosseir, Ann; El-Khouly, Mahmoud M.

    2013-01-01

    The information technology educational programs at most universities in Egypt face many obstacles that can be overcome using technology enhanced learning. An open source Moodle eLearning platform has been implemented at many public and private universities in Egypt, as an aid to deliver e-content and to provide the institution with various…

  15. Using Innovative Tools to Teach Computer Application to Business Students--A Hawthorne Effect or Successful Implementation Here to Stay

    Science.gov (United States)

    Khan, Zeenath Reza

    2014-01-01

    A year after the primary study that tested the impact of introducing blended learning and guided discovery to help teach computer application to business students, this paper looks into the continued success of using guided discovery and blended learning with learning management system in and out of classrooms to enhance student learning.…

  16. The evaluation of a national research plan to support the implementation of computers in education in The Netherlands (ED 310737)

    NARCIS (Netherlands)

    Moonen, J.C.M.M.; Collis, Betty; Koster, Klaas

    1990-01-01

    This paper describes the evolution of a national research plan for computers and education, an approach which was initiated in the Netherlands in 1983. Two phases can be recognized in the Dutch experience: one from 1984 until 1988 and one from 1989 until 1992. Building upon the experiences of the

  17. Development, Implementation, and Outcomes of an Equitable Computer Science After-School Program: Findings from Middle-School Students

    Science.gov (United States)

    Mouza, Chrystalla; Marzocchi, Alison; Pan, Yi-Cheng; Pollock, Lori

    2016-01-01

    Current policy efforts that seek to improve learning in science, technology, engineering, and mathematics (STEM) emphasize the importance of helping all students acquire concepts and tools from computer science that help them analyze and develop solutions to everyday problems. These goals have been generally described in the literature under the…

  18. Implementation and Evaluation of Flipped Classroom as IoT Element into Learning Process of Computer Network Education

    Science.gov (United States)

    Zhamanov, Azamat; Yoo, Seong-Moo; Sakhiyeva, Zhulduz; Zhaparov, Meirambek

    2018-01-01

    Students nowadays are hard to be motivated to study lessons with traditional teaching methods. Computers, smartphones, tablets and other smart devices disturb students' attentions. Nevertheless, those smart devices can be used as auxiliary tools of modern teaching methods. In this article, the authors review two popular modern teaching methods:…

  19. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    International Nuclear Information System (INIS)

    Capone, V; Esposito, R; Pardi, S; Taurino, F; Tortone, G

    2012-01-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing live replication of data in this way. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  20. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing live replication of data in this way. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
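
    The management scripts described above are not published in the abstract, but the kind of task they automate can be sketched as follows, assuming a libvirt/KVM stack driven through the virsh command-line tool (the record itself does not name the virtualization layer); the domain and host names are placeholders.

        import subprocess

        def live_migrate(domain, dest_host):
            """Live-migrate a running VM to another hypervisor via libvirt's virsh."""
            uri = f"qemu+ssh://{dest_host}/system"
            subprocess.run(["virsh", "migrate", "--live", domain, uri], check=True)

        def restart_if_down(domain):
            """Start a VM again if it is no longer among the running domains."""
            running = subprocess.run(["virsh", "list", "--name"],
                                     capture_output=True, text=True, check=True).stdout
            if domain not in running.split():
                subprocess.run(["virsh", "start", domain], check=True)

        # Hypothetical usage:
        # live_migrate("tier2-service-01", "hypervisor02.example.org")
        # restart_if_down("tier2-service-01")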

  1. Putting all that (HEP-) data to work - a REAL implementation of an unlimited computing and storage architecture

    International Nuclear Information System (INIS)

    Ernst, Michael

    1996-01-01

    Since computing in HEP left the mainframe path, many institutions have demonstrated a successful migration to workstation-based computing, especially for applications requiring a high CPU-to-I/O ratio. However, the difficulties and the complexity start beyond just providing CPU cycles. Critical applications, requiring either sequential access to large amounts of data or access to many small sets out of a multi-10-Terabyte data repository, need technical approaches we have not had so far. Though we felt that we were hardly able to follow the technology evolving in the various fields, we recently had to realize that even politics overtook technical evolution, at least in the areas mentioned above. The USA is making peace with Russia. DEC is talking to IBM, SGI is communicating with HP. All these things became true. Unfortunately, the Cold War lasted 50 years, and, in a relative sense, we were afraid that 50 years seemed to be how long any self-respecting high-performance computer (or a set of workstations) had to wait for data from its server; fortunately, we are now facing a similar progress of friendliness, harmony and balance in the formerly problematic (computing) areas. Buzzwords mentioned many thousands of times in talks describing today's and future requirements, including Functionality, Reliability, Scalability, Modularity and Portability, are no longer just phrases, wishes and dreams. At DESY, we are in the process of demonstrating an architecture that takes those five issues equally into consideration, including Heterogeneous Computing Platforms with ultimate file system approaches, Heterogeneous Mass Storage Devices and an Open Distributed Hierarchical Mass Storage Management System. This contribution will provide an overview of how far we got and what the next steps will be. (author)

  2. Implementing the flipped classroom methodology to the subject "Applied computing" of the chemical engineering degree at the University of Barcelona

    Directory of Open Access Journals (Sweden)

    Montserrat Iborra

    2017-06-01

    This work is focused on the implementation, development, documentation, analysis and assessment of the flipped classroom methodology, by means of a just-in-time teaching strategy, in a pilot group (1 of 6) of the subject “Applied Computing” of the Chemical Engineering Undergraduate Degree of the University of Barcelona. The results show that this technique promotes self-learning, autonomy and time management, as well as an increase in the effectiveness of classroom hours.

  3. Costs associated with implementation of computer-assisted clinical decision support system for antenatal and delivery care: case study of Kassena-Nankana district of northern Ghana.

    Science.gov (United States)

    Dalaba, Maxwell Ayindenaba; Akweongo, Patricia; Williams, John; Saronga, Happiness Pius; Tonchev, Pencho; Sauerborn, Rainer; Mensah, Nathan; Blank, Antje; Kaltschmidt, Jens; Loukanova, Svetla

    2014-01-01

    This study analyzed the cost of implementing a computer-assisted Clinical Decision Support System (CDSS) in selected health care centres in Ghana. A descriptive cross-sectional study was conducted in the Kassena-Nankana district (KND). CDSS was deployed in selected health centres in KND as an intervention to manage patients attending antenatal clinics and the labour ward. The CDSS users were mainly nurses who were trained. Activities and associated costs involved in the implementation of CDSS (pre-intervention and intervention) were collected for the period 2009-2013 from the provider perspective. The ingredients approach was used for the cost analysis. Costs were grouped into personnel, training, overheads (recurrent costs) and equipment costs (capital cost). We calculated cost without annualizing capital cost to represent financial cost and cost with annualizing capital costs to represent economic cost. Twenty-two trained CDSS users (at least 2 users per health centre) participated in the study. Between April 2012 and March 2013, users managed 5,595 antenatal clients and 872 labour clients using the CDSS. We observed a decrease in the proportion of complications during delivery (pre-intervention 10.74% versus post-intervention 9.64%) and a reduction in the number of maternal deaths (pre-intervention 4 deaths versus post-intervention 1 death). The overall financial cost of CDSS implementation was US$23,316, approximately US$1,060 per CDSS user trained. Of the total cost of implementation, 48% (US$11,272) was pre-intervention cost and 52% (US$12,044) was intervention cost. Equipment costs accounted for the largest proportion of financial cost: 34% (US$7,917). When economic cost was considered, the total cost of implementation was US$17,128, lower than the financial cost by 26.5%. The study provides useful information on the implementation of CDSS at health facilities to enhance health workers' adherence to practice guidelines and support accurate decisions to improve

  4. A schema for knowledge representation and its implementation in a computer-aided design and manufacturing system

    Energy Technology Data Exchange (ETDEWEB)

    Tamir, D.E.

    1989-01-01

    Modularity in the design and implementation of expert systems relies upon cooperation among the expert systems and communication of knowledge between them. A prerequisite for an effective modular approach is some standard for knowledge representation to be used by the developers of the different modules. In this work the author presents a schema for knowledge representation, and applies this schema in the design of a rule-based expert system. He also implements a cooperative expert system using the proposed knowledge representation method. A knowledge representation schema is a formal specification of the internal, conceptual, and external components of a knowledge base, each specified in a separate schema. The internal schema defines the structure of a knowledge base, the conceptual schema defines the concepts, and the external schema formalizes the pragmatics of a knowledge base. The schema is the basis for standardizing knowledge representation systems and it is used in the various phases of design and specification of the knowledge base. A new model of knowledge representation based on a pattern recognition interpretation of implications is developed. This model implements the concept of linguistic variables and can, therefore, emulate human reasoning with linguistic imprecision. The test case for the proposed knowledge representation schema is a cooperative expert system composed of two expert systems. This system applies a pattern recognition interpretation of a generalized one-variable implication with linguistic variables.
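
    As a generic illustration of the linguistic-variable idea mentioned above (not the author's actual schema), the sketch below evaluates a single fuzzy rule whose antecedent is a linguistic term defined by a triangular membership function; all names and values are hypothetical.

```python
# Generic fuzzy-rule example with a linguistic variable ("temperature is high").
def triangular(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def evaluate_rule(crisp_input, membership, consequent_strength=1.0):
    """Degree of the consequent = degree of match of the antecedent (min-style implication)."""
    return min(membership(crisp_input), consequent_strength)

high_temp = lambda t: triangular(t, 60.0, 80.0, 100.0)   # linguistic term "high"
print(evaluate_rule(75.0, high_temp))                     # rule fires with degree 0.75
```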

  5. Generating randomised virtualised scenarios for ethical hacking and computer security education: SecGen implementation and deployment

    OpenAIRE

    Schreuders, ZC; Ardern, L

    2015-01-01

    Computer security students benefit from having hands-on experience with hacking tools and with access to vulnerable systems that they can attack and defend. However, vulnerable VMs are static; once they have been exploited by a student there is no repeatable challenge as the vulnerable boxes never change. A novel solution, SecGen, has been created and deployed. SecGen solves the issue by creating vulnerable machines with randomised vulnerabilities and services, with constraints that ensur...

  6. Dispersed flow film boiling: An investigation of the possibility to improve the models implemented in the NRC computer codes for the reflooding phase of the LOCA

    International Nuclear Information System (INIS)

    Andreani, M.; Yadigaroglu, G.; Paul Scherrer Inst.

    1992-08-01

    Dispersed Flow Film Boiling is the heat transfer regime that occurs at high void fractions in a heated channel. The way this heat transfer mode is modelled in the NRC computer codes (RELAP5 and TRAC) and the validity of the assumptions and empirical correlations used are discussed. An extensive review of the theoretical and experimental work related to heat transfer to highly dispersed mixtures reveals the basic deficiencies of these models; the investigation refers mostly to the typical conditions of low-rate bottom reflooding, since the simulation of this physical situation by the computer codes has often shown poor results. The alternative models that are available in the literature are reviewed, and their merits and limits are highlighted. The modifications that could improve the physics of the models implemented in the codes are identified

  7. Development of point Kernel radiation shielding analysis computer program implementing recent nuclear data and graphic user interfaces

    International Nuclear Information System (INIS)

    Kang, S.; Lee, S.; Chung, C.

    2002-01-01

    As the number of nuclear and conventional facilities using radiation or radioisotopes rises, there is an increasing demand for the safe and efficient use of radiation and for the shielding analysis that accompanies radiological work. Most Korean industries and research institutes, including Korea Power Engineering Company (KOPEC), have been using foreign computer programs for radiation shielding analysis. Korean nuclear regulations have introduced new laws regarding dose limits and radiological guides as prescribed in ICRP 60. Thus, radiation facilities should be designed and operated to comply with these new regulations. In addition, the previous point kernel shielding computer code utilizes antiquated nuclear data (mass attenuation coefficients, buildup factors, etc.) which were developed in the 1950s and 1960s. Subsequently, these nuclear data have been updated during the past few decades. KOPEC's strategic directive is to become a self-sufficient and independent nuclear design technology company; thus, KOPEC decided to develop a new radiation shielding computer program that incorporates the latest regulatory requirements and updated nuclear data. This new code was designed by KOPEC in developmental cooperation with the Department of Nuclear Engineering at Hanyang University. VisualShield is designed with a graphical user interface to allow even users unfamiliar with radiation shielding theory to proficiently prepare input data sets and analyze output results
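
    For readers unfamiliar with the point kernel technique that such codes implement, the sketch below shows the basic kernel for an isotropic point source behind a slab shield: uncollided flux attenuated by exp(-mu*t), corrected by a buildup factor and a flux-to-dose conversion. It is illustrative only and is not taken from VisualShield; the linear buildup model and the numerical values are assumptions.

```python
# Minimal point-kernel dose-rate sketch (illustrative, not VisualShield code).
import math

def point_kernel_dose_rate(source_strength, mu, thickness, distance, k_dose):
    """Uncollided flux attenuated by exp(-mu*t), corrected by a buildup factor B."""
    mfp = mu * thickness                      # shield thickness in mean free paths
    buildup = 1.0 + mfp                       # crude linear buildup approximation (assumed)
    flux = source_strength * math.exp(-mfp) / (4.0 * math.pi * distance**2)
    return k_dose * buildup * flux            # k_dose: flux-to-dose conversion factor

# Example: 1e9 photons/s source, mu = 0.06 /cm (assumed), 10 cm shield, 100 cm away
print(point_kernel_dose_rate(1e9, 0.06, 10.0, 100.0, k_dose=1.0))
```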

  8. RELAP4/MOD5: a computer program for transient thermal-hydraulic analysis of nuclear reactors and related systems. User's manual. Volume II. Program implementation

    International Nuclear Information System (INIS)

    1976-06-01

    A discussion is presented of the use of the RELAP4/MOD5 computer program in simulating the thermal-hydraulic behavior of light-water reactor systems when subjected to postulated transients such as a LOCA, pump failure, or nuclear excursion. The volume is divided into main sections which cover: (1) program description, (2) input data, (3) problem initialization, (4) user guidelines, (5) output discussion, (6) source program description, (7) implementation requirements, (8) data files, (9) description of PLOTR4M, (10) description of STH20, (11) summary flowchart, (12) sample problems, (13) problem definition, and (14) problem input

  9. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on studies of cold neutron sources (CNS) in research reactors was to create a complete set of tools to design CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside the reactor reflector that remain valid for parametric studies. On one hand, deterministic codes have reasonable computation times but introduce problems for the geometrical description. On the other hand, Monte Carlo codes make it possible to compute on a precise geometry, but need computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone in the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (case of the cold neutron source of the Orphee reactor). The short response time makes it possible to carry out parametric studies with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas, used for research in the condensed matter field, makes it possible to obtain fluxes after transmission through neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the nuclear core, where neutrons are created, to the exit of the neutron guides, on samples of matter. This complete calculation scheme is tested against ILL4 measurements of flux in cold neutron guides. (authors)

  10. Implementation of a cell-wise block-Gauss-Seidel iterative method for SN transport on a hybrid parallel computer architecture

    International Nuclear Information System (INIS)

    Rosa, Massimiliano; Warsa, James S.; Perks, Michael

    2011-01-01

    We have implemented a cell-wise, block-Gauss-Seidel (bGS) iterative algorithm for the solution of the S_n transport equations on the Roadrunner hybrid parallel computer architecture. A compute node of this massively parallel machine comprises AMD Opteron cores that are linked to a Cell Broadband Engine™ (Cell/B.E.). LAPACK routines have been ported to the Cell/B.E. in order to make use of its parallel Synergistic Processing Elements (SPEs). The bGS algorithm is based on the LU factorization and solution of a linear system that couples the fluxes for all S_n angles and energy groups on a mesh cell. For every cell of a mesh that has been parallel decomposed on the higher-level Opteron processors, a linear system is transferred to the Cell/B.E. and the parallel LAPACK routines are used to compute a solution, which is then transferred back to the Opteron, where the rest of the computations for the S_n transport problem take place. Compared to standard parallel machines, a hundred-fold speedup of the bGS was observed on the hybrid Roadrunner architecture. Numerical experiments with strong and weak parallel scaling demonstrate the bGS method is viable and compares favorably to full parallel sweeps (FPS) on two-dimensional, unstructured meshes when it is applied to optically thick, multi-material problems. As expected, however, it is not as efficient as FPS in optically thin problems. (author)
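
    The following is an illustrative, serial sketch of the cell-wise block-Gauss-Seidel idea (not the Roadrunner implementation): each mesh cell owns a dense block coupling its unknowns, the block is factorized once, and neighbour coupling is lagged within the sweep. The data layout and names are assumptions.

```python
# Toy cell-wise block-Gauss-Seidel iteration over per-cell dense blocks.
import numpy as np

def block_gauss_seidel(A_blocks, C_offdiag, b, n_iter=50):
    """A_blocks[i]: dense block for cell i; C_offdiag[(i, j)]: coupling of cell j into cell i."""
    n_cells = len(A_blocks)
    x = [np.zeros(len(b[i])) for i in range(n_cells)]
    # Each cell block is "factorized" once; small blocks, so an explicit inverse for brevity
    inv_blocks = [np.linalg.inv(A) for A in A_blocks]
    for _ in range(n_iter):
        for i in range(n_cells):
            rhs = b[i].copy()
            for (ii, j), C in C_offdiag.items():
                if ii == i:
                    rhs -= C @ x[j]            # use the latest available neighbour values
            x[i] = inv_blocks[i] @ rhs
    return x

# Two-cell toy system with weak neighbour coupling
A = [np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([[5.0, 2.0], [2.0, 6.0]])]
C = {(0, 1): -0.5 * np.eye(2), (1, 0): -0.5 * np.eye(2)}
b = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
print(block_gauss_seidel(A, C, b))
```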

  11. Developing Proper Systems for Successful Cloud Computing Implementation Using Fuzzy ARAS Method (Case Study: University of Tehran Faculty of New Science and Technology)

    Directory of Open Access Journals (Sweden)

    Jalil Heidaryd Dahooie

    2017-12-01

    Given the increasing requirements of communication and the need for advanced network-based technologies, cloud computing has been suggested as a suitable strategy to achieve these objectives. Yet, despite the development of computing applications and the increased number of alternatives, it is quite a difficult task to select the right software platform for the implementation of cloud computing arrangements. In this line, the present paper aimed to develop a scientific framework for selecting the proper software for successful cloud computing implementation at the infrastructure level. First, through a review of the related literature and experts' opinions, the software selection criteria were extracted. Based on the framework proposed here, the interval-valued fuzzy ARAS method was then employed for weighting and prioritizing the specified alternatives. This model was applied by the Faculty of New Sciences and Technologies of Tehran University in order to select proper software platforms from among five alternatives. The results revealed that the OpenStack cloud operating system was selected as the best alternative, most probably because of its merits such as a high level of performance, reliability and security, stability, and usability.
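
    For orientation, the sketch below implements a simplified, crisp ARAS (Additive Ratio Assessment) ranking; the study itself uses an interval-valued fuzzy extension, and the weights and decision matrix shown here are made-up placeholders.

```python
# Crisp ARAS ranking sketch: normalise, weight, sum, and compare to an optimal row.
import numpy as np

def aras_rank(decision_matrix, weights, benefit):
    """decision_matrix: alternatives x criteria; benefit[j] is True for benefit criteria."""
    X = np.asarray(decision_matrix, dtype=float)
    # Prepend the 'optimal' alternative: best value per criterion
    opt = np.where(benefit, X.max(axis=0), X.min(axis=0))
    X = np.vstack([opt, X])
    X = np.where(benefit, X, 1.0 / X)      # cost criteria are inverted before normalisation
    X = X / X.sum(axis=0)                  # column-wise normalisation
    S = (X * weights).sum(axis=1)          # optimality function per alternative
    return S[1:] / S[0]                    # utility degree relative to the optimal row

K = aras_rank([[0.8, 0.6, 3.0], [0.7, 0.9, 2.0]],
              weights=[0.5, 0.3, 0.2],
              benefit=[True, True, False])
print(K)                                   # higher utility degree = better alternative
```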

  12. Computational toxicology as implemented by the U.S. EPA: providing high throughput decision support tools for screening and assessing chemical exposure, hazard and risk.

    Science.gov (United States)

    Kavlock, Robert; Dix, David

    2010-02-01

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the Toxicity of Chemicals (U.S. EPA, 2009a). Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast) and exposure (ExpoCast), and creating virtual liver (v-Liver) and virtual embryo (v-Embryo) systems models. U.S. EPA-funded STAR centers are also providing bioinformatics, computational toxicology data and models, and developmental toxicity data and models. The models and underlying data are being made publicly

  13. Bio-inspired feedback-circuit implementation of discrete, free energy optimizing, winner-take-all computations.

    Science.gov (United States)

    Genewein, Tim; Braun, Daniel A

    2016-06-01

    Bayesian inference and bounded rational decision-making require the accumulation of evidence or utility, respectively, to transform a prior belief or strategy into a posterior probability distribution over hypotheses or actions. Crucially, this process cannot be simply realized by independent integrators, since the different hypotheses and actions also compete with each other. In continuous time, this competitive integration process can be described by a special case of the replicator equation. Here we investigate simple analog electric circuits that implement the underlying differential equation under the constraint that we only permit a limited set of building blocks that we regard as biologically interpretable, such as capacitors, resistors, voltage-dependent conductances and voltage- or current-controlled current and voltage sources. The appeal of these circuits is that they intrinsically perform normalization without requiring an explicit divisive normalization. However, even in idealized simulations, we find that these circuits are very sensitive to internal noise as they accumulate error over time. We discuss in how far neural circuits could implement these operations that might provide a generic competitive principle underlying both perception and action.
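
    A small numerical sketch of the replicator dynamics referred to above (a simple Euler integration, not the analog-circuit implementation studied in the paper); the evidence values are assumed for illustration.

```python
# Euler integration of replicator dynamics: dx_i/dt = x_i * (f_i - sum_j x_j f_j).
import numpy as np

def replicator_step(x, fitness, dt=0.01):
    """One Euler step; x is renormalised to guard against numerical drift."""
    avg = np.dot(x, fitness)
    x = x + dt * x * (fitness - avg)
    return x / x.sum()

x = np.array([0.4, 0.3, 0.3])              # prior belief over three hypotheses
evidence = np.array([1.0, 0.2, -0.5])      # assumed, fixed evidence/utility terms
for _ in range(1000):
    x = replicator_step(x, evidence)
print(x)                                   # mass concentrates on the best-supported hypothesis
```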

  14. User's guide for the implementation of level one of the proposed American National Standard Specifications for an information interchange data descriptive file on control data 6000/7000 series computers

    CERN Document Server

    Wiley, R A

    1977-01-01

    User's guide for the implementation of level one of the proposed American National Standard Specifications for an information interchange data descriptive file on control data 6000/7000 series computers

  15. Implementation and Evaluation of the Streamflow Statistics (StreamStats) Web Application for Computing Basin Characteristics and Flood Peaks in Illinois

    Science.gov (United States)

    Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.

    2010-01-01

    Illinois StreamStats (ILSS) is a Web-based application for computing selected basin characteristics and flood-peak quantiles based on the most recently published (as of 2010) regional flood-frequency equations (Soong and others, 2004) at any rural stream location in Illinois. Limited streamflow statistics including general statistics, flow durations, and base flows also are available for U.S. Geological Survey (USGS) streamflow-gaging stations. ILSS can be accessed on the Web at http://streamstats.usgs.gov/ by selecting the State Applications hyperlink and choosing Illinois from the pull-down menu. ILSS was implemented for Illinois by obtaining and projecting ancillary geographic information system (GIS) coverages; populating the StreamStats database with streamflow-gaging station data; hydroprocessing the 30-meter digital elevation model (DEM) for Illinois to conform to streams represented in the National Hydrographic Dataset 1:100,000 stream coverage; and customizing the Web-based Extensible Markup Language (XML) programs for computing basin characteristics for Illinois. The basin characteristics computed by ILSS then were compared to the basin characteristics used in the published study, and adjustments were applied to the XML algorithms for slope and basin length. Testing of ILSS was accomplished by comparing flood quantiles computed by ILSS at an approximately random sample of 170 streamflow-gaging stations with the published flood quantile estimates. Differences between the log-transformed flood quantiles were not statistically significant at the 95-percent confidence level for the State as a whole, nor by the regions determined by each equation, except for region 1, in the northwest corner of the State. In region 1, the average difference in flood quantile estimates ranged from 3.76 percent for the 2-year flood quantile to 4.27 percent for the 500-year flood quantile. The total number of stations in region 1 was small (21) and the mean

  16. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    International Nuclear Information System (INIS)

    Kim, Sangroh; Yoshizumi, Terry T; Yin Fangfang; Chetty, Indrin J

    2013-01-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the
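
    A toy sketch of the table-movement idea described above: the isocenter is translated along z as a function of gantry angle, with translation per rotation equal to pitch times collimation. The parameter names and values are assumptions, not the DOSXYZnrc variables.

```python
# Illustrative spiral-scan isocenter translation along z as a function of gantry angle.
import math

def isocenter_z(theta, z_start, pitch, collimation_cm):
    """theta in radians from the start of the scan; translation per rotation = pitch * collimation."""
    rotations = theta / (2.0 * math.pi)
    return z_start + rotations * pitch * collimation_cm

# Example: pitch 1.375, 4 cm collimation, isocenter position after 3 full rotations
print(isocenter_z(theta=3 * 2 * math.pi, z_start=0.0, pitch=1.375, collimation_cm=4.0))
```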

  17. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    Science.gov (United States)

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan-scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral

  18. FORMALIZATION OF THE ACCOUNTING VALUABLE MEMES METHOD FOR THE PORTFOLIO OF ORGANIZATION DEVELOPMENT AND INFORMATION COMPUTER TOOLS FOR ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Serhii D. Bushuiev

    2017-12-01

    The current state of project management has been steadily demonstrating a trend toward an increasing role for flexible, "soft" management practices. A method for preparing decisions on the formation of a value-oriented portfolio, based on a comparison of the levels of internal organizational values, is proposed. The method formalizes the methodological foundations of value-oriented portfolio management in the development of organizations in the form of approaches, basic terms and technological methods using ICT, which makes it possible to use them as an integral knowledge system for creating an automated system for managing the portfolios of organizations. The result of the study is a deepening of the theoretical provisions for managing the development of organizations through the implementation of a value-oriented portfolio of projects, which made it possible to formalize the method of recording value memes in the development portfolios of organizations and to disclose its logic, essence, objective basis and rules.

  19. [The implementation of computer model in research of dynamics of proliferation of cells of thyroid gland follicle].

    Science.gov (United States)

    Abduvaliev, A A; Gil'dieva, M S; Khidirov, B N; Saĭdalieva, M; Khasanov, A A; Musaeva, Sh N; Saatov, T S

    2012-04-01

    The article presents the results of computational experiments investigating the dynamics of proliferation of thyroid gland follicle cells under normal conditions and in the case of malignant neoplasm. The model studies demonstrated that a chronic increase in the proliferation parameter of thyroid gland follicle cells results in abnormal behavior of the cell numbers of the thyroid gland follicle cenosis. The stationary state breaks down and auto-oscillations occur, with a transition to irregular oscillations with unpredictable cell proliferation and, further, to the "black hole" effect. It is shown that the available medical and biological experimental data and theoretical propositions concerning the structural and functional organization of the thyroid gland at the cellular level permit the development of mathematical models for quantitative analysis of the cell numbers of the thyroid gland follicle cenosis under normal conditions. The technique of modeling the regulatory mechanisms of living systems and the equations of cell cenosis regulation were used

  20. Implementation of a web-based, interactive polytrauma tutorial in computed tomography for radiology residents: How we do it

    International Nuclear Information System (INIS)

    Schlorhaufer, C.; Behrends, M.; Diekhaus, G.; Keberle, M.; Weidemann, J.

    2012-01-01

    Purpose: Due to the time factor in polytraumatized patients all relevant pathologies in a polytrauma computed tomography (CT) scan have to be read and communicated very quickly. During radiology residency acquisition of effective reading schemes based on typical polytrauma pathologies is very important. Thus, an online tutorial for the structured diagnosis of polytrauma CT was developed. Materials and methods: Based on current multimedia theories like the cognitive load theory a didactic concept was developed. As a web-environment the learning management system ILIAS was chosen. CT data sets were converted into online scrollable QuickTime movies. Audiovisual tutorial movies with guided image analyses by a consultant radiologist were recorded. Results: The polytrauma tutorial consists of chapterized text content and embedded interactive scrollable CT data sets. Selected trauma pathologies are demonstrated to the user by guiding tutor movies. Basic reading schemes are communicated with the help of detailed commented movies of normal data sets. Common and important pathologies could be explored in a self-directed manner. Conclusions: Ambitious didactic concepts can be supported by a web based application on the basis of cognitive load theory and currently available software tools.

  1. Rapid Reconstitution Packages (RRPs) implemented by integration of computational fluid dynamics (CFD) and 3D printed microfluidics.

    Science.gov (United States)

    Chi, Albert; Curi, Sebastian; Clayton, Kevin; Luciano, David; Klauber, Kameron; Alexander-Katz, Alfredo; D'hers, Sebastian; Elman, Noel M

    2014-08-01

    Rapid Reconstitution Packages (RRPs) are portable platforms that integrate microfluidics for rapid reconstitution of lyophilized drugs. Rapid reconstitution of lyophilized drugs using standard vials and syringes is an error-prone process. RRPs were designed using computational fluid dynamics (CFD) techniques to optimize fluidic structures for rapid mixing and integrating physical properties of targeted drugs and diluents. Devices were manufactured using stereo lithography 3D printing for micrometer structural precision and rapid prototyping. Tissue plasminogen activator (tPA) was selected as the initial model drug to test the RRPs as it is unstable in solution. tPA is a thrombolytic drug, stored in lyophilized form, required in emergency settings for which rapid reconstitution is of critical importance. RRP performance and drug stability were evaluated by high-performance liquid chromatography (HPLC) to characterize release kinetics. In addition, enzyme-linked immunosorbent assays (ELISAs) were performed to test for drug activity after the RRPs were exposed to various controlled temperature conditions. Experimental results showed that RRPs provided effective reconstitution of tPA that strongly correlated with CFD results. Simulation and experimental results show that release kinetics can be adjusted by tuning the device structural dimensions and diluent drug physical parameters. The design of RRPs can be tailored for a number of applications by taking into account physical parameters of the active pharmaceutical ingredients (APIs), excipients, and diluents. RRPs are portable platforms that can be utilized for reconstitution of emergency drugs in time-critical therapies.

  2. Clinical implementation of an emergency department coronary computed tomographic angiography protocol for triage of patients with suspected acute coronary syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Ghoshhajra, Brian B.; Staziaki, Pedro V.; Vadvala, Harshna; Kim, Phillip; Meyersohn, Nandini M.; Janjua, Sumbal A.; Hoffmann, Udo [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Takx, Richard A.P. [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Neilan, Tomas G.; Francis, Sanjeev [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Massachusetts General Hospital and Harvard Medical School, Division of Cardiology, Boston, MA (United States); Bittner, Daniel [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nuernberg (FAU), Department of Medicine 2 - Cardiology, Erlangen (Germany); Mayrhofer, Thomas [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Stralsund University of Applied Sciences, School of Business Studies, Stralsund (Germany); Greenwald, Jeffrey L. [Massachusetts General Hospital and Harvard Medical School, Department of Medicine, Boston, MA (United States); Truong, Quyhn A. [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Weill Cornell College of Medicine, Department of Radiology, New York, NY (United States); Abbara, Suhny [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); UT Southwestern Medical Center, Department Cardiothoracic Imaging, Dallas, TX (United States); Brown, David F.M.; Nagurney, John T. [Massachusetts General Hospital and Harvard Medical School, Department of Emergency Medicine, Boston, MA (United States); Januzzi, James L. [Massachusetts General Hospital and Harvard Medical School, Division of Cardiology, Boston, MA (United States); Collaboration: MGH Emergency Cardiac CTA Program Contributors

    2017-07-15

    To evaluate the efficiency and safety of emergency department (ED) coronary computed tomography angiography (CTA) during a 3-year clinical experience. Single-center registry of coronary CTA in consecutive ED patients with suspicion of acute coronary syndrome (ACS). The primary outcome was efficiency of coronary CTA defined as the length of hospitalization. Secondary endpoints of safety were defined as the rate of downstream testing, normalcy rates of invasive coronary angiography (ICA), absence of missed ACS, and major adverse cardiac events (MACE) during follow-up, and index radiation exposure. One thousand twenty two consecutive patients were referred for clinical coronary CTA with suspicion of ACS. Overall, median time to discharge home was 10.5 (5.7-24.1) hours. Patient disposition was 42.7 % direct discharge from the ED, 43.2 % discharge from emergency unit, and 14.1 % hospital admission. ACS rate during index hospitalization was 9.1 %. One hundred ninety two patients underwent additional diagnostic imaging and 77 underwent ICA. The positive predictive value of CTA compared to ICA was 78.9 % (95 %-CI 68.1-87.5 %). Median CT radiation exposure was 4.0 (2.5-5.8) mSv. No ACS was missed; MACE at follow-up after negative CTA was 0.2 %. Coronary CTA in an experienced tertiary care setting allows for efficient and safe management of patients with suspicion for ACS. (orig.)

  3. Implementation of a web-based, interactive polytrauma tutorial in computed tomography for radiology residents: How we do it

    Energy Technology Data Exchange (ETDEWEB)

    Schlorhaufer, C., E-mail: Schlorhaufer.Celia@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Behrends, M., E-mail: behrends.marianne@mh-hannover.de [Peter L. Reichertz Department of Medical Informatics, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Diekhaus, G., E-mail: Diekhaus.Gesche@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Keberle, M., E-mail: m.keberle@bk-paderborn.de [Department of Diagnostic and Interventional Radiology, Brüderkrankenhaus St. Josef Paderborn, Husener Str. 46, 33098 Paderborn (Germany); Weidemann, J., E-mail: Weidemann.Juergen@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany)

    2012-12-15

    Purpose: Due to the time factor in polytraumatized patients all relevant pathologies in a polytrauma computed tomography (CT) scan have to be read and communicated very quickly. During radiology residency acquisition of effective reading schemes based on typical polytrauma pathologies is very important. Thus, an online tutorial for the structured diagnosis of polytrauma CT was developed. Materials and methods: Based on current multimedia theories like the cognitive load theory a didactic concept was developed. As a web-environment the learning management system ILIAS was chosen. CT data sets were converted into online scrollable QuickTime movies. Audiovisual tutorial movies with guided image analyses by a consultant radiologist were recorded. Results: The polytrauma tutorial consists of chapterized text content and embedded interactive scrollable CT data sets. Selected trauma pathologies are demonstrated to the user by guiding tutor movies. Basic reading schemes are communicated with the help of detailed commented movies of normal data sets. Common and important pathologies could be explored in a self-directed manner. Conclusions: Ambitious didactic concepts can be supported by a web based application on the basis of cognitive load theory and currently available software tools.

  4. Experiences with establishing and implementing learning management system and computer-based test system in medical college.

    Science.gov (United States)

    Park, Joo Hyun; Son, Ji Young; Kim, Sun

    2012-09-01

    The purpose of this study was to establish an e-learning system to support learning in medical education and identify solutions for improving the system. A learning management system (LMS) and computer-based test (CBT) system were established to support e-learning for medical students. A survey of 219 first- and second-grade medical students was administered. The questionnaire included 9 forced choice questions about the usability of system and 2 open-ended questions about necessary improvements to the system. The LMS consisted of a class management, class evaluation, and class attendance system. CBT consisted of a test management, item bank, and authoring tool system. The results of the survey showed a high level of satisfaction in all system usability items except for stability. Further, the advantages of the e-learning system were ensuring information accessibility, providing constant feedback, and designing an intuitive interface. Necessary improvements to the system were stability, user control, readability, and diverse device usage. Based on the findings, suggestions for developing an e-learning system to improve usability by medical students and support learning effectively are recommended.

  5. MODEL OF THE IMPLEMENTATION PROCESS OF DESIGNING A CLOUD-BASED LEARNING ENVIRONMENT FOR THE PREPARATION OF BACHELOR OF COMPUTER SCIENCE

    Directory of Open Access Journals (Sweden)

    Vakaliuk T.

    2017-12-01

    The article presents a model of the process of implementing the design of a cloud-oriented learning environment (CBLE) for the preparation of bachelors of computer science, which consists of seven stages: analysis, setting goals and objectives, formulating requirements for the cloud-oriented learning environment, modeling the CBLE, developing the CBLE, using the CBLE in the education of bachelors of computer science, and performance testing. Each stage contains sub-steps. The analysis stage is considered in three aspects: psychological, pedagogical and technological. The formulation of the requirements for the CBLE was carried out taking into account the content and objectives of the training; the experience of using the CBLE; and the personal qualities and knowledge, skills and abilities of students. The modeling phase was divided into sub-stages: development of a structural and functional model of the CBLE for the preparation of bachelors of computer science; development of a model of the cloud-oriented learning support system (COLSS); and development of a model of the interaction processes in the CBLE. The fifth stage was also divided into the following sub-steps: domain registration and customization of the appearance of the COLSS; definition of the disciplines provided by the curriculum for the preparation of bachelors of computer science; creation of personal workspaces for teachers and students; uploading of educational, methodological and accompanying materials; and the choice of traditional and cloud-oriented forms, methods and means of training. The verification of the functioning of the CBLE will be carried out in the following areas: the functioning of the CBLE; the results of students' educational activity; and the formation of students' information and communication competence.

  6. Costs associated with implementation of computer-assisted clinical decision support system for antenatal and delivery care: case study of Kassena-Nankana district of northern Ghana.

    Directory of Open Access Journals (Sweden)

    Maxwell Ayindenaba Dalaba

    Full Text Available This study analyzed cost of implementing computer-assisted Clinical Decision Support System (CDSS in selected health care centres in Ghana.A descriptive cross sectional study was conducted in the Kassena-Nankana district (KND. CDSS was deployed in selected health centres in KND as an intervention to manage patients attending antenatal clinics and the labour ward. The CDSS users were mainly nurses who were trained. Activities and associated costs involved in the implementation of CDSS (pre-intervention and intervention were collected for the period between 2009-2013 from the provider perspective. The ingredients approach was used for the cost analysis. Costs were grouped into personnel, trainings, overheads (recurrent costs and equipment costs (capital cost. We calculated cost without annualizing capital cost to represent financial cost and cost with annualizing capital costs to represent economic cost.Twenty-two trained CDSS users (at least 2 users per health centre participated in the study. Between April 2012 and March 2013, users managed 5,595 antenatal clients and 872 labour clients using the CDSS. We observed a decrease in the proportion of complications during delivery (pre-intervention 10.74% versus post-intervention 9.64% and a reduction in the number of maternal deaths (pre-intervention 4 deaths versus post-intervention 1 death. The overall financial cost of CDSS implementation was US$23,316, approximately US$1,060 per CDSS user trained. Of the total cost of implementation, 48% (US$11,272 was pre-intervention cost and intervention cost was 52% (US$12,044. Equipment costs accounted for the largest proportion of financial cost: 34% (US$7,917. When economic cost was considered, total cost of implementation was US$17,128-lower than the financial cost by 26.5%.The study provides useful information in the implementation of CDSS at health facilities to enhance health workers' adherence to practice guidelines and taking accurate decisions to

  7. Fish and chips: implementation of a neural network model into computer chips to maximize swimming efficiency in autonomous underwater vehicles.

    Science.gov (United States)

    Blake, R W; Ng, H; Chan, K H S; Li, J

    2008-09-01

    Recent developments in the design and propulsion of biomimetic autonomous underwater vehicles (AUVs) have focused on boxfish as models (e.g. Deng and Avadhanula 2005 Biomimetic micro underwater vehicle with oscillating fin propulsion: system design and force measurement Proc. 2005 IEEE Int. Conf. Robot. Auto. (Barcelona, Spain) pp 3312-7). Whilst such vehicles have many potential advantages in operating in complex environments (e.g. high manoeuvrability and stability), limited battery life and payload capacity are likely functional disadvantages. Boxfish employ undulatory median and paired fins during routine swimming which are characterized by high hydromechanical Froude efficiencies (approximately 0.9) at low forward speeds. Current boxfish-inspired vehicles are propelled by a low aspect ratio, 'plate-like' caudal fin (ostraciiform tail) which can be shown to operate at a relatively low maximum Froude efficiency (approximately 0.5) and is mainly employed as a rudder for steering and in rapid swimming bouts (e.g. escape responses). Given this and the fact that bioinspired engineering designs are not obligated to wholly duplicate a biological model, computer chips were developed using a multilayer perception neural network model of undulatory fin propulsion in the knifefish Xenomystus nigri that would potentially allow an AUV to achieve high optimum values of propulsive efficiency at any given forward velocity, giving a minimum energy drain on the battery. We envisage that externally monitored information on flow velocity (sensory system) would be conveyed to the chips residing in the vehicle's control unit, which in turn would signal the locomotor unit to adopt kinematics (e.g. fin frequency, amplitude) associated with optimal propulsion efficiency. Power savings could protract vehicle operational life and/or provide more power to other functions (e.g. communications).
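
    A hedged sketch of the idea described above: a small multilayer perceptron mapping a measured flow velocity to fin kinematics (frequency, amplitude). The architecture and the randomly initialised weights are illustrative stand-ins for a trained network, not the chip implementation.

```python
# Tiny MLP mapping flow velocity -> (fin frequency, fin amplitude); weights are untrained placeholders.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(v, W1, b1, W2, b2):
    """One hidden tanh layer; input is flow velocity, output is (frequency, amplitude)."""
    h = np.tanh(W1 @ np.atleast_1d(v) + b1)
    return W2 @ h + b2

# Randomly initialised weights stand in for a trained network
W1, b1 = rng.normal(size=(8, 1)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

frequency, amplitude = mlp_forward(0.35, W1, b1, W2, b2)   # 0.35 m/s assumed cruise speed
print(frequency, amplitude)
```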

  8. Implementation of the thermal-hydraulic transient analysis code RELAP4/MOD5 and MOD6 on the FACOM 230/75 computer system

    International Nuclear Information System (INIS)

    Kohsaka, Atsuo; Ishigai, Takahiro; Kumakura, Toshimasa; Naraoka, Ken-itsu

    1979-03-01

    Development efforts have continued on the extensively used LOCA analysis code RELAP-4 throughout its history, from the prototype version MOD2 to the latest version, MOD6, which is capable of once-through calculations from the blowdown to the reflood phase of a PWR LOCA. Many improvements and refinements of the models have enlarged the scope and extent of the phenomena treated. Correspondingly, the size of the program has increased from version to version, and special programming techniques have continuously been introduced to keep the program within the limited capacity of core memory. For example, the Dynamic Storage Allocation of MOD5 and the PRELOAD Preprocessor newly incorporated in MOD6 were designed for CDC computers with relatively small core sizes. These programming techniques and the experience gained in implementing the codes on the FACOM 230/75 are described in detail, together with some results of confirmatory calculations. (author)

  9. The implementation of the CDC version of RELAP5/MOD1/019 on an IBM compatible computer system (AMDAHL 470/V8)

    International Nuclear Information System (INIS)

    Kolar, W.; Brewka, W.

    1984-01-01

    RELAP5/MOD1 is an advanced one-dimensional best-estimate system code, which is used for safety analysis studies of nuclear pressurized water reactor systems and related integral and separate effect test facilities. The program predicts the system response for large-break and small-break LOCAs and special transients. To a large extent RELAP5/MOD1 is written in Fortran; only a small part of the program is coded in CDC assembler. RELAP5/MOD1 was developed on the CDC CYBER 176 at INEL. The code development team made use of CDC system programs like the CDC UPDATE facility and incorporated special-purpose software packages into the program. The report describes the problems which were encountered when implementing the CDC version of RELAP5/MOD1 on an IBM-compatible computer system (AMDAHL 470/V8)

  10. Reflections on the Implementation of Low-Dose Computed Tomography Screening in Individuals at High Risk of Lung Cancer in Spain.

    Science.gov (United States)

    Garrido, Pilar; Sánchez, Marcelo; Belda Sanchis, José; Moreno Mata, Nicolás; Artal, Ángel; Gayete, Ángel; Matilla González, José María; Galbis Caravajal, José Marcelo; Isla, Dolores; Paz-Ares, Luis; Seijo, Luis M

    2017-10-01

    Lung cancer (LC) is a major public health issue. Despite recent advances in treatment, primary prevention and early diagnosis are key to reducing the incidence and mortality of this disease. A recent clinical trial demonstrated the efficacy of selective screening by low-dose computed tomography (LDCT) in reducing the risk of both lung cancer mortality and all-cause mortality in high-risk individuals. This article contains the reflections of an expert group on the use of LDCT for early diagnosis of LC in high-risk individuals, and how to evaluate its implementation in Spain. The expert group was set up by the Spanish Society of Pulmonology and Thoracic Surgery (SEPAR), the Spanish Society of Thoracic Surgery (SECT), the Spanish Society of Radiology (SERAM) and the Spanish Society of Medical Oncology (SEOM). Copyright © 2017 SEPAR. Publicado por Elsevier España, S.L.U. All rights reserved.

  11. Using an adaptive expertise lens to understand the quality of teachers' classroom implementation of computer-supported complex systems curricula in high school science

    Science.gov (United States)

    Yoon, Susan A.; Koehler-Yom, Jessica; Anderson, Emma; Lin, Joyce; Klopfer, Eric

    2015-05-01

    Background: This exploratory study is part of a larger-scale research project aimed at building theoretical and practical knowledge of complex systems in students and teachers with the goal of improving high school biology learning through professional development and a classroom intervention. Purpose: We propose a model of adaptive expertise to better understand teachers' classroom practices as they attempt to navigate myriad variables in the implementation of biology units that include working with computer simulations, and learning about and teaching through complex systems ideas. Sample: Research participants were three high school biology teachers, two females and one male, ranging in teaching experience from six to 16 years. Their teaching contexts also ranged in student achievement from 14-47% advanced science proficiency. Design and methods: We used a holistic multiple case study methodology and collected data during the 2011-2012 school year. Data sources include classroom observations, teacher and student surveys, and interviews. Data analyses and trustworthiness measures were conducted through qualitative mining of data sources and triangulation of findings. Results: We illustrate the characteristics of adaptive expertise of more or less successful teaching and learning when implementing complex systems curricula. We also demonstrate differences between case study teachers in terms of particular variables associated with adaptive expertise. Conclusions: This research contributes to scholarship on practices and professional development needed to better support teachers to teach through a complex systems pedagogical and curricular approach.

  12. Comparative yield of positive brain Computed Tomography after implementing the NICE or SIGN head injury guidelines in two equivalent urban populations

    International Nuclear Information System (INIS)

    Summerfield, R.; Macduff, R.; Davis, R.; Sambrook, M.; Britton, I.

    2011-01-01

    Aims: To compare the yield of positive computed tomography (CT) brain examinations after the implementation of the National Institute for Clinical Excellence (NICE) or the Scottish Intercollegiate Guidelines Network (SIGN) guidelines, in comparable urban populations in two teaching hospitals in England and Scotland. Materials and methods: Four hundred consecutive patients presenting at each location following a head injury who underwent a CT examination of the head according to the locally implemented guidelines were compared. Similar matched populations were compared for indication and yield. Yield was measured according to (1) positive CT findings of the sequelae of trauma and (2) intervention required with anaesthetic or intensive care unit (ICU) support, or neurosurgery. Results: The mean ages of patients at the English and Scottish centres were 49.9 and 49.2 years, respectively. Sex distribution was 64.1% male and 66.4% male respectively. Comparative yield was 23.8 and 26.5% for positive brain scans, 3 and 2.75% for anaesthetic support, and 3.75 and 2.5% for neurosurgical intervention. A Glasgow Coma Score (GCS) of less than 13 was associated with a greater than 10% yield of positive scans. The choice of guideline to follow should be at the discretion of the local institution. The indications GCS <13 and clinical or radiological evidence of a skull fracture are highly predictive of intracranial pathology, and their presence should be an absolute indicator for fast-tracking the management of the patient.

  13. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R.

    Directory of Open Access Journals (Sweden)

    Shi-Yi Chen

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.

  14. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R.

    Science.gov (United States)

    Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.
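
    As a tiny illustration of the kind of allele-frequency-based statistic PopSc computes, the sketch below evaluates expected heterozygosity (Nei's gene diversity) at one locus; the function name and interface are assumptions, not PopSc's actual API.

```python
# Expected heterozygosity (gene diversity) from allele frequencies at a single locus.
def expected_heterozygosity(allele_freqs):
    """Nei's gene diversity: 1 - sum(p_i^2) over allele frequencies at one locus."""
    if abs(sum(allele_freqs) - 1.0) > 1e-6:
        raise ValueError("allele frequencies must sum to 1")
    return 1.0 - sum(p * p for p in allele_freqs)

print(expected_heterozygosity([0.5, 0.3, 0.2]))   # 0.62
```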

  15. GPU-based implementation of an accelerated SR-NLUT based on N-point one-dimensional sub-principal fringe patterns in computer-generated holograms

    Directory of Open Access Journals (Sweden)

    Hee-Min Choi

    2015-06-01

    Full Text Available An accelerated spatial redundancy-based novel-look-up-table (A-SR-NLUT) method based on a new concept of the N-point one-dimensional sub-principal fringe pattern (N-point 1-D sub-PFP) is implemented on a graphics processing unit (GPU) for fast calculation of computer-generated holograms (CGHs) of three-dimensional (3-D) objects. Since the proposed method can generate the N-point two-dimensional (2-D) PFPs for CGH calculation from the pre-stored N-point 1-D PFPs, the loading time of the N-point PFPs on the GPU can be dramatically reduced, which results in a great increase of the computational speed of the proposed method. Experimental results confirm that the average calculation time for one object point has been reduced by 49.6% and 55.4% compared to those of the conventional 2-D SR-NLUT methods for each case of the 2-point and 3-point SR maps, respectively.
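
    The saving comes from the fact that, under the Fresnel approximation, the complex fringe of an object point is separable in x and y, so a 2-D principal fringe pattern can be rebuilt on the fly from a stored 1-D pattern. A minimal NumPy sketch of that separability (illustrative wavelength, pixel pitch and depth; not the authors' GPU kernel):

```python
# Minimal sketch (not the authors' GPU implementation): under the Fresnel
# approximation the complex fringe of an object point at depth z is separable,
#   exp(i*pi*(x^2 + y^2)/(lambda*z)) = exp(i*pi*x^2/(lambda*z)) * exp(i*pi*y^2/(lambda*z)),
# so a 2-D principal fringe pattern can be rebuilt from a stored 1-D pattern
# via an outer product, which is the memory/loading saving exploited by
# 1-D sub-PFP look-up tables.
import numpy as np

wavelength = 633e-9          # assumed red laser, metres
pitch = 8e-6                 # assumed SLM pixel pitch, metres
z = 0.5                      # assumed object-point depth, metres
n = 1024                     # hologram width/height in pixels

x = (np.arange(n) - n // 2) * pitch
fringe_1d = np.exp(1j * np.pi * x**2 / (wavelength * z))    # stored 1-D PFP

# Rebuild the 2-D PFP from the 1-D pattern (outer product of the separable factors).
fringe_2d_from_1d = np.outer(fringe_1d, fringe_1d)

# Direct 2-D computation for comparison.
xx, yy = np.meshgrid(x, x, indexing="ij")
fringe_2d_direct = np.exp(1j * np.pi * (xx**2 + yy**2) / (wavelength * z))

print(np.allclose(fringe_2d_from_1d, fringe_2d_direct))     # True
```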

  16. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy – Part 2: Computational implementation and first results

    Directory of Open Access Journals (Sweden)

    L. Peruzza

    2017-11-01

    Full Text Available This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude–scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10% probability of exceedance in 5 and 30 years, Poisson and time dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for the densely inhabited Etna's eastern flank, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources which control the hazard at long return periods. However, by focusing on the impact of M < 6 local volcano-tectonic earthquakes, which dominate the hazard at the short- to mid-term exposure times considered
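
    At its core, such a hazard computation combines an earthquake recurrence model with a ground-motion prediction equation (GMPE) and converts the resulting annual exceedance rates to probabilities over the exposure time. The sketch below uses purely illustrative recurrence and GMPE parameters (not the Etna source model, GMPE or software of the paper) to show that chain for a single source and site:

```python
# Minimal sketch (illustrative values only, not the Etna source model or GMPE):
# a single-source hazard curve lambda(PGA > a) from a discretised
# Gutenberg-Richter recurrence and a lognormal GMPE, plus the Poisson
# conversion to probability of exceedance in an exposure time T.
import numpy as np
from scipy.stats import norm

# Assumed recurrence parameters for one volcano-tectonic source.
a_value, b_value = 2.5, 1.1           # log10 cumulative annual rates
m_min, m_max, dm = 3.0, 5.5, 0.1
mags = np.arange(m_min, m_max, dm)
# Incremental annual rates from the (untruncated, for simplicity) GR relation.
rates = 10**(a_value - b_value * mags) - 10**(a_value - b_value * (mags + dm))

def gmpe_log_mean_pga(m, r_km):
    """Assumed toy GMPE: ln(PGA in g) as a linear function of M and ln(R)."""
    return -4.0 + 1.0 * m - 1.3 * np.log(r_km + 10.0)

sigma_ln = 0.6                        # assumed aleatory variability
r_km = 8.0                            # single site-to-source distance, km
pga_grid = np.logspace(-3, 0, 50)     # g

# lambda(PGA > a) = sum_m rate(m) * P(PGA > a | m, r)
lam = np.zeros_like(pga_grid)
for m, nu in zip(mags, rates):
    mu = gmpe_log_mean_pga(m, r_km)
    lam += nu * (1.0 - norm.cdf(np.log(pga_grid), loc=mu, scale=sigma_ln))

T = 30.0                              # exposure time, years
poe = 1.0 - np.exp(-lam * T)          # Poissonian probability of exceedance
pga_10pct = np.interp(0.10, poe[::-1], pga_grid[::-1])
print(f"PGA with 10% probability of exceedance in {T:.0f} years: {pga_10pct:.3f} g")
```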

  17. Comparative yield of positive brain Computed Tomography after implementing the NICE or SIGN head injury guidelines in two equivalent urban populations

    Energy Technology Data Exchange (ETDEWEB)

    Summerfield, R., E-mail: ruth.summerfield@uhns.nhs.u [Medical Imaging, University Hospital of North Staffordshire, City General Hospital, Stoke-on-Trent, Staffordshire ST4 6QG (United Kingdom); Macduff, R. [Glasgow Royal Infirmary, 84 Castle Street, Glasgow G4 0SF (United Kingdom); Davis, R. [Medical Imaging, University Hospital of North Staffordshire, City General Hospital, Stoke-on-Trent, Staffordshire ST4 6QG (United Kingdom); Sambrook, M. [Glasgow Royal Infirmary, 84 Castle Street, Glasgow G4 0SF (United Kingdom); Britton, I. [Medical Imaging, University Hospital of North Staffordshire, City General Hospital, Stoke-on-Trent, Staffordshire ST4 6QG (United Kingdom)

    2011-04-15

    Aims: To compare the yield of positive computed tomography (CT) brain examinations after the implementation of the National Institute for Clinical Excellence (NICE) or the Scottish Intercollegiate Guidelines Network (SIGN) guidelines, in comparable urban populations in two teaching hospitals in England and Scotland. Materials and methods: Four hundred consecutive patients presenting at each location following a head injury who underwent a CT examination of the head according to the locally implemented guidelines were compared. Similar matched populations were compared for indication and yield. Yield was measured according to (1) positive CT findings of the sequelae of trauma and (2) intervention required with anaesthetic or intensive care unit (ICU) support, or neurosurgery. Results: The mean ages of patients at the English and Scottish centres were 49.9 and 49.2 years, respectively. Sex distribution was 64.1% male and 66.4% male, respectively. Comparative yield was 23.8 and 26.5% for positive brain scans, 3 and 2.75% for anaesthetic support, and 3.75 and 2.5% for neurosurgical intervention. Glasgow Coma Score (GCS) <13 (NICE) and GCS ≤12 and radiological or clinical evidence of skull fracture (SIGN) demonstrated the greatest statistical association with a positive CT examination. Conclusion: In a teaching hospital setting, there is no significant difference in the yield between the NICE and SIGN guidelines. Both meet the SIGN standard of >10% yield of positive scans. The choice of guideline to follow should be at the discretion of the local institution. The indications GCS <13 and clinical or radiological evidence of a skull fracture are highly predictive of intracranial pathology, and their presence should be an absolute indicator for fast-tracking the management of the patient.

  18. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    Science.gov (United States)

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix that is of a dimension dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging parameters exceeds the number of measurements by a factor greater than 2.
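
    The computational gain described above can be illustrated with the Tikhonov-regularised update, for which the Sherman-Morrison-Woodbury-style rearrangement replaces an n x n solve (n parameters) by an m x m solve (m measurements). A minimal NumPy demonstration of the equivalence, with a random stand-in Jacobian rather than the diffuse-optical one:

```python
# Minimal sketch (not the authors' reconstruction code): for an under-determined
# problem with m measurements and n >> m parameters, the Tikhonov-regularised update
#     dx = (J^T J + lam I_n)^(-1) J^T dy        (inverts an n x n matrix)
# is algebraically identical to the alternative form
#     dx = J^T (J J^T + lam I_m)^(-1) dy        (inverts an m x m matrix),
# the kind of rearrangement the paper exploits to cut the per-iteration cost.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 100, 2000, 1e-2             # measurements, parameters, regularisation
J = rng.standard_normal((m, n))         # stand-in Jacobian
dy = rng.standard_normal(m)             # stand-in data-model misfit

dx_standard = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dy)      # n x n solve
dx_alternative = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), dy)   # m x m solve

print(np.allclose(dx_standard, dx_alternative))   # True (up to round-off)
```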

  19. Staff experiences within the implementation of computer-based nursing records in residential aged care facilities: a systematic review and synthesis of qualitative research.

    Science.gov (United States)

    Meißner, Anne; Schnepp, Wilfried

    2014-06-20

    Since the introduction of electronic nursing documentation systems, their implementation in recent years has increased rapidly in Germany. The objectives of such systems are to save time, to improve information handling and to improve quality. To integrate IT in the daily working processes, the employee is the pivotal element. Therefore it is important to understand nurses' experience with IT implementation. At present the literature shows a lack of research exploring staff experiences within the implementation process. A systematic review and meta-ethnographic synthesis of primary studies using qualitative methods was conducted in PubMed, CINAHL, and Cochrane. It adheres to the principles of the PRISMA statement. The studies were original, peer-reviewed articles from 2000 to 2013, focusing on computer-based nursing documentation in Residential Aged Care Facilities. The use of IT requires a different form of information processing. Some experience this new form of information processing as a benefit while others do not. The latter find it more difficult to enter data and this results in poor clinical documentation. Improvement in the quality of residents' records leads to an overall improvement in the quality of care. However, if the quality of those records is poor, some residents do not receive the necessary care. Furthermore, the length of time necessary to complete the documentation is a prominent theme within that process. Those who are more efficient with the electronic documentation demonstrate improved time management. For those who are less efficient with electronic documentation the information processing is perceived as time consuming. Normally, it is possible to experience benefits when using IT, but this depends on either promoting or hindering factors, e.g. ease of use and ability to use it, equipment availability and technical functionality, as well as attitude. In summary, the findings showed that members of staff experience IT as a benefit when

  20. Computer Games as a Tool for Implementation of Memory Policy (on the Example of Displaying Events of The Great Patriotic War in Video Games

    Directory of Open Access Journals (Sweden)

    Сергей Игоревич Белов

    2018-12-01

    Full Text Available The presented work is devoted to the study of the practice of using computer games as a tool of memory policy. The relevance of this study stems both from the growing importance of video games as a means of forming ideas about the events of the past and from the low degree to which this topic has been studied. The goal of the research is to identify the prospects for using computer games as an instrument for implementing memory policy, taking the events of the Great Patriotic War as a case study. The empirical base of the work was formed by generalizing the content of such video games as “Call of Duty 1”, “Call of Duty 14: WWII”, “Company of Heroes 2” and “Commandos 3: Destination Berlin”. The methodological base of the research draws on elements of descriptive political analysis, B.F. Skinner's theory of operant conditioning and the social identity concept of H. Tajfel and J. Turner. The author comes to the conclusion that familiarization with these games contributes to the consolidation, in the minds of users, of negative stereotypes regarding the participation of the Red Army in the Great Patriotic War. The process of integration of negative images is carried out using the methods of operant conditioning. The integration of this system of negative images into the mass consciousness of the inhabitants of the post-Soviet space makes it difficult to preserve the remnants of Soviet political symbols and the elements of identity constructed on their basis. The author puts forward the hypothesis that in the case of complete desovietization of the public policy space in the states that emerged as a result of the collapse of the USSR, the task of revising the history of the Great Patriotic War will be greatly facilitated, and with the subsequent departure from life of the last eyewitnesses of the relevant events, achieving this goal will be only a

  1. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  2. N286.7-99, A Canadian standard specifying software quality management system requirements for analytical, scientific, and design computer programs and its implementation at AECL

    International Nuclear Information System (INIS)

    Abel, R.

    2000-01-01

    Analytical, scientific, and design computer programs (referred to in this paper as 'scientific computer programs') are developed for use in a large number of ways by the user-engineer to support and prove engineering calculations and assumptions. These computer programs are subject to frequent modifications inherent in their application and are often used for critical calculations and analysis relative to safety and functionality of equipment and systems. N286.7-99(4) was developed to establish appropriate quality management system requirements to deal with the development, modification, and application of scientific computer programs. N286.7-99 provides particular guidance regarding the treatment of legacy codes

  3. Comparison of Computer Based Instruction to Behavior Skills Training for Teaching Staff Implementation of Discrete-Trial Instruction with an Adult with Autism

    Science.gov (United States)

    Nosik, Melissa R.; Williams, W. Larry; Garrido, Natalia; Lee, Sarah

    2013-01-01

    In the current study, behavior skills training (BST) is compared to a computer-based training package for teaching discrete-trial instruction to staff teaching an adult with autism. The computer-based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following…

  4. Quantum walk computation

    International Nuclear Information System (INIS)

    Kendon, Viv

    2014-01-01

    Quantum versions of random walks have diverse applications that are motivating experimental implementations as well as theoretical studies. Recent results showing quantum walks are “universal for quantum computation” relate to algorithms to be run on quantum computers. We consider whether an experimental implementation of a quantum walk could provide useful computation before we have a universal quantum computer

  5. Implementation and use of Gaussian process meta model for sensitivity analysis of numerical models: application to a hydrogeological transport computer code

    International Nuclear Information System (INIS)

    Marrel, A.

    2008-01-01

    In the studies of environmental transfer and risk assessment, numerical models are used to simulate, understand and predict the transfer of pollutant. These computer codes can depend on a high number of uncertain input parameters (geophysical variables, chemical parameters, etc.) and can be often too computer time expensive. To conduct uncertainty propagation studies and to measure the importance of each input on the response variability, the computer code has to be approximated by a meta model which is built on an acceptable number of simulations of the code and requires a negligible calculation time. We focused our research work on the use of Gaussian process meta model to perform the sensitivity analysis of the code. We proposed a methodology with estimation and input selection procedures in order to build the meta model in the case of a high number of inputs and with few simulations available. Then, we compared two approaches to compute the sensitivity indices with the meta model and proposed an algorithm to build prediction intervals for these indices. Afterwards, we were interested in the choice of the code simulations. We studied the influence of different sampling strategies on the predictiveness of the Gaussian process meta model. Finally, we extended our statistical tools to a functional output of a computer code. We combined a decomposition on a wavelet basis with the Gaussian process modelling before computing the functional sensitivity indices. All the tools and statistical methodologies that we developed were applied to the real case of a complex hydrogeological computer code, simulating radionuclide transport in groundwater. (author)
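
    A minimal sketch of the overall workflow, not the thesis code: fit a Gaussian-process metamodel on a small design of "code runs" and estimate first-order Sobol' indices by Monte Carlo on the cheap metamodel. The Ishigami test function stands in for the hydrogeological transport code, and scikit-learn supplies the Gaussian process:

```python
# Minimal sketch (not the thesis code): build a Gaussian-process metamodel from
# a small number of simulator runs and use it, instead of the expensive code,
# to estimate first-order Sobol' sensitivity indices by Monte Carlo
# (pick-freeze estimator). The Ishigami function stands in for the real code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def simulator(x):                      # stand-in for the expensive computer code
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

d, n_train = 3, 120                    # inputs uniform on [-pi, pi]
X_train = rng.uniform(-np.pi, np.pi, size=(n_train, d))
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0] * d),
                              normalize_y=True).fit(X_train, y_train)

# Pick-freeze estimate of first-order indices S_i using only the metamodel.
N = 20000
A = rng.uniform(-np.pi, np.pi, size=(N, d))
B = rng.uniform(-np.pi, np.pi, size=(N, d))
yA = gp.predict(A)
f0, var = yA.mean(), yA.var()
for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                 # freeze input i at the A values
    yCi = gp.predict(Ci)
    S_i = (np.mean(yA * yCi) - f0**2) / var
    print(f"S_{i+1} ~ {S_i:.2f}")      # analytic Ishigami values: ~0.31, ~0.44, 0.00
```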

  6. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot ... encompass human concepts of subjective experience and intersubjective meaningful communication, which prevents it from being genuinely transdisciplinary. (3) Philosophically, it does not sufficiently accept the deep ontological differences between various paradigms such as von Foerster’s second-order ...

  7. RELAP4/MOD5: a computer program for transient thermal-hydraulic analysis of nuclear reactors and related systems. User's manual. Volume II. Program implementation

    International Nuclear Information System (INIS)

    1976-09-01

    This portion of the RELAP4/MOD5 User's Manual presents the details of setting up and entering the reactor model to be evaluated. The input card format and arrangement is presented in depth, including not only cards for data but also those for editing and restarting. Problem initialization, including pressure distribution and energy balance, is discussed. A section entitled "User Guidelines" is included to provide modeling recommendations, analysis and verification techniques, and computational difficulty resolution. The section is concluded with a discussion of the computer output form and format

  8. Cognitive Computing for Security.

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rothganger, Fredrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aimone, James Bradley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marinella, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Evans, Brian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Warrender, Christina E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mickel, Patrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

  9. Effectiveness of ESL Students' Performance by Computational Assessment and Role of Reading Strategies in Courseware-Implemented Business Translation Tasks

    Science.gov (United States)

    Tsai, Shu-Chiao

    2017-01-01

    This study reports on investigating students' English translation performance and their use of reading strategies in an elective English writing course offered to senior students of English as a Foreign Language for 100 minutes per week for 12 weeks. A courseware-implemented instruction combined with a task-based learning approach was adopted.…

  10. Implementation is crucial but must be neurobiologically grounded. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    Science.gov (United States)

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L.

    2014-09-01

    From the perspective of language, Fitch's [1] claim that theories of cognitive computation should not be separated from those of implementation surely deserves applauding. Recent developments in the Cognitive Neuroscience of Language, leading to the new field of the Neurobiology of Language [2-4], emphasise precisely this point: rather than attempting to simply map cognitive theories of language onto the brain, we should aspire to understand how the brain implements language. This perspective resonates with many of the points raised by Fitch in his review, such as the discussion of unhelpful dichotomies (e.g., Nature versus Nurture). Cognitive dichotomies and debates have repeatedly turned out to be of limited usefulness when it comes to understanding language in the brain. The famous modularity-versus-interactivity and dual route-versus-connectionist debates are cases in point: in spite of hundreds of experiments using neuroimaging (or other techniques), or the construction of myriad computer models, little progress has been made in their resolution. This suggests that dichotomies proposed at a purely cognitive (or computational) level without consideration of biological grounding appear to be "asking the wrong questions" about the neurobiology of language. In accordance with these developments, several recent proposals explicitly consider neurobiological constraints while seeking to explain language processing at a cognitive level (e.g. [5-7]).

  11. Analysis, design, and implementation of PHENIX on-line computing systems software using Shlaer-Mellor object-oriented analysis and recursive design

    International Nuclear Information System (INIS)

    Kozlowski, T.; Desmond, E.; Haggerty, J.

    1997-01-01

    An early prototype of the core software for on-line computing systems for the PHENIX detector at RHIC has been developed using the Shlaer-Mellor OOA/RD method, including the automatic generation of C++ source code using a commercial translation engine and "architecture"

  12. Learning in educational computer games for novices: the impact of implementation and delivery of support devices on virtual presence, cognitive load and learning outcomes

    NARCIS (Netherlands)

    Schrader, Claudia; Bastiaens, Theo

    2018-01-01

    Embedding support devices in educational computer games has been asserted to positively affect learning outcomes. However, there is only limited direct empirical evidence on which design variations of support provision influence learning. In order to better understand the impact of support design on

  13. [Use of the computer as a tool for the implementation of the nursing process--the experience of the São Paulo/UNIFESP].

    Science.gov (United States)

    de Barros, Alba Lúcia; Fakih, Flávio Trevisani; Michel, Jeanne Liliane

    2002-01-01

    This article reports the pathway used to build a prototype of a computerized nursing clinical decision-making support system, using the NANDA, NIC and NOC classifications, as an auxiliary tool for entering nursing data in the computerized patient record of Hospital São Paulo/UNIFESP.

  14. Building Capacity Through Hands-on Computational Internships to Assure Reproducible Results and Implementation of Digital Documentation in the ICERT REU Program

    Science.gov (United States)

    Gomez, R.; Gentle, J.

    2015-12-01

    Modern data pipelines and computational processes require that meticulous methodologies be applied in order to ensure that the source data, algorithms, and results are properly curated, managed and retained while remaining discoverable, accessible, and reproducible. Given the complexity of understanding the scientific problem domain being researched, combined with the overhead of learning to use advanced computing technologies, it becomes paramount that the next generation of scientists and researchers learn to embrace best practices. The Integrative Computational Education and Research Traineeship (ICERT) is a National Science Foundation (NSF) Research Experience for Undergraduates (REU) Site at the Texas Advanced Computing Center (TACC). During Summer 2015, two ICERT interns joined the 3DDY project. 3DDY converts geospatial datasets into file types that can take advantage of new formats, such as natural user interfaces, interactive visualization, and 3D printing. Mentored by TACC researchers for ten weeks, students with no previous background in computational science learned to use scripts to build the first prototype of the 3DDY application, and leveraged Wrangler, the newest high performance computing (HPC) resource at TACC. Test datasets for quadrangles in central Texas were used to assemble the 3DDY workflow and code. Test files were successfully converted into stereolithography (STL) format, which is amenable for use with 3D printers. Test files and the scripts were documented and shared using the Figshare site while metadata was documented for the 3DDY application using OntoSoft. These efforts validated a straightforward set of workflows to transform geospatial data and established the first prototype version of 3DDY. Adding the data and software management procedures helped students realize a broader set of tangible results (e.g. Figshare entries), better document their progress and the final state of their work for the research group and community
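
    The geospatial-to-3D-printing step mentioned above amounts to triangulating a gridded elevation model and writing the triangles in STL syntax. A minimal sketch under stated assumptions (stand-in elevation grid, assumed output file name; not the 3DDY scripts):

```python
# Minimal sketch (not the 3DDY scripts): turn a small gridded elevation model
# into an ASCII STL surface mesh, the kind of geospatial-to-3D-printing
# conversion the 3DDY prototype performs for quadrangle datasets.
import numpy as np

def grid_to_ascii_stl(z, dx=1.0, dy=1.0, name="terrain"):
    """z is a 2-D array of elevations; returns ASCII STL text for the surface,
    with each grid cell split into two triangles."""
    def facet(p0, p1, p2):
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        n = n / norm if norm > 0 else n
        lines = [f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}", "    outer loop"]
        lines += [f"      vertex {p[0]:.6e} {p[1]:.6e} {p[2]:.6e}" for p in (p0, p1, p2)]
        lines += ["    endloop", "  endfacet"]
        return lines

    rows, cols = z.shape
    out = [f"solid {name}"]
    for i in range(rows - 1):
        for j in range(cols - 1):
            p = {(a, b): np.array([(j + b) * dx, (i + a) * dy, z[i + a, j + b]])
                 for a in (0, 1) for b in (0, 1)}
            out += facet(p[0, 0], p[1, 1], p[1, 0])   # first triangle of the cell
            out += facet(p[0, 0], p[0, 1], p[1, 1])   # second triangle of the cell
    out.append(f"endsolid {name}")
    return "\n".join(out)

if __name__ == "__main__":
    x, y = np.meshgrid(np.linspace(0, 4, 20), np.linspace(0, 4, 20))
    dem = np.sin(x) * np.cos(y) + 2.0                 # stand-in elevation grid
    with open("quad.stl", "w") as fh:                 # assumed output file name
        fh.write(grid_to_ascii_stl(dem, dx=0.2, dy=0.2))
```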

  15. INREM II: a computer implementation of recent models for estimating the dose equivalent to organs of man from an inhaled or ingested radionuclide

    International Nuclear Information System (INIS)

    Killough, G.G.; Dunning, D.E. Jr.; Pleasant, J.C.

    1978-01-01

    This report describes a computer code, INREM II, which calculates the internal radiation dose equivalent to organs of man which results from the intake of a radionuclide by inhalation or ingestion. Deposition and removal of radioactivity from the respiratory tract is represented by the ICRP Task Group Lung Model. A four-segment catenary model of the GI tract is used to estimate movement of radioactive material that is ingested or swallowed after being cleared from the respiratory tract. Retention of radioactivity in other organs is specified by linear combinations of decaying exponential functions. The formation and decay of radioactive daughters is treated explicitly, with each radionuclide species in the chain having its own uptake and retention parameters, as supplied by the user. The dose equivalent to a target organ is computed as the sum of contributions from each source organ in which radioactivity is assumed to be situated. This calculation utilizes a matrix of S-factors (rem/μCi-day) supplied by the user for the particular choice of source and target organs. Output permits the evaluation of crossfire components of dose when penetrating radiations are present. INREM II is coded in FORTRAN IV and has been compiled and executed on an IBM-360 computer
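
    The final stage of such a calculation is straightforward to illustrate: integrate each source organ's retention function (a sum of decaying exponentials) to obtain the cumulated activity, then multiply by the S-factor matrix to obtain target-organ dose equivalents. A minimal sketch with purely illustrative retention terms and S-factors (not INREM II itself):

```python
# Minimal sketch (illustrative numbers, not INREM II itself) of the final stage
# of an internal-dose calculation: retention in each source organ is a linear
# combination of decaying exponentials, its time integral gives the cumulated
# activity (uCi-days), and a user-supplied S-factor matrix (rem per uCi-day)
# converts that into dose equivalent to each target organ.
import numpy as np

DAYS = 365.25 * 50                    # 50-year commitment period

def cumulated_activity(terms, t_end=DAYS):
    """terms = [(a_i, lambda_i per day), ...] for q(t) = sum a_i * exp(-lambda_i t),
    in uCi; returns the integral of q(t) from 0 to t_end in uCi-days."""
    return sum(a / lam * (1.0 - np.exp(-lam * t_end)) for a, lam in terms)

# Assumed retention functions (uCi) for two source organs after a unit intake.
retention = {
    "lung":  [(0.3, np.log(2) / 0.5), (0.1, np.log(2) / 100.0)],   # fast + slow terms
    "liver": [(0.05, np.log(2) / 40.0)],
}

# Assumed S-factor matrix, rem per uCi-day: rows = target organs, cols = sources.
sources = ["lung", "liver"]
targets = ["lung", "liver", "red marrow"]
S = np.array([[5.2e-4, 1.1e-6],
              [1.1e-6, 3.9e-4],
              [6.0e-6, 4.0e-6]])

U = np.array([cumulated_activity(retention[s]) for s in sources])   # uCi-days
dose = S @ U                                                         # rem
for organ, h in zip(targets, dose):
    print(f"{organ}: {h:.3e} rem")
```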

  16. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P. T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, T. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yin, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2007-12-01

    The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  17. Quantum computing with trapped ions

    International Nuclear Information System (INIS)

    Haeffner, H.; Roos, C.F.; Blatt, R.

    2008-01-01

    Quantum computers hold the promise of solving certain computational tasks much more efficiently than classical computers. We review recent experimental advances towards a quantum computer with trapped ions. In particular, various implementations of qubits, quantum gates and some key experiments are discussed. Furthermore, we review some implementations of quantum algorithms such as a deterministic teleportation of quantum information and an error correction scheme

  18. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general-purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (STREAM), and the new framework, OpenCL, that tries to unify the GPGPU computing models.

  19. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  20. Sepsis reconsidered: Identifying novel metrics for behavioral landscape characterization with a high-performance computing implementation of an agent-based model.

    Science.gov (United States)

    Cockrell, Chase; An, Gary

    2017-10-07

    Sepsis affects nearly 1 million people in the United States per year, has a mortality rate of 28-50% and requires more than $20 billion a year in hospital costs. Over a quarter century of research has not yielded a single reliable diagnostic test or a directed therapeutic agent for sepsis. Central to this insufficiency is the fact that sepsis remains a clinical/physiological diagnosis representing a multitude of molecularly heterogeneous pathological trajectories. Advances in computational capabilities offered by High Performance Computing (HPC) platforms call for an evolution in the investigation of sepsis to attempt to define the boundaries of traditional research (bench, clinical and computational) through the use of computational proxy models. We present a novel investigatory and analytical approach, derived from how HPC resources and simulation are used in the physical sciences, to identify the epistemic boundary conditions of the study of clinical sepsis via the use of a proxy agent-based model of systemic inflammation. Current predictive models for sepsis use correlative methods that are limited by patient heterogeneity and data sparseness. We address this issue by using an HPC version of a system-level validated agent-based model of sepsis, the Innate Immune Response ABM (IIRABM), as a proxy system in order to identify boundary conditions for the possible behavioral space for sepsis. We then apply advanced analysis derived from the study of Random Dynamical Systems (RDS) to identify novel means for characterizing system behavior and providing insight into the tractability of traditional investigatory methods. The behavior space of the IIRABM was examined by simulating over 70 million sepsis patients for up to 90 days in a sweep across the following parameters: cardio-respiratory-metabolic resilience; microbial invasiveness; microbial toxigenesis; and degree of nosocomial exposure. In addition to using established methods for describing parameter space, we
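
    The behaviour-space sweep itself is an embarrassingly parallel workload: many stochastic replicates at each point of a grid over the four external parameters. A minimal multiprocessing sketch of that sweep pattern, with a toy stand-in for the IIRABM:

```python
# Minimal sketch of the sweep pattern (a toy stand-in model, not the IIRABM):
# the behaviour space is explored by running many stochastic replicates at each
# point of a Cartesian grid over the four external parameters and recording an
# outcome per run, which is the embarrassingly parallel workload the HPC
# implementation distributes.
import itertools
import multiprocessing as mp
import random

PARAMS = {
    "resilience":   [0.2, 0.5, 0.8],     # cardio-respiratory-metabolic resilience
    "invasiveness": [0.1, 0.4, 0.7],     # microbial invasiveness
    "toxigenesis":  [0.1, 0.4, 0.7],     # microbial toxigenesis
    "nosocomial":   [0.0, 0.3],          # degree of nosocomial exposure
}
REPLICATES = 20                           # stochastic replicates per grid point

def run_patient(args):
    """Stand-in for one simulated patient; returns (parameters, died?)."""
    (res, inv, tox, noso), seed = args
    rng = random.Random(seed)
    risk = 0.15 + 0.4 * inv + 0.3 * tox + 0.2 * noso - 0.5 * res
    return (res, inv, tox, noso), rng.random() < max(0.0, min(1.0, risk))

if __name__ == "__main__":
    grid = list(itertools.product(*PARAMS.values()))
    jobs = [(point, seed) for point in grid for seed in range(REPLICATES)]
    with mp.Pool() as pool:
        results = pool.map(run_patient, jobs)
    mortality = {}                        # aggregate mortality per grid point
    for point, died in results:
        mortality.setdefault(point, []).append(died)
    for point, outcomes in sorted(mortality.items()):
        print(point, sum(outcomes) / len(outcomes))
```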

  1. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface

    Science.gov (United States)

    Coons, Marc P.; Herbert, John M.

    2018-06-01

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ɛ. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ɛ(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.
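
    The essential numerical ingredient is a Poisson solver in which the permittivity varies in space and enters the operator through face-centred values. A minimal 1-D finite-difference sketch with a tanh interface profile (the paper's solver is 3-D multigrid; all constants here are illustrative):

```python
# Minimal 1-D sketch (the paper solves the full 3-D problem with multigrid):
# finite-difference discretisation of  d/dz( eps(z) dphi/dz ) = -4*pi*rho(z)
# with a dielectric profile that switches from ~1 (vapour) to ~78 (bulk water)
# across an interface, illustrating how a spatially varying eps(r) enters the
# Poisson operator through face-centred permittivities.
import numpy as np

n, L = 400, 40.0                      # grid points, box length (arbitrary units)
h = L / (n - 1)
z = np.linspace(0.0, L, n)

eps = 1.0 + 77.0 * 0.5 * (1.0 + np.tanh((z - 0.5 * L) / 2.0))   # vapour -> water
rho = np.exp(-((z - 0.4 * L) ** 2))                              # test charge blob

# Assemble  A phi = b  with face-centred permittivities eps_{i +/- 1/2}.
A = np.zeros((n, n))
b = -4.0 * np.pi * rho * h**2
for i in range(1, n - 1):
    eps_p = 0.5 * (eps[i] + eps[i + 1])
    eps_m = 0.5 * (eps[i] + eps[i - 1])
    A[i, i - 1] = eps_m
    A[i, i + 1] = eps_p
    A[i, i] = -(eps_p + eps_m)
A[0, 0] = A[-1, -1] = 1.0             # Dirichlet phi = 0 at both boundaries
b[0] = b[-1] = 0.0

phi = np.linalg.solve(A, b)
print(phi[np.argmax(rho)])            # potential at the charge maximum
```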

  2. Development and implementation of computational geometric model for simulation of plate type fuel fabrication process with microspheres dispersed in metallic matrix

    International Nuclear Information System (INIS)

    Lage, Aldo M.F.; Reis, Sergio C.; Braga, Daniel M.; Santos, Armindo; Ferraz, Wilmar B.

    2005-01-01

    This report presents the development of a geometric model to simulate the fabrication process of plate-type fuel with fuel microspheres dispersed in a metallic matrix, as well as its software implementation. The developed geometric model covers the steps of pellet pressing and sintering, as well as the plate rolling passes. The model permits the simulation of structures in which the values of the various fabrication-process variables can be studied and modified. The following variables were analyzed: microsphere diameters, density of the powder/microsphere mixture, microsphere density, fuel volume fraction, sintering densification, and number of rolling passes. In the model implementation, which was coded in the Delphi programming language, structured analysis techniques were utilized. The simulated structures were visualized using the AutoCAD application, which made it possible to obtain plane sections in various directions. The objective of this model is to enable the analysis of the simulated structures and supply information that can help improve the fabrication process of dispersed-microsphere fuel plates, now under development at CDTN (Centro de Desenvolvimento da Tecnologia Nuclear) in cooperation with the CTMSP (Centro Tecnologico da Marinha em Sao Paulo). (author)

  3. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  4. Implementation of double-C-arm synchronous real-time X-ray positioning system computer aided for aspiration biopsy of small lung lesion

    International Nuclear Information System (INIS)

    Zhu Hong; Wang Dong; Ye Yukun; Zhou Yuan; Lu Jianfeng; Yang Jingyu; Wang Lining

    2007-01-01

    Objective: To evaluate the feasibility of a new type of real-time three-dimensional X-ray positioning system for aspiration biopsy of small lung lesions. Methods: Using X-ray imaging and X-ray collimator technology combined with a double-C-arm X-ray machine, two synchronous real-time images were obtained in the vertical and horizontal planes. Then, with computer image processing and computer vision techniques, dynamic tracking of the 3D positions of a pulmonary lesion and of the needle during aspiration, as well as of their relative position, was established. Results: There was no interference where the two perpendicular imaging X-ray beams met; synchronous real-time image acquisition and tracking of a lung lesion and a needle could be completed in free respiration. The average positioning system error was about 0.5 mm, the largest positioning error was about 1.0 mm, and the real-time display rate was 5 frames/s. Conclusions: The establishment of a new type of double-C-arm synchronous real-time X-ray positioning system is feasible. It enables fast and accurate aspiration biopsy of small lung lesions. (authors)
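
    With two synchronized, mutually perpendicular views, the 3-D position of a feature follows from combining the coordinates each view resolves. A minimal sketch assuming idealised parallel-beam geometry and made-up coordinates (a real system must also calibrate cone-beam magnification):

```python
# Minimal sketch (idealised parallel-beam geometry and made-up coordinates; a
# real system must also handle cone-beam magnification and calibration): one
# vertical-beam view resolves (x, y), one horizontal-beam view resolves (y, z),
# and the 3-D position of a feature such as the needle tip is recovered by
# combining the two projections, averaging the coordinate both views share.
import numpy as np

def locate_3d(vertical_xy, horizontal_yz):
    """vertical_xy: (x, y) in the top-down image; horizontal_yz: (y, z) in the
    side image; returns the estimated (x, y, z) in the common patient frame."""
    x, y1 = vertical_xy
    y2, z = horizontal_yz
    return np.array([x, 0.5 * (y1 + y2), z])

needle_tip = locate_3d(vertical_xy=(12.4, 55.0), horizontal_yz=(54.6, -30.2))
lesion     = locate_3d(vertical_xy=(15.0, 60.1), horizontal_yz=(60.0, -28.5))
print(np.linalg.norm(needle_tip - lesion))   # needle-to-lesion distance (mm, assumed)
```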

  5. EDMS implementation challenge.

    Science.gov (United States)

    De La Torre, Marta

    2002-08-01

    The challenges faced by facilities wishing to implement an electronic medical record system are complex and overwhelming. Issues such as customer acceptance, basic computer skills, and a thorough understanding of how the new system will impact work processes must be considered and acted upon. Acceptance and active support are necessary from Senior Administration and key departments to enable this project to achieve measurable success. This article details one hospital's "journey" through design and successful implementation of an electronic medical record system.

  6. Linking computer-aided design (CAD) to Geant4-based Monte Carlo simulations for precise implementation of complex treatment head geometries

    International Nuclear Information System (INIS)

    Constantin, Magdalena; Constantin, Dragos E; Keall, Paul J; Narula, Anisha; Svatos, Michelle; Perl, Joseph

    2010-01-01

    Most of the treatment head components of medical linear accelerators used in radiation therapy have complex geometrical shapes. They are typically designed using computer-aided design (CAD) applications. In Monte Carlo simulations of radiotherapy beam transport through the treatment head components, the relevant beam-generating and beam-modifying devices are inserted in the simulation toolkit using geometrical approximations of these components. Depending on their complexity, such approximations may introduce errors that can be propagated throughout the simulation. This drawback can be minimized by exporting a more precise geometry of the linac components from CAD and importing it into the Monte Carlo simulation environment. We present a technique that links three-dimensional CAD drawings of the treatment head components to Geant4 Monte Carlo simulations of dose deposition. (note)

  7. Computational artifacts

    DEFF Research Database (Denmark)

    Schmidt, Kjeld; Bansler, Jørgen P.

    2016-01-01

    The key concern of CSCW research is that of understanding computing technologies in the social context of their use, that is, as integral features of our practices and our lives, and to think of their design and implementation under that perspective. However, the question of the nature of that which is actually integrated in our practices is often discussed in confusing ways, if at all. The article aims to try to clarify the issue and in doing so revisits and reconsiders the notion of ‘computational artifact’.

  8. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  9. Exposure to particulate matters (PM2.5) and airborne nicotine in computer game rooms after implementation of smoke-free legislation in South Korea.

    Science.gov (United States)

    Kim, Sungroul; Sohn, Jongryeul; Lee, Kiyoung

    2010-12-01

    In South Korea, computer game rooms are subject to regulations mandating a designated nonsmoking area pursuant to Article 7 of the Enforcement Rules of the National Health Promotion Act; nonsmoking areas must be enclosed on all sides by solid and impermeable partitions. Using PM2.5 monitors (SidePak AM510) and airborne nicotine monitors, we measured concentrations in smoking and nonsmoking areas to examine whether separation of the nonsmoking areas as currently practiced is a viable way to protect the nonsmoking area from secondhand smoke exposure. Convenience sampling was conducted at 28 computer game rooms randomly selected from 14 districts in Seoul, South Korea between August and September 2009. The medians (interquartile range) of PM2.5 concentrations in smoking and nonsmoking areas were 69.3 μg/m³ (34.5-116.5 μg/m³) and 34 μg/m³ (15.0-57.0 μg/m³), while those of airborne nicotine were 0.41 μg/m³ (0.25-0.69 μg/m³) and 0.12 μg/m³ (0.06-0.16 μg/m³), respectively. Concentrations of airborne nicotine and PM2.5 in nonsmoking areas were substantially positively associated with those in smoking areas. The Spearman correlation coefficients for them were 0.68 (p = .02) and 0.1 (p = 0.7), respectively. According to our modeling result, a unit increase of airborne nicotine concentration in a smoking area was associated with a 7-fold (95% CI = 2.5-19.8) increase of the concentration in the adjacent nonsmoking area after controlling for the degree of partition left closed and the indoor space volume. Our study thus provides evidence for the introduction of more rigorous policy initiatives aimed at encouraging a complete smoking ban in such venues.

  10. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  11. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    Science.gov (United States)

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards users can do manual selection based on comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level by the Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It makes the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
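
    The automated stage is essentially content-based image retrieval on Gabor-filter responses. A minimal OpenCV sketch of a Gabor feature vector and nearest-neighbour ranking of reference images (not AFIS1.0's actual "Gabor surface feature" definition; the file names in the usage comment are assumptions):

```python
# Minimal sketch (not AFIS1.0's actual feature definition): build a simple
# Gabor-response feature vector for each image and rank reference images by
# distance to the query, which is the content-based image retrieval step that
# produces the candidate species list handed over to manual verification.
import cv2
import numpy as np

def gabor_features(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   wavelengths=(4.0, 8.0, 16.0)):
    """Mean and standard deviation of Gabor filter responses over a bank of
    orientations and wavelengths, concatenated into one feature vector."""
    feats = []
    for theta in thetas:
        for lam in wavelengths:
            kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=4.0, theta=theta,
                                        lambd=lam, gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

def rank_candidates(query_img, reference_imgs):
    """Return reference indices sorted by feature-space distance to the query."""
    q = gabor_features(query_img)
    d = [np.linalg.norm(q - gabor_features(r)) for r in reference_imgs]
    return np.argsort(d)

# Usage (assumed file names): load grayscale images and print the best match.
# query = cv2.imread("unknown_fly.png", cv2.IMREAD_GRAYSCALE)
# refs = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ("sp1.png", "sp2.png")]
# print(rank_candidates(query, refs)[0])
```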

  12. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we will discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, getting good performance required utilizing multiple cores and wide vector operations and making efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.

  13. Improving radiation awareness and feeling of personal security of non-radiological medical staff by implementing a traffic light system in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Heilmaier, C.; Mayor, A.; Zuber, N.; Weishaupt, D. [Stadtspital Triemli, Zurich (Switzerland). Dept. of Radiology; Fodor, P. [Stadtspital Triemli, Zurich (Switzerland). Dept. of Anesthesiology and Intensive Care Medicine

    2016-03-15

    Non-radiological medical professionals often need to remain in the scanning room during computed tomography (CT) examinations to supervise patients in critical condition. Independent of protective devices, their position significantly influences the radiation dose they receive. The purpose of this study was to assess if a traffic light system indicating areas of different radiation exposure improves non-radiological medical staff's radiation awareness and feeling of personal security. Phantom measurements were performed to define areas of different dose rates and colored stickers were applied on the floor according to a traffic light system: green = lowest, orange = intermediate, and red = highest possible radiation exposure. Non-radiological medical professionals with different years of working experience evaluated the system using a structured questionnaire. Kruskal-Wallis and Spearman's correlation test were applied for statistical analysis. Fifty-six subjects (30 physicians, 26 nursing staff) took part in this prospective study. Overall rating of the system was very good, and almost all professionals tried to stand in the green stickers during the scan. The system significantly increased radiation awareness and feeling of personal protection particularly in staff with ? 5 years of working experience (p < 0.05). The majority of non-radiological medical professionals stated that staying in the green stickers and patient care would be compatible. Knowledge of radiation protection was poor in all groups, especially among entry-level employees (p < 0.05). A traffic light system in the CT scanning room indicating areas with lowest, intermediate, and highest possible radiation exposure is much appreciated. It increases radiation awareness, improves the sense of personal radiation protection, and may support endeavors to lower occupational radiation exposure, although the best radiation protection always is to remain outside the CT room during the scan.

  14. Improving radiation awareness and feeling of personal security of non-radiological medical staff by implementing a traffic light system in computed tomography

    International Nuclear Information System (INIS)

    Heilmaier, C.; Mayor, A.; Zuber, N.; Weishaupt, D.; Fodor, P.

    2016-01-01

    Non-radiological medical professionals often need to remain in the scanning room during computed tomography (CT) examinations to supervise patients in critical condition. Independent of protective devices, their position significantly influences the radiation dose they receive. The purpose of this study was to assess if a traffic light system indicating areas of different radiation exposure improves non-radiological medical staff's radiation awareness and feeling of personal security. Phantom measurements were performed to define areas of different dose rates and colored stickers were applied on the floor according to a traffic light system: green = lowest, orange = intermediate, and red = highest possible radiation exposure. Non-radiological medical professionals with different years of working experience evaluated the system using a structured questionnaire. Kruskal-Wallis and Spearman's correlation test were applied for statistical analysis. Fifty-six subjects (30 physicians, 26 nursing staff) took part in this prospective study. Overall rating of the system was very good, and almost all professionals tried to stand in the green stickers during the scan. The system significantly increased radiation awareness and feeling of personal protection particularly in staff with ? 5 years of working experience (p < 0.05). The majority of non-radiological medical professionals stated that staying in the green stickers and patient care would be compatible. Knowledge of radiation protection was poor in all groups, especially among entry-level employees (p < 0.05). A traffic light system in the CT scanning room indicating areas with lowest, intermediate, and highest possible radiation exposure is much appreciated. It increases radiation awareness, improves the sense of personal radiation protection, and may support endeavors to lower occupational radiation exposure, although the best radiation protection always is to remain outside the CT room during the scan.

  15. Pilot Implementations

    DEFF Research Database (Denmark)

    Manikas, Maria Ie

    This PhD dissertation engages in the study of pilot (system) implementation. In the field of information systems, pilot implementations are commissioned as a way to learn from real use of a pilot system with real data, by real users during an information systems development (ISD) project and before ... by conducting a literature review. The concept of pilot implementation, although commonly used in practice, is rather disregarded in research. In the literature, pilot implementations are mainly treated as secondary to the learning outcomes and are presented as merely a means to acquire knowledge about a given ... objective. The prevalent understanding is that pilot implementations are an ISD technique that extends prototyping from the lab and into test during real use. Another perception is that pilot implementations are a project multiple of co-existing enactments of the pilot implementation. From this perspective ...

  16. Fast computation of voxel-level brain connectivity maps from resting-state functional MRI using l₁-norm as approximation of Pearson's temporal correlation: proof-of-concept and example vector hardware implementation.

    Science.gov (United States)

    Minati, Ludovico; Zacà, Domenico; D'Incerti, Ludovico; Jovicich, Jorge

    2014-09-01

    An outstanding issue in graph-based analysis of resting-state functional MRI is choice of network nodes. Individual consideration of entire brain voxels may represent a less biased approach than parcellating the cortex according to pre-determined atlases, but entails establishing connectedness for 10^9-10^11 links, with often prohibitive computational cost. Using a representative Human Connectome Project dataset, we show that, following appropriate time-series normalization, it may be possible to accelerate connectivity determination replacing Pearson correlation with l1-norm. Even though the adjacency matrices derived from correlation coefficients and l1-norms are not identical, their similarity is high. Further, we describe and provide in full an example vector hardware implementation of l1-norm on an array of 4096 zero instruction-set processors. Calculation times correlation in very high-density resting-state functional connectivity analyses. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
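
    The core idea above, replacing Pearson correlation with an l1-based measure after per-voxel normalization, can be sketched as follows; the data are random and the code is only a conceptual illustration, not the paper's vectorized hardware implementation.

```python
# Sketch of the idea: after z-scoring, the l1-norm of the difference between two
# time series can stand in for (the complement of) Pearson correlation.
# Hypothetical random data; not the Human Connectome Project pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 200, 200
ts = rng.standard_normal((n_voxels, n_timepoints))

# Normalize each voxel time series to zero mean and unit variance.
ts = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)

# Pearson correlation matrix (reference).
pearson = np.corrcoef(ts)

# l1-based dissimilarity: mean absolute difference between normalized series.
# Only |x - y| is needed, which maps well onto simple vector hardware.
l1 = np.mean(np.abs(ts[:, None, :] - ts[None, :, :]), axis=2)

# The two measures should be strongly (negatively) monotonically related.
iu = np.triu_indices(n_voxels, k=1)
print(np.corrcoef(pearson[iu], -l1[iu])[0, 1])
```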

  17. Pilot implementation

    DEFF Research Database (Denmark)

    Hertzum, Morten; Bansler, Jørgen P.; Havn, Erling C.

    2012-01-01

    A recurrent problem in information-systems development (ISD) is that many design shortcomings are not detected during development, but only after the system has been delivered and implemented in its intended environment. Pilot implementations appear to promise a way to extend prototyping from...... the laboratory to the field, thereby allowing users to experience a system design under realistic conditions and developers to get feedback from realistic use while the design is still malleable. We characterize pilot implementation, contrast it with prototyping, propose a five-element model of pilot...... implementation and provide three empirical illustrations of our model. We conclude that pilot implementation has much merit as an ISD technique when system performance is contingent on context. But we also warn developers that, despite their seductive conceptual simplicity, pilot implementations can be difficult...

  18. Vectorization, parallelization and implementation of nuclear codes [MVP/GMVP, QMDRELP, EQMD, HSABC, CURBAL, STREAM V3.1, TOSCA, EDDYCAL, RELAP5/MOD2/C36-05, RELAP5/MOD3] on the VPP500 computer system. Progress report 1995 fiscal year

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Watanabe, Hideo; Fujita, Toyozo [Fujitsu Ltd., Tokyo (Japan); Kawai, Wataru; Harada, Hiroo; Gorai, Kazuo; Yamasaki, Kazuhiko; Shoji, Makoto; Fujii, Minoru

    1996-06-01

    At the Center for Promotion of Computational Science and Engineering, eight time-consuming nuclear codes suggested by users have been vectorized and parallelized on the VPP500 computer system. In addition, two nuclear codes used on the VP2600 computer system were implemented on the VPP500 computer system. Neutron and photon transport calculation code MVP/GMVP and relativistic quantum molecular dynamics code QMDRELP have been parallelized. Extended quantum molecular dynamics code EQMD and adiabatic base calculation code HSABC have been parallelized and vectorized. Ballooning turbulence simulation code CURBAL, 3-D non-stationary compressible fluid dynamics code STREAM V3.1, operating plasma analysis code TOSCA and eddy current analysis code EDDYCAL have been vectorized. Reactor safety analysis code RELAP5/MOD2/C36-05 and RELAP5/MOD3 were implemented on the VPP500 computer system. (author)

  19. Vectorization, parallelization and implementation of nuclear codes [MVP/GMVP, QMDRELP, EQMD, HSABC, CURBAL, STREAM V3.1, TOSCA, EDDYCAL, RELAP5/MOD2/C36-05, RELAP5/MOD3] on the VPP500 computer system. Progress report 1995 fiscal year

    International Nuclear Information System (INIS)

    Nemoto, Toshiyuki; Watanabe, Hideo; Fujita, Toyozo; Kawai, Wataru; Harada, Hiroo; Gorai, Kazuo; Yamasaki, Kazuhiko; Shoji, Makoto; Fujii, Minoru.

    1996-07-01

    At the Center for Promotion of Computational Science and Engineering, eight time-consuming nuclear codes suggested by users have been vectorized and parallelized on the VPP500 computer system. In addition, two nuclear codes used on the VP2600 computer system were implemented on the VPP500 computer system. Neutron and photon transport calculation code MVP/GMVP and relativistic quantum molecular dynamics code QMDRELP have been parallelized. Extended quantum molecular dynamics code EQMD and adiabatic base calculation code HSABC have been parallelized and vectorized. Ballooning turbulence simulation code CURBAL, 3-D non-stationary compressible fluid dynamics code STREAM V3.1, operating plasma analysis code TOSCA and eddy current analysis code EDDYCAL have been vectorized. Reactor safety analysis code RELAP5/MOD2/C36-05 and RELAP5/MOD3 were implemented on the VPP500 computer system. (author)

  20. Quantum mechanics and computation

    International Nuclear Information System (INIS)

    Cirac Sasturain, J. I.

    2000-01-01

    We review how some of the basic principles of Quantum Mechanics can be used in the field of computation. In particular, we explain why a quantum computer can perform certain tasks in a much more efficient way than the computers we have available nowadays. We give the requirements for a quantum system to be able to implement a quantum computer and illustrate these requirements in some particular physical situations. (Author) 16 refs

  1. Computer architecture fundamentals and principles of computer design

    CERN Document Server

    Dumas II, Joseph D

    2005-01-01

    Introduction to Computer Architecture; What is Computer Architecture?; Architecture vs. Implementation; Brief History of Computer Systems; The First Generation; The Second Generation; The Third Generation; The Fourth Generation; Modern Computers - The Fifth Generation; Types of Computer Systems; Single Processor Systems; Parallel Processing Systems; Special Architectures; Quality of Computer Systems; Generality and Applicability; Ease of Use; Expandability; Compatibility; Reliability; Success and Failure of Computer Architectures and Implementations; Quality and the Perception of Quality; Cost Issues; Architectural Openness, Market Timi

  2. Quantum computer science

    CERN Document Server

    Lanzagorta, Marco

    2009-01-01

    In this text we present a technical overview of the emerging field of quantum computation along with new research results by the authors. What distinguishes our presentation from that of others is our focus on the relationship between quantum computation and computer science. Specifically, our emphasis is on the computational model of quantum computing rather than on the engineering issues associated with its physical implementation. We adopt this approach for the same reason that a book on computer programming doesn't cover the theory and physical realization of semiconductors. Another distin

  3. Prospective Algorithms for Quantum Evolutionary Computation

    OpenAIRE

    Sofge, Donald A.

    2008-01-01

    This effort examines the intersection of the emerging field of quantum computing and the more established field of evolutionary computation. The goal is to understand what benefits quantum computing might offer to computational intelligence and how computational intelligence paradigms might be implemented as quantum programs to be run on a future quantum computer. We critically examine proposed algorithms and methods for implementing computational intelligence paradigms, primarily focused on ...

  4. Center for computer security: Computer Security Group conference. Summary

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-06-01

    Topics covered include: computer security management; detection and prevention of computer misuse; certification and accreditation; protection of computer security, perspective from a program office; risk analysis; secure accreditation systems; data base security; implementing R and D; key notarization system; DOD computer security center; the Sandia experience; inspector general's report; and backup and contingency planning. (GHT)

  5. Implementing and testing program PLOTTAB

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1988-01-01

    Enclosed is a description of the magnetic tape or floppy diskette containing the PLOTTAB code package. In addition detailed information is provided on implementation and testing of this code. See part I for mainframe computers; part II for personal computers. These codes are documented in IAEA-NDS-82. (author)

  6. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  7. Treaty implementation

    International Nuclear Information System (INIS)

    Dunn, L.A.

    1990-01-01

    This paper touches on three aspects of the relationship between intelligence and treaty implementation, a two-way association. First, the author discusses the role of intelligence as a basis for compliance monitoring and treaty verification. Second, the author discusses the payoffs of intelligence gathering and of the intelligence process for treaty implementation, in particular on-site inspection. Third, the author goes in another direction and discusses some of the tensions between the intelligence-gathering and treaty-implementation processes, especially with regard to the extensive use of on-site inspection, such as we are likely to see in monitoring compliance with future arms control treaties

  8. Algebraic computing

    International Nuclear Information System (INIS)

    MacCallum, M.A.H.

    1990-01-01

    The implementation of a new computer algebra system is time consuming: designers of general purpose algebra systems usually say it takes about 50 man-years to create a mature and fully functional system. Hence the range of available systems and their capabilities changes little between one general relativity meeting and the next, despite which there have been significant changes in the period since the last report. The introductory remarks aim to give a brief survey of the capabilities of the principal available systems and to highlight one or two trends. A reference to the most recent full survey of computer algebra in relativity is given, together with brief descriptions of Maple, REDUCE, SHEEP and other applications. (author)

  9. Quantum computing for physics research

    International Nuclear Information System (INIS)

    Georgeot, B.

    2006-01-01

    Quantum computers hold great promises for the future of computation. In this paper, this new kind of computing device is presented, together with a short survey of the status of research in this field. The principal algorithms are introduced, with an emphasis on the applications of quantum computing to physics. Experimental implementations are also briefly discussed

  10. Implementation of the Kids-CAT in clinical settings: a newly developed computer-adaptive test to facilitate the assessment of patient-reported outcomes of children and adolescents in clinical practice in Germany.

    Science.gov (United States)

    Barthel, D; Fischer, K I; Nolte, S; Otto, C; Meyrose, A-K; Reisinger, S; Dabs, M; Thyen, U; Klein, M; Muehlan, H; Ankermann, T; Walter, O; Rose, M; Ravens-Sieberer, U

    2016-03-01

    To describe the implementation process of a computer-adaptive test (CAT) for measuring health-related quality of life (HRQoL) of children and adolescents in two pediatric clinics in Germany. The study focuses on the feasibility and user experience with the Kids-CAT, particularly the patients' experience with the tool and the pediatricians' experience with the Kids-CAT Report. The Kids-CAT was completed by 312 children and adolescents with asthma, diabetes or rheumatoid arthritis. The test was applied during four clinical visits over a 1-year period. A feedback report with the test results was made available to the pediatricians. To assess both feasibility and acceptability, a multimethod research design was used. To assess the patients' experience with the tool, the children and adolescents completed a questionnaire. To assess the clinicians' experience, two focus groups were conducted with eight pediatricians. The children and adolescents indicated that the Kids-CAT was easy to complete. All pediatricians reported that the Kids-CAT was straightforward and easy to understand and integrate into clinical practice; they also expressed that routine implementation of the tool would be desirable and that the report was a valuable source of information, facilitating the assessment of self-reported HRQoL of their patients. The Kids-CAT was considered an efficient and valuable tool for assessing HRQoL in children and adolescents. The Kids-CAT Report promises to be a useful adjunct to standard clinical care with the potential to improve patient-physician communication, enabling pediatricians to evaluate and monitor their young patients' self-reported HRQoL.

  11. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  12. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computingIts potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing.Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularit

  13. Computers in writing instruction

    NARCIS (Netherlands)

    Schwartz, Helen J.; van der Geest, Thea; Smit-Kreuzen, Marlies

    1992-01-01

    For computers to be useful in writing instruction, innovations should be valuable for students and feasible for teachers to implement. Research findings yield contradictory results in measuring the effects of different uses of computers in writing, in part because of the methodological complexity of

  14. All-optical reservoir computing.

    Science.gov (United States)

    Duport, François; Schneider, Bendix; Smerieri, Anteo; Haelterman, Marc; Massar, Serge

    2012-09-24

    Reservoir Computing is a novel computing paradigm that uses a nonlinear recurrent dynamical system to carry out information processing. Recent electronic and optoelectronic Reservoir Computers based on an architecture with a single nonlinear node and a delay loop have shown performance on standardized tasks comparable to state-of-the-art digital implementations. Here we report an all-optical implementation of a Reservoir Computer, made of off-the-shelf components for optical telecommunications. It uses the saturation of a semiconductor optical amplifier as nonlinearity. The present work shows that, within the Reservoir Computing paradigm, all-optical computing with state-of-the-art performance is possible.
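
    A minimal software analogue of such a single-nonlinearity, delay-loop reservoir is sketched below; the tanh stands in for the saturable optical amplifier, and the task, mask and parameters are illustrative assumptions rather than the paper's experimental setup.

```python
# Software analogue of a single-nonlinearity, delay-loop reservoir computer
# (a "virtual node" echo state network). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_virtual = 50          # virtual nodes along the delay line
alpha, beta = 0.9, 0.5  # feedback and input scaling
mask = rng.uniform(-1, 1, n_virtual)

def run_reservoir(u):
    """Drive the delay-loop reservoir with input sequence u, return states."""
    states = np.zeros((len(u), n_virtual))
    x = np.zeros(n_virtual)
    for t, ut in enumerate(u):
        # each virtual node combines the masked input with feedback
        # from the previous pass around the delay loop
        x = np.tanh(alpha * np.roll(x, 1) + beta * mask * ut)
        states[t] = x
    return states

# Toy benchmark: predict u(t-2) from u(t), a short-memory task.
u = rng.uniform(-0.8, 0.8, 2000)
target = np.roll(u, 2)
X = run_reservoir(u)[100:]          # drop the warm-up transient
y = target[100:]

# Linear readout trained by ridge regression.
ridge = 1e-6
W = np.linalg.solve(X.T @ X + ridge * np.eye(n_virtual), X.T @ y)
pred = X @ W
print("NMSE:", np.mean((pred - y) ** 2) / np.var(y))
```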

  15. Elusive Implementation

    DEFF Research Database (Denmark)

    Heering Holt, Ditte; Rod, Morten Hulvej; Waldorff, Susanne Boch

    2018-01-01

    in health. However, despite growing support for intersectoral policymaking, implementation remains a challenge. Critics argue that public health has remained naïve about the policy process and a better understanding is needed. Based on ethnographic data, this paper conducts an in-depth analysis of a local......: On the basis of an explorative study among ten Danish municipalities, we conducted an ethnographic study of the development of a municipal-wide implementation strategy for the intersectoral health policy of a medium-sized municipality. The main data sources consist of ethnographic field notes from participant...

  16. Research in computer science

    Science.gov (United States)

    Ortega, J. M.

    1986-01-01

    Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.

  17. Practical scientific computing

    CERN Document Server

    Muhammad, A

    2011-01-01

    Scientific computing is about developing mathematical models, numerical methods and computer implementations to study and solve real problems in science, engineering, business and even social sciences. Mathematical modelling requires deep understanding of classical numerical methods. This essential guide provides the reader with sufficient foundations in these areas to venture into more advanced texts. The first section of the book presents numEclipse, an open source tool for numerical computing based on the notion of MATLAB®. numEclipse is implemented as a plug-in for Eclipse, a leading integ

  18. Implementation Politics

    DEFF Research Database (Denmark)

    Hegland, Troels Jacob; Raakjær, Jesper

    2008-01-01

    level are supplemented or even replaced by national priorities. The chapter concludes that in order to capture the domestic politics associated with CFP implementation in Denmark, it is important to understand the policy process as a synergistic interaction between dominant interests, policy alliances...

  19. Implementation Strategy

    Science.gov (United States)

    1983-01-01

    Meeting the identified needs of Earth science requires approaching EOS as an information system and not simply as one or more satellites with instruments. Six elements of strategy are outlined as follows: implementation of the individual discipline missions as currently planned; use of sustained observational capabilities offered by operational satellites without waiting for the launch of new missions; put first priority on the data system; deploy an Advanced Data Collection and Location System; put a substantial new observing capability in a low Earth orbit in such a way as to provide for sustained measurements; and group instruments to exploit their capabilities for synergism, maximize the scientific utility of the mission, and minimize the costs of implementation where possible.

  20. Implementing Pseudonymity

    Directory of Open Access Journals (Sweden)

    Miranda Mowbray

    2006-03-01

    Full Text Available I will give an overview of some technologies that enable pseudonymity - allowing individuals to reveal or prove information about themselves to others without revealing their full identity. I will describe some functionalities relating to pseudonymity that can be implemented, and some that cannot. My intention is to present enough of the mathematics that underlies technology for pseudonymity to show that it is indeed possible to implement some functionalities that at first glance may appear impossible. In particular, I will show that several of the intended functions of the UK national ID could be provided in a pseudonymous fashion, allowing greater privacy. I will also outline some technology developed at HP Labs which ensures that users’ personal data is released only to software that has been checked to conform to their preferred privacy policies.

  1. Fusion Implementation

    International Nuclear Information System (INIS)

    Schmidt, J.A.

    2002-01-01

    If a fusion DEMO reactor can be brought into operation during the first half of this century, fusion power production can have a significant impact on carbon dioxide production during the latter half of the century. An assessment of fusion implementation scenarios shows that the resource demands and waste production associated with these scenarios are manageable factors. If fusion is implemented during the latter half of this century it will be one element of a portfolio of (hopefully) carbon dioxide limiting sources of electrical power. It is time to assess the regional implications of fusion power implementation. An important attribute of fusion power is the wide range of possible regions of the country, or countries in the world, where power plants can be located. Unlike most renewable energy options, fusion energy will function within a local distribution system and not require costly, and difficult, long distance transmission systems. For example, the East Coast of the United States is a prime candidate for fusion power deployment by virtue of its distance from renewable energy sources. As fossil fuels become less and less available as an energy option, the transmission of energy across bodies of water will become very expensive. On a global scale, fusion power will be particularly attractive for regions separated from sources of renewable energy by oceans

  2. Initial findings from a mixed-methods evaluation of computer-assisted therapy for substance misuse in prisoners: Development, implementation and clinical outcomes from the ‘Breaking Free Health & Justice’ treatment and recovery programme

    Directory of Open Access Journals (Sweden)

    Sarah Elison

    2015-08-01

    Full Text Available Background: Within the United Kingdom’s ‘Transforming Rehabilitation’ agenda, reshaping drug and alcohol interventions in prisons is central to the Government’s approach to addressing substance dependence in the prison population and reducing reoffending. To achieve this, a through-care project to support offenders following release, ‘Gateways’, is taking place providing ‘through the gate’ support to released offenders, including help with organising accommodation, education and employment, and access to a peer supporter. In addition, Gateways is providing access to an evidence-based computer-assisted therapy (CAT) programme for substance misuse, Breaking Free Health & Justice (BFHJ). Developed in partnership with the Ministry of Justice (MoJ) National Offender Management Services (NOMS), and based on a community version of the programme, Breaking Free Online (BFO), BFHJ provides access to clinically-robust techniques based on cognitive behavioural therapy (CBT) and promotes the role of technology-enhanced approaches in recovery from substance misuse. The BFHJ programme is provided via ‘Virtual Campus’ (VC), a secure, web-based learning environment delivered by NOMS and the Department for Business, Innovation and Skills, which has no links to websites not approved by MoJ, and provides prisoners with access to online training courses around work and skills. Providing BFHJ on VC makes the programme the world’s first online healthcare programme to be provided in prisons. Aims: Although there is an emerging evidence-base for the effectiveness of the community version of the BFO programme and its implementation within community treatment settings (Davies, Elison, Ward, & Laudet, 2015; Elison, Davies, & Ward, 2015a, 2015b; Elison, Humphreys, Ward, & Davies, 2013; Elison, Ward, Davies, Lidbetter, et al., 2014; Elison, Ward, Davies, & Moody, 2014), its potential within prison settings requires exploration. This study therefore sought to

  3. Software For Computing Selected Functions

    Science.gov (United States)

    Grant, David C.

    1992-01-01

    Technical memorandum presents collection of software packages in Ada implementing mathematical functions used in science and engineering. Provides programmer with function support in Pascal and FORTRAN, plus support for extended-precision arithmetic and complex arithmetic. Valuable for testing new computers, writing computer code, or developing new computer integrated circuits.

  4. Implementation of a cluster Beowulf

    International Nuclear Information System (INIS)

    Victorino Guzman, Jorge Enrique

    2001-01-01

    Climate models are among the simulation systems that place the greatest demands on computational resources and performance; their high implementation cost makes them difficult to acquire. An alternative that offers good performance at a reasonable cost is the construction of a Beowulf cluster, which emulates the behaviour of a computer with several processors. In the present article we discuss the hardware requirements for building the Beowulf cluster, the software resources needed to implement the CCM3.6 model, and the performance of the Beowulf cluster of the Meteorology Research Group at the National University of Colombia with different numbers of processors

  5. Document Management Projects: implementation guide

    OpenAIRE

    Beatriz Bagoin Guimarães

    2016-01-01

    Records Management System implementation is a complex process that needs to be executed by a multidisciplinary team and involves components of apparently non-related areas such as archival science, computer engineering, law, project management and human resource management. All of them are crucial and complementary to guarantee a full and functional implementation of a system and a perfect fusion with the connected processes and procedures. The purpose of this work is to provide organizations...

  6. Microservices Validation: Methodology and Implementation

    OpenAIRE

    Savchenko, D.; Radchenko, G.

    2015-01-01

    Due to the wide spread of cloud computing, arises actual question about architecture, design and implementation of cloud applications. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservic...

  7. Cloud Computing Governance Lifecycle

    Directory of Open Access Journals (Sweden)

    Soňa Karkošková

    2016-06-01

    Full Text Available Externally provisioned cloud services enable flexible and on-demand sourcing of IT resources. Cloud computing introduces new challenges such as need of business process redefinition, establishment of specialized governance and management, organizational structures and relationships with external providers and managing new types of risk arising from dependency on external providers. There is a general consensus that cloud computing in addition to challenges brings many benefits but it is unclear how to achieve them. Cloud computing governance helps to create business value through obtain benefits from use of cloud computing services while optimizing investment and risk. Challenge, which organizations are facing in relation to governing of cloud services, is how to design and implement cloud computing governance to gain expected benefits. This paper aims to provide guidance on implementation activities of proposed Cloud computing governance lifecycle from cloud consumer perspective. Proposed model is based on SOA Governance Framework and consists of lifecycle for implementation and continuous improvement of cloud computing governance model.

  8. Computer assisted roentgenology

    International Nuclear Information System (INIS)

    Trajkova, N.; Velkova, K.

    1999-01-01

    This is a report on the potential and advantages of computed tomography (CT) as an up-to-date imaging examination method in medicine. The current trend in the development of computer-assisted roentgenology is the implementation of new computer and communication systems supporting diagnostic and therapeutic activities. The application of CT studies is discussed with special reference to the diagnosis and treatment of brain, lung, mediastinal and abdominal diseases. New trends in the implementation of CT are presented, namely CT-assisted biopsy, CT-assisted abscess drainage and drug administration under CT control, as well as the wide use of CT in orthopaedic surgery, otorhinolaryngology, etc. Emphasis is also placed on the important role played by three-dimensional technologies in computer-assisted surgery, leading to a qualitatively new stage in the surgical therapeutic approach to patients

  9. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.   §  Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models §  Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  10. Optical Computing

    OpenAIRE

    Woods, Damien; Naughton, Thomas J.

    2008-01-01

    We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...

  11. Upgrade Software and Computing

    CERN Document Server

    The LHCb Collaboration, CERN

    2018-01-01

    This document reports the Research and Development activities that are carried out in the software and computing domains in view of the upgrade of the LHCb experiment. The implementation of a full software trigger implies major changes in the core software framework, in the event data model, and in the reconstruction algorithms. The increase of the data volumes for both real and simulated datasets requires a corresponding scaling of the distributed computing infrastructure. An implementation plan in both domains is presented, together with a risk assessment analysis.

  12. Layered architecture for quantum computing

    OpenAIRE

    Jones, N. Cody; Van Meter, Rodney; Fowler, Austin G.; McMahon, Peter L.; Kim, Jungsang; Ladd, Thaddeus D.; Yamamoto, Yoshihisa

    2010-01-01

    We develop a layered quantum-computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum-computer architecture based on optical control of quantum dot...

  13. Implementation of computational model for the evaluation of electromagnetic susceptibility of the cables for communication and control of high voltage substations; Implementacao de modelo computacional para a avaliacao da suscetibilidade eletromagnetica dos cabos de comunicacao e controle de subestacoes de alta tensao

    Energy Technology Data Exchange (ETDEWEB)

    Sartin, Antonio C.P. [Companhia de Transmissao de Energia Eletrica Paulista (CTEEP), Bauru, SP (Brazil); Dotto, Fabio R.L.; Sant' Anna, Cezar J.; Thomazella, Rogerio [Fundacao para o Desenvolvimento de Bauru, SP (Brazil); Ulson, Jose A.C.; Aguiar, Paulo R. de [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Bauru, SP (Brazil)

    2009-07-01

    This work shows the implementation of an electromagnetic model, investigated in the literature and adapted, for the supervision, protection, communication and control cables of high-voltage substations. The model was implemented using a computational tool in order to obtain the electromagnetic behavior of the various cables used in a CTEEP substation when subjected to the several sources of electromagnetic interference present in this harsh environment, such as lightning strikes, switching surges and the corona effect. The results obtained in computer simulations were compared with the results of laboratory tests carried out on a set of cables representative of the systems present in 440 kV substations. The study characterized and classified the electromagnetic interference and identified potentially susceptible points in the substation, which contributed to the development of a technical procedure that minimizes unwanted effects in the substation's communication and control systems. The procedure also helps assure maximum reliability and availability in the operation of the company's electrical power system.

  14. A Pharmacy Computer System

    OpenAIRE

    Claudia CIULCA-VLADAIA; Călin MUNTEAN

    2009-01-01

    Objective: To describe an evaluation model, seen from the customer's point of view, for the currently needed pharmacy computer system. Data Sources: literature research, ATTOFARM, WINFARM P.N.S., NETFARM, Info World - PHARMACY MANAGER and HIPOCRATE FARMACIE. Study Selection: Five Pharmacy Computer Systems were selected due to their high rates of implementation at a national level. We used the new criteria recommended by the EUROREC Institute in EHR that modifies the model of data exchanges between the E...

  15. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
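
    As a small illustration of the "shift-and-add" family of algorithms covered in Part II, the following sketch evaluates sine and cosine with CORDIC; double precision is used for clarity, whereas a production implementation would typically work in fixed point.

```python
# Minimal sketch of a "shift-and-add" (CORDIC) evaluation of sine and cosine.
import math

N = 40                                        # number of iterations
angles = [math.atan(2.0 ** -i) for i in range(N)]
# scaling constant K = prod(cos(atan(2^-i)))
K = 1.0
for i in range(N):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Compute (sin(theta), cos(theta)) for |theta| <= pi/2 by micro-rotations."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0
        # each step rotates by +/- atan(2^-i) using only shifts and adds
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x

s, c = cordic_sin_cos(0.7)
print(s - math.sin(0.7), c - math.cos(0.7))   # both errors should be tiny
```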

  16. Computer group

    International Nuclear Information System (INIS)

    Bauer, H.; Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schati, C.; Schmidt, A.; Schwind, D.; Weber, G.

    1983-01-01

    The computer group has been reorganized to take charge of the general purpose computers DEC10 and VAX and of the computer network (Dataswitch, DECnet, IBM - connections to GSI and IPP, preparation for Datex-P). (orig.)

  17. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  18. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  19. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  20. Efficient computation of hashes

    International Nuclear Information System (INIS)

    Lopes, Raul H C; Franqueira, Virginia N L; Hobson, Peter R

    2014-01-01

    The sequential computation of hashes at the core of many distributed storage systems and found, for example, in grid services can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
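
    The contrast drawn above, a sequential Merkle-Damgard-style hash versus a parallel hash tree over chunks, can be sketched as follows; SHA-256 is used purely for illustration, whereas the paper's prototype targets the Keccak/SHA-3 function.

```python
# Sequential hash of a large input versus a parallel hash tree over fixed-size
# chunks. SHA-256 is used here only for illustration.
import hashlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 1 << 20  # 1 MiB leaves

def leaf_hash(chunk: bytes) -> bytes:
    return hashlib.sha256(chunk).digest()

def hash_tree(data: bytes, workers: int = 4) -> bytes:
    """Hash the leaves in parallel, then fold pairs of digests into a root."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        level = list(pool.map(leaf_hash, chunks))
    while len(level) > 1:
        if len(level) % 2:                 # duplicate an odd last node
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    data = b"x" * (8 * CHUNK)
    print("sequential:", hashlib.sha256(data).hexdigest())
    print("tree root :", hash_tree(data).hex())
```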

  1. Experimental Demonstrations of Optical Neural Computers

    OpenAIRE

    Hsu, Ken; Brady, David; Psaltis, Demetri

    1988-01-01

    We describe two experiments in optical neural computing. In the first a closed optical feedback loop is used to implement auto-associative image recall. In the second a perceptron-like learning algorithm is implemented with photorefractive holography.
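
    As a conventional-software stand-in for the auto-associative recall demonstrated optically in the first experiment, the sketch below uses a Hopfield-style network with Hebbian storage; the patterns and parameters are arbitrary illustrative choices, not the optical setup.

```python
# Auto-associative recall with a Hopfield-style network (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
patterns = np.sign(rng.standard_normal((3, 100)))      # three stored +/-1 patterns

# Hebbian weight matrix with zero diagonal.
W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
np.fill_diagonal(W, 0)

# Corrupt one stored pattern and let the network relax back to it.
probe = patterns[0].copy()
flip = rng.choice(100, size=15, replace=False)
probe[flip] *= -1

state = probe
for _ in range(10):                                     # synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with stored pattern:", np.mean(state == patterns[0]))
```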

  2. The programming language 'PEARL' and its implementation

    International Nuclear Information System (INIS)

    Pelz, K.

    1978-01-01

    This paper describes the real time programming language PEARL, its history and design principles and the portability techniques involved in the implementation of a subset of the language on four computer systems. (Auth.)

  3. Computing with synthetic protocells.

    Science.gov (United States)

    Courbet, Alexis; Molina, Franck; Amar, Patrick

    2015-09-01

    In this article we present a new kind of computing device that uses biochemical reaction networks as building blocks to implement logic gates. The architecture of a computing machine relies on these generic and composable building blocks, computation units, that can be used in multiple instances to perform complex boolean functions. Standard logical operations are implemented by biochemical networks, encapsulated and insulated within synthetic vesicles called protocells. These protocells are capable of exchanging energy and information with each other through transmembrane electron transfer. In the paradigm of computation we propose, protoputing, a machine can solve only one problem and therefore has to be built specifically. Thus, the programming phase in the standard computing paradigm is represented in our approach by the set of assembly instructions (specific attachments) that directs the wiring of the protocells that constitute the machine itself. To demonstrate the computing power of protocellular machines, we apply the approach to an NP-complete problem, known to be very demanding in computing power, the 3-SAT problem. We show how to program the assembly of a machine that can verify the satisfiability of a given boolean formula. Then we show how to use the massive parallelism of these machines to verify in less than 20 min all the valuations of the input variables and output a fluorescent signal when the formula is satisfiable or no signal at all otherwise.
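
    The verification task that the protocellular machine performs massively in parallel, checking every valuation of a 3-SAT formula, looks as follows on a conventional computer; the formula is an arbitrary example, not one from the article.

```python
# Brute-force check of every valuation of a small 3-SAT formula.
from itertools import product

# (x1 or not x2 or x3) and (not x1 or x2 or not x3) and (x2 or x3 or not x1)
# Each literal is (variable index, polarity).
clauses = [[(0, True), (1, False), (2, True)],
           [(0, False), (1, True), (2, False)],
           [(1, True), (2, True), (0, False)]]
n_vars = 3

def satisfies(valuation, clauses):
    # a clause is satisfied if at least one of its literals matches the valuation
    return all(any(valuation[i] == pol for i, pol in clause) for clause in clauses)

satisfying = [v for v in product([False, True], repeat=n_vars)
              if satisfies(v, clauses)]
print("satisfiable:", bool(satisfying))
print("models:", satisfying)
```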

  4. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2010-01-01

    in a strong sense: a universal algorithm exists, that is able to execute any program, and is not asymptotically inefficient. A prototype model has been implemented (for now in silico on a conventional computer). This work opens new perspectives on just how computation may be specified at the biological level......., by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only...... executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways and without arcane encodings of data and algorithm); it is also uniform: new “hardware” is not needed to solve new problems; and (last but not least) it is Turing complete...

  5. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  6. Computational Ocean Acoustics

    CERN Document Server

    Jensen, Finn B; Porter, Michael B; Schmidt, Henrik

    2011-01-01

    Since the mid-1970s, the computer has played an increasingly pivotal role in the field of ocean acoustics. Faster and less expensive than actual ocean experiments, and capable of accommodating the full complexity of the acoustic problem, numerical models are now standard research tools in ocean laboratories. The progress made in computational ocean acoustics over the last thirty years is summed up in this authoritative and innovatively illustrated new text. Written by some of the field's pioneers, all Fellows of the Acoustical Society of America, Computational Ocean Acoustics presents the latest numerical techniques for solving the wave equation in heterogeneous fluid–solid media. The authors discuss various computational schemes in detail, emphasizing the importance of theoretical foundations that lead directly to numerical implementations for real ocean environments. To further clarify the presentation, the fundamental propagation features of the techniques are illustrated in color. Computational Ocean A...

  7. Proto-computational Thinking

    DEFF Research Database (Denmark)

    Tatar, Deborah Gail; Harrison, Steve; Stewart, Michael

    2017-01-01

    . Utilizing university students in co-development activities with teachers, the current study located and implemented opportunities for integrated computational thinking in middle school in a large, suburban, mixed socioeconomic status (SES), mixed-race district. The co-development strategy resulted...

  8. Computing Tropical Varieties

    DEFF Research Database (Denmark)

    Speyer, D.; Jensen, Anders Nedergaard; Bogart, T.

    2005-01-01

    The tropical variety of a d-dimensional prime ideal in a polynomial ring with complex coefficients is a pure d-dimensional polyhedral fan. This fan is shown to be connected in codimension one. We present algorithmic tools for computing the tropical variety, and we discuss our implementation...
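
    For context, one common characterization of the tropical variety underlying such computations is recalled below; it is stated from memory of this line of work and should be checked against the paper's own conventions (for example, the min- versus max-convention).

```latex
% Common characterization (check against the paper's conventions): the tropical
% variety of an ideal I in C[x_1,...,x_n] is the set of weight vectors whose
% initial ideal contains no monomial.
\[
  \operatorname{trop}(I) \;=\;
  \{\, w \in \mathbb{R}^n \;:\; \operatorname{in}_w(I) \text{ contains no monomial} \,\}
\]
% For a d-dimensional prime ideal this set is a pure d-dimensional polyhedral
% fan, which is the object the algorithms described above compute.
```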

  9. Document Management Projects: implementation guide

    Directory of Open Access Journals (Sweden)

    Beatriz Bagoin Guimarães

    2016-12-01

    Full Text Available Records Management System implementation is a complex process that needs to be executed by a multidisciplinary team and involves components of apparently non-related areas such as archival science, computer engineering, law, project management and human resource management. All of them are crucial and complementary to guarantee a full and functional implementation of a system and a perfect fusion with the connected processes and procedures. The purpose of this work is to provide organizations with a basic guide to Records Management Project implementation beginning with the steps prior to acquiring the system, following with the main project activities and concluding with the post implementation procedures of continuous improvement and system maintenance.

  10. Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW

    Science.gov (United States)

    Olsen, Mike E.; Liu, Yen; Vinokur, M.; Olsen, Tom

    2004-01-01

    An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera-capable, complex-geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.

  11. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  12. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....

  13. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Full Text Available Spatiotemporal computation implements a variety of different algorithms. When big data are involved, desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  14. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  15. Quantum Computing

    OpenAIRE

    Scarani, Valerio

    1998-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  16. Computing one of Victor Moll's irresistible integrals with computer algebra

    Directory of Open Access Journals (Sweden)

    Christoph Koutschan

    2008-04-01

    Full Text Available We investigate a certain quartic integral from V. Moll's book “Irresistible Integrals” and demonstrate how it can be solved by computer algebra methods, namely by using non-commutative Gröbner bases. We present recent implementations in the computer algebra systems SINGULAR and MATHEMATICA.
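
    For readers who want a numerical companion to the symbolic treatment, the sketch below evaluates the quartic integral directly; the integrand form is quoted from memory of Moll's book and should be checked against the source.

```python
# Numerical evaluation of the quartic integral N(a; m) =
# integral_0^inf dx / (x^4 + 2*a*x^2 + 1)^(m+1)  (form quoted from memory).
from scipy.integrate import quad

def N(a, m):
    integrand = lambda x: 1.0 / (x**4 + 2 * a * x**2 + 1) ** (m + 1)
    value, _ = quad(integrand, 0, float("inf"))
    return value

# Sanity check: for a = 1, m = 0 the integrand is 1/(x^2+1)^2, whose integral
# over [0, inf) is pi/4.
print(N(1.0, 0))
```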

  17. Implementing ‘Site BIM’

    DEFF Research Database (Denmark)

    Davies, Richard; Harty, Chris

    2013-01-01

    Numerous Building Information Modelling (BIM) tools are well established and potentially beneficial in certain uses. However, issues of adoption and implementation persist, particularly for on-site use of BIM tools in the construction phase. We describe an empirical case-study of the implementation...... of an innovative ‘Site BIM’ system on a major hospital construction project. The main contractor on the project developed BIM-enabled tools to allow site workers using mobile tablet personal computers to access design information and to capture work quality and progress data on-site. Accounts show that ‘Site BIM...

  18. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  19. RXY/DRXY-a postprocessing graphical system for scientific computation

    International Nuclear Information System (INIS)

    Jin Qijie

    1990-01-01

    Scientific computing requires computer graphics functions for visualization. The development objectives and functions of a postprocessing graphical system for scientific computation are described, and its implementation is also briefly described

  20. The Status of Ubiquitous Computing.

    Science.gov (United States)

    Brown, David G.; Petitto, Karen R.

    2003-01-01

    Explains the prevalence and rationale of ubiquitous computing on college campuses--teaching with the assumption or expectation that all faculty and students have access to the Internet--and offers lessons learned by pioneering institutions. Lessons learned involve planning, technology, implementation and management, adoption of computer-enhanced…

  1. Implementation of the monitor concept

    Energy Technology Data Exchange (ETDEWEB)

    Gerstenberger, M.

    1982-01-01

    Sequential and parallel computer programs are contrasted, and the problems of implementing compilers are stated, with special reference to the Pascal language. Process and monitor data types in computer programming are described, various procedures are listed and the monitor concept is applied to a generator-user problem. A Pascal initiator program is listed. It is claimed that the monitor approach can yield lower fault levels than assembler or semaphore approaches. It is pointed out that monitors for program synchronisation are only applicable for common, and not for distributed, storage systems. 9 references.

  2. Fast Computing for Distance Covariance

    OpenAIRE

    Huo, Xiaoming; Szekely, Gabor J.

    2014-01-01

    Distance covariance and distance correlation have been widely adopted in measuring dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be...
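
    The direct O($n^2$) computation referred to above can be sketched as follows for real-valued samples; this is the baseline, definition-based approach, not the faster method proposed in the paper.

```python
# Direct O(n^2) computation of squared sample distance covariance via
# double-centered pairwise distance matrices (univariate example, random data).
import numpy as np

def distance_covariance_sq(x, y):
    """Sample dCov^2 computed straight from the definition."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.abs(x[:, None] - x[None, :])          # pairwise distances, O(n^2)
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()

rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
y = x**2 + 0.1 * rng.standard_normal(1000)       # nonlinearly dependent example
print(distance_covariance_sq(x, y))
```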

  3. Physical computation and cognitive science

    CERN Document Server

    Fresco, Nir

    2014-01-01

    This book presents a study of digital computation in contemporary cognitive science. Digital computation is a highly ambiguous concept, as there is no common core definition for it in cognitive science. Since this concept plays a central role in cognitive theory, an adequate cognitive explanation requires an explicit account of digital computation. More specifically, it requires an account of how digital computation is implemented in physical systems. The main challenge is to deliver an account encompassing the multiple types of existing models of computation without ending up in pancomputationalism, that is, the view that every physical system is a digital computing system. This book shows that only two accounts, among the ones examined by the author, are adequate for explaining physical computation. One of them is the instructional information processing account, which is developed here for the first time.   “This book provides a thorough and timely analysis of differing accounts of computation while adv...

  4. Implementierung und Evaluierung eines tutoriell betreuten elektronischen Biochemie-Praktikumsversuchs "Polymerase-Kettenreaktion (PCR)" im vorklinischen Studienabschnitt [Implementation and Evaluation of a Tutor-Supported Computer-Based Practical Biochemistry Course "Polymerase Chain Reaction (PCR)" in Preclinical Education]

    Directory of Open Access Journals (Sweden)

    Kröncke, Klaus-Dietrich

    2008-08-01

    Full Text Available [english] Background: The polymerase chain reaction (PCR) is an example of a technology that is not easy for many medical students to understand. We investigated whether a tutor-supported e-learning teaching unit is suitable for teaching the PCR to undergraduate medical students. Methods: We developed a computer-based practical course (attendance is compulsory) that uses an open-source system as a learning platform. The students learned to search scientific medical databases for PCR-relevant data. In addition, they learned the essential features of the PCR with the aid of embedded textual information and audiovisual animations. To check that the learning objectives were fulfilled, the students had to solve medically related PCR tasks on the computer. Results: In total, 311 students went through the course. They were satisfied with the e-learning teaching unit and evaluated it very positively, independently of their prior knowledge of the PCR. Students with low levels of computer skills did not feel over-challenged. Conclusion: Our results show that a computer-based practical training course is an excellent option for teaching undergraduate medical students complex technologies that could otherwise only be taught in a laboratory at great expense of time and effort. [german] Objective: To investigate whether an e-learning practical course is suitable for making the polymerase chain reaction (PCR) comprehensible to preclinical medical students. Methods: A computer-based practical course was developed that includes both searching scientific medical databases for PCR-relevant information and self-study units on the PCR. Tasks from medical PCR diagnostics served as checks of the learning objectives. Student acceptance of the course was measured with a questionnaire containing 19 items. Results: 311 students completed the

  5. Computer security engineering management

    International Nuclear Information System (INIS)

    McDonald, G.W.

    1988-01-01

    For best results, computer security should be engineered into a system during its development rather than appended later. This paper addresses the implementation of computer security in eight stages through the life cycle of the system, starting with the definition of security policies and ending with continuing support for the security aspects of the system throughout its operational life. Security policy is addressed through successive decomposition of security objectives (through policy, standard, and control stages) into system security requirements. This is followed by a discussion of computer security organization and responsibilities. Next the paper turns to analysis and management of security-related risks, followed by design and development of the system itself. Discussion of security test and evaluation preparations and of approval to operate (certification and accreditation) is followed by computer security training for users and by coverage of life-cycle support for the security of the system

  6. Minimal ancilla mediated quantum computation

    International Nuclear Information System (INIS)

    Proctor, Timothy J.; Kendon, Viv

    2014-01-01

    Schemes of universal quantum computation in which the interactions between the computational elements, in a computational register, are mediated by some ancillary system are of interest due to their relevance to the physical implementation of a quantum computer. Furthermore, reducing the level of control required over both the ancillary and register systems has the potential to simplify any experimental implementation. In this paper we consider how to minimise the control needed to implement universal quantum computation in an ancilla-mediated fashion. Considering computational schemes which require no measurements and hence evolve by unitary dynamics for the global system, we show that when employing an ancilla qubit there are certain fixed-time ancilla-register interactions which, along with ancilla initialisation in the computational basis, are universal for quantum computation with no additional control of either the ancilla or the register. We develop two distinct models based on locally inequivalent interactions and we then discuss the relationship between these unitary models and the measurement-based ancilla-mediated models known as ancilla-driven quantum computation. (orig.)
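
    The flavour of ancilla mediation can be checked numerically. The sketch below is an illustration only, not the specific fixed-interaction, locally inequivalent schemes constructed in the paper: it verifies that two-body ancilla-register unitaries alone, with no measurements, reproduce an entangling CZ gate on the register while leaving the ancilla untouched.

        import numpy as np

        def cz(n, i, j):
            """CZ between qubits i and j of an n-qubit system (qubit 0 is leftmost)."""
            U = np.eye(2 ** n, dtype=complex)
            for b in range(2 ** n):
                if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
                    U[b, b] = -1.0
            return U

        def swap(n, i, j):
            """SWAP between qubits i and j of an n-qubit system."""
            U = np.zeros((2 ** n, 2 ** n))
            for b in range(2 ** n):
                bi, bj = (b >> (n - 1 - i)) & 1, (b >> (n - 1 - j)) & 1
                b2 = b & ~(1 << (n - 1 - i)) & ~(1 << (n - 1 - j))
                b2 |= bj << (n - 1 - i) | bi << (n - 1 - j)
                U[b2, b] = 1.0
            return U

        # Qubit 0 is the ancilla, qubits 1 and 2 form the register. Only
        # ancilla-register interactions are applied, yet the net unitary is a
        # CZ between the two register qubits and the identity on the ancilla.
        mediated = swap(3, 0, 1) @ cz(3, 0, 2) @ swap(3, 0, 1)
        print(np.allclose(mediated, cz(3, 1, 2)))   # True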

  7. Computational Medicine

    DEFF Research Database (Denmark)

    Nygaard, Jens Vinge

    2017-01-01

    The Health Technology Program at Aarhus University applies computational biology to investigate the heterogeneity of tumours...

  8. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  9. Computer implemented land cover classification using LANDSAT MSS digital data: A cooperative research project between the National Park Service and NASA. 3: Vegetation and other land cover analysis of Shenandoah National Park

    Science.gov (United States)

    Cibula, W. G.

    1981-01-01

    Four LANDSAT frames, each corresponding to one of the four seasons, were spectrally classified and processed using NASA-developed computer programs. One data set was selected, or two or more data sets were merged, to improve surface cover classifications. Selected areas representing each spectral class were chosen and transferred to USGS 1:62,500 topographic maps for field use. Ground truth data were gathered to verify the accuracy of the classifications. Acreages were computed for each of the land cover types. The application of elevational data to seasonal LANDSAT frames resulted in the separation of high elevation meadows (both with and without recently emergent perennial vegetation) as well as areas in oak forests which have an evergreen understory, as opposed to other areas which do not.
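
    The NASA programs used in this study are not reproduced here, but the core spectral-classification step can be sketched generically: stack the multi-seasonal bands, cluster each pixel's spectral vector into classes, and tally acreage per class. The sketch below is hypothetical throughout (synthetic data, an assumed nominal pixel footprint, and k-means in place of the original classifier).

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical stack: four seasonal MSS-like scenes, four bands each, 200 x 200 pixels.
        rng = np.random.default_rng(1)
        stack = rng.random((16, 200, 200), dtype=np.float32)

        # One spectral vector per pixel; unsupervised clustering into land-cover classes.
        pixels = stack.reshape(stack.shape[0], -1).T            # (n_pixels, n_bands)
        labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pixels)
        class_map = labels.reshape(200, 200)                    # thematic class image

        # Acreage per spectral class, assuming a nominal 79 m x 79 m pixel footprint.
        pixel_acres = 79 * 79 / 4046.86
        for c, count in zip(*np.unique(class_map, return_counts=True)):
            print(f"class {c}: {count * pixel_acres:.0f} acres")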

  10. Interfacing the Paramesh Computational Libraries to the Cactus Computational Framework, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We will design and implement an interface between the Paramesh computational libraries, developed and used by groups at NASA GSFC, and the Cactus computational...

  11. Quantum computers and quantum computations

    International Nuclear Information System (INIS)

    Valiev, Kamil' A

    2005-01-01

    This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)

  12. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  13. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  14. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  15. Spatial Computation

    Science.gov (United States)

    2003-12-01

    Computation and today’s microprocessors with the approach to operating system architecture, and the controversy between microkernels and monolithic kernels... Both Spatial Computation and microkernels break away a relatively monolithic architecture into individual lightweight pieces, well specialized... for their particular functionality. Spatial Computation removes global signals and control, in the same way microkernels remove the global address

  16. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described
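
    As a rough illustration of the kind of computation involved, and not the report's Green's-function boundary conditions, field-line averaging, or Hill's vortex checks, the fixed-boundary Grad-Shafranov problem Delta* psi = psi_RR - psi_R/R + psi_ZZ = f can be relaxed on a rectangle with a simple Jacobi sweep. The constant Solov'ev-like source and the grid below are assumptions chosen so the sketch needs no profile functions.

        import numpy as np

        # Grid in (R, Z); psi = 0 on the boundary of the box (fixed-boundary problem).
        nr = nz = 33
        R = np.linspace(0.5, 1.5, nr)
        Z = np.linspace(-0.5, 0.5, nz)
        dr, dz = R[1] - R[0], Z[1] - Z[0]
        A, B = 1.0, 1.0
        f = -(A * R[:, None] ** 2 + B) * np.ones((nr, nz))      # Solov'ev-like constant source

        psi = np.zeros((nr, nz))
        diag = 2.0 / dr ** 2 + 2.0 / dz ** 2
        for _ in range(5000):                                   # plain Jacobi relaxation
            psi_rr = (psi[2:, 1:-1] - 2 * psi[1:-1, 1:-1] + psi[:-2, 1:-1]) / dr ** 2
            psi_r  = (psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * dr)
            psi_zz = (psi[1:-1, 2:] - 2 * psi[1:-1, 1:-1] + psi[1:-1, :-2]) / dz ** 2
            residual = psi_rr - psi_r / R[1:-1, None] + psi_zz - f[1:-1, 1:-1]
            psi[1:-1, 1:-1] += residual / diag                  # update interior nodes only

        print("peak flux on grid:", psi.max())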

  17. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have contributed substantially to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. By using virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  18. Asynchronous Multiparty Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Geisler, Martin; Krøigaard, Mikkel

    2009-01-01

    guarantees termination if the adversary allows a preprocessing phase to terminate, in which no information is released. The communication complexity of this protocol is the same as that of a passively secure solution up to a constant factor. It is secure against an adaptive and active adversary corrupting...... less than n/3 players. We also present a software framework for implementation of asynchronous protocols called VIFF (Virtual Ideal Functionality Framework), which allows automatic parallelization of primitive operations such as secure multiplications, without having to resort to complicated...... multithreading. Benchmarking of a VIFF implementation of our protocol confirms that it is applicable to practical non-trivial secure computations....
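
    VIFF itself is not reproduced here, but the Shamir secret-sharing primitive underlying protocols of this kind is easy to sketch. The code below is a standalone, passively secure illustration only; the prime field, party count, and threshold are assumptions, and none of the asynchronous or actively secure machinery of the paper is shown. It demonstrates sharing, local addition on shares, and reconstruction by Lagrange interpolation.

        import random

        P = 2 ** 61 - 1                     # prime field modulus (an assumption)

        def share(secret, n=7, t=2):
            """Shamir-share `secret` among n parties; any t+1 shares reconstruct."""
            coeffs = [secret] + [random.randrange(P) for _ in range(t)]
            return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 over the prime field."""
            total = 0
            for xi, yi in shares:
                num = den = 1
                for xj, _ in shares:
                    if xj != xi:
                        num = num * -xj % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        a, b = share(20), share(22)
        # Addition of secrets is purely local on the shares; no interaction needed.
        summed = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a, b)]
        print(reconstruct(summed[:3]))      # any t+1 = 3 shares recover 42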

  19. Roadmap for Peridynamic Software Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The application of peridynamics for engineering analysis requires an efficient and robust software implementation. Key elements include processing of the discretization, the proximity search for identification of pairwise interactions, evaluation of the constitutive model, application of a bond-damage law, and contact modeling. Additional requirements may arise from the choice of time integration scheme, for example estimation of the maximum stable time step for explicit schemes, and construction of the tangent stiffness matrix for many implicit approaches. This report summarizes progress to date on the software implementation of the peridynamic theory of solid mechanics. Discussion is focused on parallel implementation of the meshfree discretization scheme of Silling and Askari [33] in three dimensions, although much of the discussion applies to computational peridynamics in general.
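
    Of the elements listed, the proximity search is the easiest to illustrate in isolation. The sketch below is not the report's parallel implementation: it builds bond lists for a hypothetical uniform discretization with a k-d tree, assuming a horizon of roughly three grid spacings.

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical meshfree discretization of a unit cube into cell-centred nodes.
        dx = 0.05
        centres = np.arange(0.0, 1.0, dx) + dx / 2.0
        nodes = np.array(np.meshgrid(centres, centres, centres)).reshape(3, -1).T
        volumes = np.full(len(nodes), dx ** 3)            # nodal volumes for later quadrature

        horizon = 3.015 * dx                              # assumed horizon, ~3 grid spacings
        tree = cKDTree(nodes)
        families = tree.query_ball_point(nodes, r=horizon)

        # Keep each pairwise interaction (bond) once, as (i, j) with i < j.
        bonds = [(i, j) for i, family in enumerate(families) for j in family if j > i]
        print(f"{len(nodes)} nodes, {len(bonds)} bonds, "
              f"mean family size {2 * len(bonds) / len(nodes):.1f}")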

  20. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn