WorldWideScience

Sample records for computing fy07-08 implementation

  1. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: (1) Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. (2) Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. (3) Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  2. FY07-08 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Baron, A L

    2006-09-06

    one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools. Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation. Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure. Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  3. Visual implementation of computer communication

    OpenAIRE

    Gunnarsson, Tobias; Johansson, Hans

    2010-01-01

    Communication is a fundamental part of life, and during the 20th century several new ways of communicating were developed, from the first telegraph, which made it possible to send messages over long distances, to radio communication and the telephone. In the last decades, computer-to-computer communication at high speed has become increasingly important, and so has the need for understanding computer communication. Since data communication today works in speeds that are so high...

  4. Implementing and developing cloud computing applications

    CERN Document Server

    Sarna, David E Y

    2010-01-01

    From small start-ups to major corporations, companies of all sizes have embraced cloud computing for the scalability, reliability, and cost benefits it can provide. It has even been said that cloud computing may have a greater effect on our lives than the PC and dot-com revolutions combined. Filled with comparative charts and decision trees, Implementing and Developing Cloud Computing Applications explains exactly what it takes to build robust and highly scalable cloud computing applications in any organization. Covering the major commercial offerings available, it provides authoritative guidance

  5. Implementing regularization implicitly via approximate eigenvector computation

    OpenAIRE

    Mahoney, Michael W.; Orecchia, Lorenzo

    2010-01-01

    Regularization is a powerful technique for extracting useful information from noisy data. Typically, it is implemented by adding some sort of norm constraint to an objective function and then exactly optimizing the modified objective function. This procedure often leads to optimization problems that are computationally more expensive than the original problem, a fact that is clearly problematic if one is interested in large-scale applications. On the other hand, a large body of empirical work...
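
    As a concrete illustration of the kind of explicit regularization the abstract contrasts with its implicit scheme (a standard textbook example, not taken from the cited paper), ridge regression adds a squared-norm penalty to a least-squares objective and then optimizes the modified objective exactly:

      \[
        \hat{x}_{\lambda} \;=\; \arg\min_{x}\; \|Ax - b\|_2^2 + \lambda \|x\|_2^2 , \qquad \lambda > 0 ,
      \]
      with the closed-form solution
      \[
        \hat{x}_{\lambda} = (A^{\top} A + \lambda I)^{-1} A^{\top} b ,
      \]
      where larger values of the penalty parameter trade fidelity to the noisy data for stability of the solution.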

  6. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution’s physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning

  7. Quantum computing implementations with neutral particles

    DEFF Research Database (Denmark)

    Negretti, Antonio; Treutlein, Philipp; Calarco, Tommaso

    2011-01-01

    We review quantum information processing with cold neutral particles, that is, atoms or polar molecules. First, we analyze the best suited degrees of freedom of these particles for storing quantum information, and then we discuss both single- and two-qubit gate implementations. We focus our discussion mainly on collisional quantum gates, which are best suited for atom-chip-like devices, as well as on gate proposals conceived for optical lattices. Additionally, we analyze schemes both for cold atoms confined in optical cavities and hybrid approaches to entanglement generation, and we show how optimal control theory might be a powerful tool to enhance the speed up of the gate operations as well as to achieve high fidelities required for fault tolerant quantum computation.

  8. Implementing interactive computing in an object-oriented environment

    Directory of Open Access Journals (Sweden)

    Frederic Udina

    2000-04-01

    Statistical computing when input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
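
    The directed-graph control of computational flow described above can be illustrated with a minimal sketch (written here in Python rather than the object-oriented statistical environment the paper targets; all class and variable names are illustrative): each node caches its value and is recomputed only when one of its inputs has been invalidated, so only strictly required computations are carried out.

      # Minimal sketch: lazy recomputation over a directed dependency graph.
      class Node:
          def __init__(self, compute, deps=()):
              self.compute = compute        # function of the dependency values
              self.deps = list(deps)        # upstream nodes
              self.children = []            # downstream nodes, used for invalidation
              for d in self.deps:
                  d.children.append(self)
              self.value = None
              self.stale = True

          def invalidate(self):
              # A changed input marks itself and everything downstream as stale.
              self.stale = True
              for c in self.children:
                  c.invalidate()

          def get(self):
              # Only stale nodes are recomputed; fresh values come from the cache.
              if self.stale:
                  self.value = self.compute(*(d.get() for d in self.deps))
                  self.stale = False
              return self.value

      # Illustrative flow: raw data -> frequency counts (as for a histogram display).
      data = Node(lambda: [1, 2, 2, 3])
      counts = Node(lambda xs: {v: xs.count(v) for v in set(xs)}, deps=[data])
      print(counts.get())   # computed on first request
      print(counts.get())   # served from the cache, nothing recomputed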

  9. Computational procedures for implementing the optimal control ...

    African Journals Online (AJOL)

    The Extended Conjugate Gradient Method, ECGM, [1] was used to compute the control and state gradients of the unconstrained optimal control problem for higher-order nondispersive wave. Also computed are the descent directions for both the control and the state variables. These functions are the most important ...

  10. Method for implementation of recursive hierarchical segmentation on parallel computers

    Science.gov (United States)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates when performing a convergence check.
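
    A hedged sketch of the control flow the abstract describes (recursive division of the image, a switch from parallel to serial execution at an intermediate recursion level, and a bottom level where division stops); this is illustrative Python with toy helper functions, not the patented segmentation algorithm itself.

      from concurrent.futures import ProcessPoolExecutor

      def split_into_quadrants(img):
          # Toy stand-in: divide a 2-D list of pixels into four sections.
          h, w = len(img) // 2, len(img[0]) // 2
          return [[r[:w] for r in img[:h]], [r[w:] for r in img[:h]],
                  [r[:w] for r in img[h:]], [r[w:] for r in img[h:]]]

      def segment_section(img):
          # Toy leaf "segmentation": threshold each pixel at the section mean.
          flat = [p for row in img for p in row]
          mean = sum(flat) / len(flat)
          return [[int(p > mean) for p in row] for row in img]

      def merge_segmentations(parts):
          # Placeholder merge; the real algorithm reconciles region labels here.
          return parts

      def segment(img, level=0, bottom_level=2, switch_level=1):
          if level >= bottom_level:                 # bottom level: stop dividing
              return segment_section(img)
          quads = split_into_quadrants(img)
          if level < switch_level:                  # parallel recursion above the switch level
              with ProcessPoolExecutor() as pool:
                  futures = [pool.submit(segment, q, level + 1, bottom_level, switch_level)
                             for q in quads]
                  parts = [f.result() for f in futures]
          else:                                     # serial recursion below the switch level
              parts = [segment(q, level + 1, bottom_level, switch_level) for q in quads]
          return merge_segmentations(parts)

      if __name__ == "__main__":
          image = [[(3 * i + 7 * j) % 16 for j in range(8)] for i in range(8)]
          print(segment(image))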

  11. Software Defined Radio Datalink Implementation Using PC-Type Computers

    National Research Council Canada - National Science Library

    Zafeiropoulos, Georgios

    2003-01-01

    The objective of this thesis was to examine the feasibility of implementation and the performance of a Software Defined Radio datalink, using a common PC type host computer and a high level programming language...

  12. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared memory machine Cray Y-MP 8/864 and the distributed memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.

  13. Computers in schools: implementing for sustainability. Why the truth ...

    African Journals Online (AJOL)

    This study investigates influences on the sustainability of a computers-in-schools project during the implementation phase thereof. The Computer Assisted Learning in Schools (CALIS) Project (1992–1996) is the unit of analysis. A qualitative case study research design is used to elicit data, in the form of participant ...

  14. Implementation of DFT application on ternary optical computer

    Science.gov (United States)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Owing to its characteristics of a huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which need a large amount of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of a ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
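
    The DFT itself is what makes the application naturally parallel: every output bin is an independent sum over the input samples, so the bins can be computed fully in parallel or in groups for a partially parallel scheme. A minimal reference sketch in ordinary Python (nothing here is specific to the ternary optical computer):

      import cmath

      def dft(x):
          """Naive O(N^2) DFT: X[k] = sum_n x[n] * exp(-2j*pi*k*n/N).

          Each X[k] depends only on the whole input vector x, never on another
          output bin, which is why the bins can be distributed across processors
          (or, in the paper's setting, across the many data bits of the TOC).
          """
          N = len(x)
          return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
                  for k in range(N)]

      print(dft([1, 0, 0, 0]))   # impulse in -> flat spectrum out: four values equal to 1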

  15. Design and implementation of a local computer network

    Energy Technology Data Exchange (ETDEWEB)

    Fortune, P. J.; Lidinsky, W. P.; Zelle, B. R.

    1977-01-01

    An intralaboratory computer communications network was designed and is being implemented at Argonne National Laboratory. Parameters which were considered to be important in the network design are discussed; and the network, including its hardware and software components, is described. A discussion of the relationship between computer networks and distributed processing systems is also presented. The problems which the network is designed to solve and the consequent network structure represent considerations which are of general interest. 5 figures.

  16. Faculty of Education Students' Computer Self-Efficacy Beliefs and Their Attitudes towards Computers and Implementing Computer Supported Education

    Science.gov (United States)

    Berkant, Hasan Güner

    2016-01-01

    This study investigates faculty of education students' computer self-efficacy beliefs and their attitudes towards computers and implementing computer supported education. This study is descriptive and based on a correlational survey model. The final sample consisted of 414 students studying in the faculty of education of a Turkish university. The…

  17. Implementing and assessing computational modeling in introductory mechanics

    CERN Document Server

    Caballero, Marcos D; Schatz, Michael F

    2011-01-01

    Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term 1357 students in this course solved a suite of fourteen computational modeling homework questions delivered using an online commercial course management system. Their proficiency with computational modeling was evaluated in a proctored environment using a novel central force problem. The majority of students (60.4%) successfully completed the evaluation. Analysis of erroneous student-submitted programs indicated that a small set of student errors explained why most programs failed. We discuss the design and implementation of the computational modeling homework and evaluation, the results from the evaluation and the implications for instruction in computational modeling in introductory STEM courses.

  18. Learning Computer Programming: Implementing a Fractal in a Turing Machine

    Science.gov (United States)

    Pereira, Hernane B. de B.; Zebende, Gilney F.; Moret, Marcelo A.

    2010-01-01

    It is common to start a course on computer programming logic by teaching the algorithm concept from the point of view of natural languages, but in a schematic way. In this sense we note that the students have difficulties in understanding and implementing the problems proposed by the teacher. The main idea of this paper is to show that the…

  19. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  20. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  1. Computational Toxicology as Implemented by the US EPA ...

    Science.gov (United States)

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T

  2. Macro Monte Carlo: Clinical Implementation in a Distributed Computing Environment

    Science.gov (United States)

    Neuenschwander, H.; Volken, W.; Frei, D.; Cris, C.; Born, E.; Mini, R.

    The Monte Carlo (MC) method is the most accurate method for the calculation of dose distributions in radiotherapy treatment planning (RTP) for high energy electron beams, if the source of electrons and the patient geometry can be accurately modeled and a sufficiently large number of electron histories are simulated. Due to the long calculation times, MC methods have long been considered as impractical for clinical use. Two main advances have improved the situation and made clinical MC RTP feasible: The development of highly specialized radiotherapy MC systems, and the ever-falling price/performance ratio of computer hardware. Moreover, MC dose calculation codes can easily be parallelized, which allows their implementation as distributed computing systems in networked departments. This paper describes the implementation and clinical validation of the Macro Monte Carlo (MMC) method, a fast method for clinical electron beam treatment planning.

  3. Implementation of computer assisted assessment: lessons from the literature

    OpenAIRE

    Sim, Gavin; Holifield, Phil; Brown, Martin

    2004-01-01

    This paper draws attention to literature surrounding the subject of computer-assisted assessment (CAA). A brief overview of traditional methods of assessment is presented, highlighting areas of concern in existing techniques. CAA is then defined, and instances of its introduction in various educational spheres are identified, with the main focus of the paper concerning the implementation of CAA. Through referenced articles, evidence is offered to inform practitioners, and direct further resea...

  4. Vertical Load Distribution for Cloud Computing via Multiple Implementation Options

    Science.gov (United States)

    Phan, Thomas; Li, Wen-Syan

    Cloud computing looks to deliver software as a provisioned service to end users, but the underlying infrastructure must be sufficiently scalable and robust. In our work, we focus on large-scale enterprise cloud systems and examine how enterprises may use a service-oriented architecture (SOA) to provide a streamlined interface to their business processes. To scale up the business processes, each SOA tier usually deploys multiple servers for load distribution and fault tolerance, a scenario which we term horizontal load distribution. One limitation of this approach is that load cannot be distributed further when all servers in the same tier are loaded. In complex multi-tiered SOA systems, a single business process may actually be implemented by multiple different computation pathways among the tiers, each with different components, in order to provide resilience and scalability. Such multiple implementation options give opportunities for vertical load distribution across tiers. In this chapter, we look at a novel request routing framework for SOA-based enterprise computing with multiple implementation options that takes into account the options of both horizontal and vertical load distribution.
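
    A hedged sketch of the routing idea only (illustrative names and thresholds, not the framework described in the chapter): requests prefer the default implementation pathway, and when some tier of that pathway is saturated the router falls back to an alternative pathway, which is what makes vertical load distribution possible.

      # Illustrative vertical load distribution across alternative implementation pathways.
      PATHWAYS = {
          "primary":   ["web-1", "app-1", "db-1"],
          "alternate": ["web-2", "cache-1", "db-replica"],
      }
      CAPACITY = 10                                   # max concurrent requests per server
      LOAD = {s: 0 for servers in PATHWAYS.values() for s in servers}

      def route(request_id):
          # Pick the first pathway whose every tier still has headroom.
          for name, servers in PATHWAYS.items():
              if all(LOAD[s] < CAPACITY for s in servers):
                  for s in servers:
                      LOAD[s] += 1                    # reserve capacity along the pathway
                  return request_id, name
          raise RuntimeError("all implementation pathways are saturated")

      print(route("req-001"))                         # -> ('req-001', 'primary')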

  5. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  6. WISDOM: A prototype office implementation of the SRS computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    Eckert, D.W.

    1991-01-01

    The Savannah River Site has historically allowed the purchase of IBM MS-DOS and Apple Macintosh computers based on user request. As workgroup file services are implemented on the Local Area Network, users desire to share data to a greater extent. This has resulted in mixed groups who now wish to share data files cleanly among dissimilar operating systems. WISDOM was designed as a system of network services, workstation platform standards, installation procedures, and application choices which would address data integration from the user perspective. Novell Netware provides a basis for file transfer, while Microsoft Windows supplies the GUI necessary to complement the Macintosh. Central administration, networking protocols, host connectivity, and memory management restrictions required imaginative solutions. This paper describes the current status of the 500-workstation prototype; user acceptance and training; and outstanding issues to be addressed. Details are given on the design philosophy, some of the technology utilized, the implementation process, and future directions.

  8. Implementation of computer assisted assessment: lessons from the literature

    Directory of Open Access Journals (Sweden)

    Gavin Sim

    2004-12-01

    This paper draws attention to literature surrounding the subject of computer-assisted assessment (CAA). A brief overview of traditional methods of assessment is presented, highlighting areas of concern in existing techniques. CAA is then defined, and instances of its introduction in various educational spheres are identified, with the main focus of the paper concerning the implementation of CAA. Through referenced articles, evidence is offered to inform practitioners, and direct further research into CAA from a technological and pedagogical perspective. This includes issues relating to interoperability of questions, security, test construction and testing higher cognitive skills. The paper concludes by suggesting that an institutional strategy for CAA coupled with staff development in test construction for a CAA environment can increase the chances of successful implementation.

  9. Implementation of Computer Assisted Test Selection System in Local Governments

    Directory of Open Access Journals (Sweden)

    Abdul Azis Basri

    2016-05-01

    As an evaluative approach to civil servant selection across all areas of government, the Computer Assisted Test (CAT) selection system began to be applied in 2013. When it was first implemented in all areas in 2014, the selection system ran into trouble in several respects, such as the registration procedure and the passing grade. The main objective of this essay was to describe the implementation of the new selection system for civil servants in local governments and to assess the effectiveness of this selection system. The essay combined a literature study with a field survey in which data were collected through interviews, observations, and documentation from various sources; the collected data were analyzed through reduction, data display, and verification to reach conclusions. The results showed that, despite problems in a few parts of the system, such as the registration phase, almost all phases of the CAT selection system in local government areas (preparation, implementation, and result processing) worked well. The system also fulfilled two of the three criteria of an effective selection system, namely accuracy and trustworthiness. Therefore, this selection system can be considered an effective way to select new civil servants. As a suggestion, local governments should prepare thoroughly for all phases of the test, establish good feedback as an evaluation mechanism, and work together with the central government to identify, fix, and improve supporting infrastructure and the competency of local residents.

  10. Precision Medicine and PET/Computed Tomography: Challenges and Implementation.

    Science.gov (United States)

    Subramaniam, Rathan M

    2017-01-01

    Precision Medicine is about selecting the right therapy for the right patient, at the right time, specific to the molecular targets expressed by disease or tumors, in the context of patient's environment and lifestyle. Some of the challenges for delivery of precision medicine in oncology include biomarkers for patient selection for enrichment-precision diagnostics, mapping out tumor heterogeneity that contributes to therapy failures, and early therapy assessment to identify resistance to therapies. PET/computed tomography offers solutions in these important areas of challenges and facilitates implementation of precision medicine. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Implementation of Scientific Computing Applications on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Guochun Shi

    2009-01-01

    The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.

  12. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    Science.gov (United States)

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
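
    A hedged sketch of the three fixed distribution strategies named above, reduced to the rule each one uses for assigning a particle's chemistry (ISAT) evaluation to a rank; this is illustrative Python only, not the x2f_mpi library.

      import random

      def assign_rank(owner_rank, n_ranks, strategy, preferred_rank=None):
          """Choose the rank that evaluates the chemistry for one particle.

          PLP  -- purely local processing: evaluate on the owning rank.
          URAN -- uniformly random distribution over all ranks.
          PREF -- preferential distribution: send to a preferred rank (e.g. one
                  whose ISAT table already covers this composition region),
                  falling back to the owner when no preference is known.
          """
          if strategy == "PLP":
              return owner_rank
          if strategy == "URAN":
              return random.randrange(n_ranks)
          if strategy == "PREF":
              return preferred_rank if preferred_rank is not None else owner_rank
          raise ValueError(f"unknown strategy {strategy!r}")

      # An adaptive strategy, as in the paper, would blend these rules at run time
      # based on measured load and table overlap; that logic is not sketched here.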

  13. Search systems and computer-implemented search methods

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2017-03-07

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets, and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  14. MUSE: computational aspects of a GUM supplement 1 implementation

    Science.gov (United States)

    Müller, Martin; Wolf, Marco; Rösslein, Matthias

    2008-10-01

    The new guideline GUM Supplement 1—Propagation of Distributions Using a Monte Carlo Method (GS1) is currently published by JCGM/WG1. It describes an approximate method to calculate the measurement uncertainty in nearly all areas of metrology. In this way it overcomes the various limitations and drawbacks of the uncertainty propagation detailed in GUM. However, GS1 demands a software implementation in contrast to the uncertainty propagation. Therefore we have developed a software tool called MUSE (Measurement Uncertainty Simulation and Evaluation), which is a comprehensive implementation of GS1. In this paper we present the major computational aspects of the software which are the sampling from probability density functions (PDFs), an efficient way to propagate the PDFs with the help of a block design through the equation of the measurand and the calculation of the summarizing parameters based on these blocks. Also the different quality measures which are in place during the life cycle of the tool are elaborated.
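
    A minimal sketch of the core GS1 computation the abstract refers to: draw samples from the input probability density functions, push them block by block through the equation of the measurand, and summarize the resulting distribution. The measurement model used here (Y = X1 / X2 with one Gaussian and one rectangular input) is invented purely for illustration; MUSE itself is far more general.

      import random
      import statistics

      def propagate(model, samplers, n_draws=50_000, block=5_000):
          """Monte Carlo propagation of distributions, GUM Supplement 1 style."""
          ys = []
          while len(ys) < n_draws:
              # One block of draws pushed through the equation of the measurand.
              ys.extend(model(*(draw() for draw in samplers)) for _ in range(block))
          ys.sort()
          mean = statistics.fmean(ys)                       # estimate of the measurand
          u = statistics.stdev(ys)                          # standard uncertainty
          low, high = ys[int(0.025 * len(ys))], ys[int(0.975 * len(ys))]
          return mean, u, (low, high)                       # 95 % coverage interval

      result = propagate(lambda x1, x2: x1 / x2,
                         [lambda: random.gauss(10.0, 0.1),      # Gaussian input PDF
                          lambda: random.uniform(1.9, 2.1)])    # rectangular input PDF
      print(result)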

  15. Implementing Computer-Based Procedures: Thinking Outside the Paper Margins

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; Bly, Aaron

    2017-06-01

    In the past year there has been increased interest from the nuclear industry in adopting the use of electronic work packages and computer-based procedures (CBPs) in the field. The goal is to incorporate the use of technology in order to meet the Nuclear Promise requirements of reducing costs, improving efficiency, and decreasing human error rates in plant operations. Researchers, together with the nuclear industry, have been investigating the benefits an electronic work package system and specifically CBPs would have over current paper-based procedure practices. There are several classifications of CBPs, ranging from a straight copy of the paper-based procedure in PDF format to a more intelligent dynamic CBP. A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping and correct component verification), and dynamic step presentation. The latter means that the CBP system displays only the steps relevant to the operating mode, plant status, and the task at hand. The improvements can reduce the worker’s workload and human error by allowing the worker to focus more on the task at hand. A team of human factors researchers at the Idaho National Laboratory studied and developed design concepts for CBPs for field workers between 2012 and 2016. The focus of the research was to present information in a procedure in a manner that leveraged the dynamic and computational capabilities of a handheld device, allowing the worker to focus more on the task at hand than on the administrative processes currently applied when conducting work in the plant. As a part of the research the team identified types of work, instructions, and scenarios where the transition to a dynamic CBP system might not be as beneficial as it would be for other types of work in the plant. In most cases the decision to use a dynamic CBP system and utilize the dynamic capabilities gained will be beneficial to the worker

  16. Computer arithmetic and validity theory, implementation, and applications

    CERN Document Server

    Kulisch, Ulrich

    2013-01-01

    This is the revised and extended second edition of the successful basic book on computer arithmetic. It is consistent with the newest recent standard developments in the field. The book shows how the arithmetic capability of the computer can be enhanced. The work is motivated by the desire and the need to improve the accuracy of numerical computing and to control the quality of the computed results (validity). The accuracy requirements for the elementary floating-point operations are extended to the customary product spaces of computations including interval spaces. The mathematical properties

  17. Implementation of QR up- and downdating on a massively parallel computer

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Hansen, Per Christian; Madsen, Kaj

    1995-01-01

    We describe an implementation of QR up- and downdating on a massively parallel computer (the Connection Machine CM-200) and show that the algorithm maps well onto the computer. In particular, we show how the use of corrected semi-normal equations for downdating can be efficiently implemented. We...

  18. An Exploratory Study of the Implementation of Computer Technology in an American Islamic Private School

    Science.gov (United States)

    Saleem, Mohammed M.

    2009-01-01

    This exploratory study of the implementation of computer technology in an American Islamic private school leveraged the case study methodology and ethnographic methods informed by symbolic interactionism and the framework of the Muslim Diaspora. The study focused on describing the implementation of computer technology and identifying the…

  19. Computer Implementation of the Two-Factor DP Model for ...

    African Journals Online (AJOL)

    A computer program known as Program Simplex which takes advantage of this sparseness has been applied to obtain an optimal solution to the manpower planning problem presented. It has also been observed that LP models with few nonzero coefficients can easily be solved by using a computer to obtain an optimal ...

  20. Secure Cloud Computing Implementation Study For Singapore Military Operations

    Science.gov (United States)

    2016-09-01

    Indexed snippets only: a table of cloud computing benefits in healthcare, adapted from [13] (clinical research, electronic medical records, collaboration solutions); "...medium to send orders to tactical action units, the cloud should also contain a feature to verify that the action units have received and understood the..."; and title-page text (Naval Postgraduate School, Monterey, California; thesis approved for public release, distribution is unlimited).

  1. Implementing Handheld Computers as Tools for First-Grade Writers

    Science.gov (United States)

    Kuhlman, Wilma D.; Danielson, Kathy Everts; Campbell, Elizabeth J.; Topp, Neal W.

    2006-01-01

    All humans use objects in their environment as tools for actions. Some tools are more useful than others for certain people and populations. This paper describes how different first-graders used handheld computers as tools when writing. While all 17 children in the observed classroom were competent users of their handheld computers, their use of…

  2. Projecting Grammatical Features in Nominals: Cognitive Processing Theory & Computational Implementation

    Science.gov (United States)

    2010-03-01

    functionality and plausibility distinguishes this research from most research in computational linguistics and computational psycholinguistics. ... Psycholinguistic Theory: There is extensive psycholinguistic evidence that human language processing is essentially incremental and interactive. ... challenges of psycholinguistic research is to explain how humans can process language effortlessly and accurately given the complexity and ambiguity that is

  3. Implementation of Keystroke Dynamics for Authentication in Computer Systems

    Directory of Open Access Journals (Sweden)

    S. V. Skuratov

    2010-06-01

    Implementation of keystroke dynamics in multifactor authentication systems is described in the article. An original access control system based on a totality of matchers is presented. Testing results and useful recommendations are also adduced.

  4. Implementation of Cloud Computing into VoIP

    Directory of Open Access Journals (Sweden)

    Floriana GEREA

    2012-08-01

    This article defines Cloud Computing and highlights key concepts, the benefits of using virtualization, its weaknesses, and ways of combining it with classical VoIP technologies applied to large-scale businesses. The analysis takes into consideration management strategies and resources for better customer orientation and risk management, all for sustaining the Service Level Agreement (SLA). An important issue in cloud computing is security, and for this reason several security solutions are presented.

  5. Naval Computer-Based Instruction: Cost, Implementation and Effectiveness Issues.

    Science.gov (United States)

    1988-03-01

    become applicable technology and begin to be accepted on the market is between twenty-five and thirty-five years." (Drucker, P.F., 1985, p. 110) For...Daniel J., "Microcomputer Videogame Based Training." Educational Technology, Vol. 24, No. 2, February 1984. Galagan, Patricia, "Computers and Training...Computer-Based Training", Training and Development Journal, Vol. 38, No. 7, July 1984. Mascioni, Michael, "CD-I in the Business Market." CD-ROM Review

  6. 76 FR 52353 - Assumption Buster Workshop: “Current Implementations of Cloud Computing Indicate a New Approach...

    Science.gov (United States)

    2011-08-22

    ... Assumption Buster Workshop: "Current Implementations of Cloud Computing Indicate a New Approach to Security" ... "Current implementations of cloud computing indicate a new approach to security." Implementations of cloud computing have provided new ways of thinking about how to secure data and computation. Cloud is a platform...

  7. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.
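
    A hedged sketch of the steering idea only: answer parameter queries from a cheap surrogate built offline from precomputed runs instead of calling the expensive simulation interactively. This toy version uses nearest-neighbour lookup on a regular grid; the paper's surrogate is based on sparse grids with distributed, accelerator-backed evaluation, none of which is shown here.

      import itertools

      def expensive_simulation(a, b):
          # Stand-in for the time-consuming simulation (illustrative formula).
          return (a - 1.0) ** 2 + 10.0 * (b - a ** 2) ** 2

      # Offline phase: evaluate the simulation on a coarse grid of the two parameters.
      grid_a = [i / 10 for i in range(21)]     # a in [0, 2]
      grid_b = [i / 10 for i in range(21)]     # b in [0, 2]
      table = {(a, b): expensive_simulation(a, b)
               for a, b in itertools.product(grid_a, grid_b)}

      def surrogate(a, b):
          # Online phase: a steering query is answered from the nearest precomputed point.
          key = (min(grid_a, key=lambda g: abs(g - a)),
                 min(grid_b, key=lambda g: abs(g - b)))
          return table[key]

      print(surrogate(0.73, 1.18))   # immediate approximate response for the steered parameters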

  8. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hendrickson, Bruce [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wade, Doug [National Nuclear Security Administration (NNSA), Washington, DC (United States). Office of Advanced Simulation and Computing and Institutional Research and Development; Hoang, Thuc [National Nuclear Security Administration (NNSA), Washington, DC (United States). Computational Systems and Software Environment

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  9. Computer based learning in general practice--options and implementation.

    Science.gov (United States)

    Mills, K A; McGlade, K

    1992-01-01

    A survey of the 30 departments of general practice in the UK revealed that only three are currently making use of any form of computer based learning materials for teaching their undergraduate students. One of the reasons for the low level of usage is likely to be the relatively poor availability of suitable courseware and lack of guidance as to how to utilise what is available. This short paper describes the types of courseware that are available and the advantages and disadvantages of using acquired courseware as opposed to writing your own. It also considers alternative strategies for making computer based learning (CBL) courseware available to students.

  10. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  11. A Placement Test for Computer Science: Design, Implementation, and Analysis

    Science.gov (United States)

    Nugent, Gwen; Soh, Leen-Kiat; Samal, Ashok; Lang, Jeff

    2006-01-01

    An introductory CS1 course presents problems for educators and students due to students' diverse background in programming knowledge and exposure. Students who enroll in CS1 also have different expectations and motivations. Prompted by the curricular guidelines for undergraduate programmes in computer science released in 2001 by the ACM/IEEE, and…

  12. Implementation of Computer Assisted Assessment: Lessons from the Literature

    Science.gov (United States)

    Sim, Gavin; Holifield, Phil; Brown, Martin

    2004-01-01

    This paper draws attention to literature surrounding the subject of computer-assisted assessment (CAA). A brief overview of traditional methods of assessment is presented, highlighting areas of concern in existing techniques. CAA is then defined, and instances of its introduction in various educational spheres are identified, with the main focus…

  13. Public policy and regulatory implications for the implementation of Opportunistic Cloud Computing Services for Enterprises

    OpenAIRE

    Kuada, Eric; Olesen, Henning; Henten, Anders

    2012-01-01

    Opportunistic Cloud Computing Services (OCCS) is a social network approach to the provisioning and management of cloud computing services for enterprises. This paper discusses how public policy and regulations will impact on OCCS implementation. We rely on documented publicly available government and corporate policies on the adoption of cloud computing services and deduce the impact of these policies on their adoption of opportunistic cloud computing services. We conclude that there are regu...

  14. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  15. Cloud Computing Implementation Organizational Success in the Department of Defense

    Science.gov (United States)

    2014-03-27

    models is not apparent, understanding the theory is germane to the research. As Pfeffer (1982) postulates, an individual "achieves membership and...and implement powerful guidance/standards to force the issue. Despite numerous senior leader decrees, directives and orders, organizations still think... Understanding this research and its ramifications, if the models are executed, inevitably leads an organization to investigate the "best practices" for

  16. Essentials of interactive computer graphics concepts and implementation

    CERN Document Server

    Sung, Kelvin; Baer, Steven

    2008-01-01

    This undergraduate-level computer graphics text provides the reader with conceptual and practical insights into how to approach building a majority of the interactive graphics applications they encounter daily. As each topic is introduced, students are guided in developing a software library that will support fast prototyping of moderately complex applications using a variety of APIs, including OpenGL and DirectX.

  17. REXOR Rotorcraft Simulation Model. Volume 2. Computer Implementation

    Science.gov (United States)

    1976-07-01

    commands rather than trim error balance sources. The control system can operate with a number of different configurations. A hard swashplate and...flexible swashplate - external control gyro configurations are computed directly in the subroutine FLY. For the isolated internal gyro system (Lockheed...meaning. The QFG array is assembled from blade data (also in the F array mentioned), data from LOADS and swashplate loads. The latter are developed within

  18. Public policy and regulatory implications for the implementation of Opportunistic Cloud Computing Services for Enterprises

    DEFF Research Database (Denmark)

    Kuada, Eric; Olesen, Henning; Henten, Anders

    2012-01-01

    Opportunistic Cloud Computing Services (OCCS) is a social network approach to the provisioning and management of cloud computing services for enterprises. This paper discusses how public policy and regulations will impact on OCCS implementation. We rely on documented publicly available government and corporate policies on the adoption of cloud computing services and deduce the impact of these policies on their adoption of opportunistic cloud computing services. We conclude that there are regulatory challenges on data protection that raise issues for cloud computing adoption in general, and that the lack of a single globally accepted data protection standard poses some challenges for very successful implementation of OCCS for companies. However, the direction of current public and corporate policies on cloud computing makes a good case for them to try out opportunistic cloud computing services.

  19. Implementation of Fog Computing for Reliable E-Health Applications

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Mihaylov, Mihail Rumenov

    2015-01-01

    This paper addresses the current technical challenge of an impedance mismatch between the requirements of smart connected object applications within the sensing environment and the characteristics of today’s cloud infrastructure. This research work investigates the possibility to offload cloud tasks, such as storage and data signal processing, to the edge of the network, thus decreasing the latency associated with performing those tasks within the cloud. The research scenario is an e-Health laboratory implementation where the real-time processing is performed by the home PC, while the extracted metadata is sent to the cloud for further processing.

  20. Prolog as description and implementation language in computer science teaching

    DEFF Research Database (Denmark)

    Christiansen, Henning

    Prolog is a powerful pedagogical instrument for theoretical elements of computer science when used as combined description language and experimentation tool. A teaching methodology based on this principle has been developed and successfully applied in a context with a heterogeneous student ... be extended in straightforward ways into tools such as analyzers, tracers and debuggers. Experience shows a high learning curve, especially when the principles are complemented with a learning-by-doing approach having the students develop such descriptions themselves from an informal introduction.

  1. Three-Dimensional Field-Scale Coupled Thermo-Hydro-Mechanical Modeling: Parallel Computing Implementation

    OpenAIRE

    Vardon, Philip James; Cleall, Peter John; Thomas, Hywel Rhys; Philp, Roger Norman; Banicescu, Ioana

    2011-01-01

    An approach for the simulation of three-dimensional field-scale coupled thermo-hydro-mechanical problems is presented, including the implementation of parallel computation algorithms. The approach is designed to allow three-dimensional large-scale coupled simulations to be undertaken in reduced time. Owing to progress in computer technology, existing parallel implementations have been found to be ineffective, with the time taken for communication dominating any reduction in time gained by spl...

  2. Computing tools for implementing standards for single-case designs.

    Science.gov (United States)

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  3. Design and Implementation of a Computational Lexicon for Turkish

    CERN Document Server

    Yorulmaz, A K

    1999-01-01

    All natural language processing systems (such as parsers, generators, taggers) need to have access to a lexicon about the words in the language. This thesis presents a lexicon architecture for natural language processing in Turkish. Given a query form consisting of a surface form and other features acting as restrictions, the lexicon produces feature structures containing morphosyntactic, syntactic, and semantic information for all possible interpretations of the surface form satisfying those restrictions. The lexicon is based on contemporary approaches like feature-based representation, inheritance, and unification. It makes use of two information sources: a morphological processor and a lexical database containing all the open and closed-class words of Turkish. The system has been implemented in SICStus Prolog as a standalone module for use in natural language processing applications.
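
    The lexicon itself is written in SICStus Prolog; purely to illustrate the feature-based representation and unification the abstract refers to, the Python sketch below unifies two flat feature structures represented as dictionaries. This is not the thesis code, and the features and values shown are invented.

        # Illustrative sketch: unification of flat feature structures (dicts).
        # A query acts as a restriction; unification fails on conflicting values.

        def unify(fs1, fs2):
            """Return the merged feature structure, or None if the two conflict."""
            result = dict(fs1)
            for feature, value in fs2.items():
                if feature in result and result[feature] != value:
                    return None                 # conflicting values: unification fails
                result[feature] = value
            return result

        if __name__ == "__main__":
            lexical_entry = {"surface": "evde", "cat": "noun", "case": "locative"}  # invented entry
            query = {"surface": "evde", "cat": "noun"}
            print(unify(lexical_entry, query))               # restrictions satisfied: merged structure
            print(unify(lexical_entry, {"case": "dative"}))  # conflict: None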

  4. FSL-based Hardware Implementation for Parallel Computation of cDNA Microarray Image Segmentation

    OpenAIRE

    Bogdan Bot; Simina Emerich; Sorin Martoiu; Bogdan Belean

    2015-01-01

    The present paper proposes FPGA-based hardware implementations for microarray image processing algorithms in order to eliminate the shortcomings of the existing software platforms: user intervention, increased computation time and cost. The proposed image processing algorithms exclude user intervention from processing. An application-specific architecture is designed to parallelize the microarray image processing algorithms in order to speed up computation. Hardware architectures for logar...

  5. Understanding underspecification: A comparison of two computational implementations.

    Science.gov (United States)

    Logačev, Pavel; Vasishth, Shravan

    2016-01-01

    Swets et al. (2008. Underspecification of syntactic ambiguities: Evidence from self-paced reading. Memory and Cognition, 36(1), 201-216) presented evidence that the so-called ambiguity advantage [Traxler et al. (1998). Adjunct attachment is not a form of lexical ambiguity resolution. Journal of Memory and Language, 39(4), 558-592], which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task-demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment: when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy. It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behaviour may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.'s data; as a baseline, we also fitted three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM model provides a better fit than the PSM. We interpret the results as lack of unambiguous evidence in favour of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for NSM over PSM. More generally, our work provides a method for choosing between

  6. Short-term effects of implemented high intensity shoulder elevation during computer work

    OpenAIRE

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal; Olsen, Henrik B; Søgaard, Karen; Holtermann, Andreas

    2009-01-01

    Abstract Background Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during c...

  7. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  8. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Kindratenko

    2009-01-01

    Full Text Available We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× as compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
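
    The record does not include the authors' code. As a rough, CPU-only illustration of the kernel that the reconfigurable processors accelerate, the sketch below histograms the angular separations between pairs of catalog points; the bin edges and the mock catalog are invented.

        # Brute-force sketch of the pair-counting kernel behind the two-point
        # angular correlation function: histogram of angular separations.
        import numpy as np

        def pair_count(ra_deg, dec_deg, bin_edges_deg):
            """Count point pairs per angular-separation bin (O(n^2) reference version)."""
            ra = np.radians(ra_deg)
            dec = np.radians(dec_deg)
            # unit vectors on the sphere
            xyz = np.stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)], axis=1)
            cos_sep = np.clip(xyz @ xyz.T, -1.0, 1.0)
            sep = np.degrees(np.arccos(cos_sep))
            iu = np.triu_indices(len(ra), k=1)          # unique pairs only
            counts, _ = np.histogram(sep[iu], bins=bin_edges_deg)
            return counts

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            ra, dec = rng.uniform(0, 10, 500), rng.uniform(-5, 5, 500)   # mock catalog
            print(pair_count(ra, dec, np.linspace(0.0, 5.0, 11)))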

  9. Short-term effects of implemented high intensity shoulder elevation during computer work.

    Science.gov (United States)

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal; Olsen, Henrik B; Søgaard, Karen; Holtermann, Andreas

    2009-08-10

    Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder elevation. RPE was reported, productivity (drawings per min) measured, and bipolar surface electromyography (EMG) recorded from the dominant upper trapezius during pauses and sessions of computer work. Repeated measure ANOVA with Bonferroni corrected post-hoc tests was applied for the statistical analyses. The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius part during the subsequent pause from computer work. Since a preceding high intensity shoulder elevation did not impose a negative impact on perceived effort, productivity or upper trapezius activity during computer work, implementation of high intensity contraction during computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a pause with preceding high intensity contraction requires further investigation before high ...

  10. Short-term effects of implemented high intensity shoulder elevation during computer work

    Directory of Open Access Journals (Sweden)

    Madeleine Pascal

    2009-08-01

    Full Text Available Abstract. Background: Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. Methods: 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder elevation. RPE was reported, productivity (drawings per min) measured, and bipolar surface electromyography (EMG) recorded from the dominant upper trapezius during pauses and sessions of computer work. Repeated measure ANOVA with Bonferroni corrected post-hoc tests was applied for the statistical analyses. Results: The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius part during the subsequent pause from computer work. Conclusion: Since a preceding high intensity shoulder elevation did not impose a negative impact on perceived effort, productivity or upper trapezius activity during computer work, implementation of high intensity contraction during computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a ...

  11. Analysis and selection of optimal function implementations in massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
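
    The patent describes the idea only at a high level. The following sketch (not the patented code) shows the basic pattern: time a set of candidate implementations over varying input sizes, record the best one per size, and generate a selector that dispatches accordingly; the candidate functions are invented stand-ins.

        # Sketch: collect timing data for several implementations of a function
        # across input sizes, then build a selector keyed on the input parameter.
        import timeit

        def sum_loop(xs):            # candidate implementation 1
            total = 0
            for x in xs:
                total += x
            return total

        def sum_builtin(xs):         # candidate implementation 2
            return sum(xs)

        def collect_performance(implementations, sizes, repeats=5):
            best = {}
            for n in sizes:
                xs = list(range(n))
                timings = {f.__name__: min(timeit.repeat(lambda: f(xs), number=100,
                                                         repeat=repeats))
                           for f in implementations}
                best[n] = min(timings, key=timings.get)   # fastest implementation for size n
            return best

        def make_selector(perf_data, implementations):
            by_name = {f.__name__: f for f in implementations}
            def selector(xs):
                # pick the implementation measured best for the nearest benchmarked size
                nearest = min(perf_data, key=lambda n: abs(n - len(xs)))
                return by_name[perf_data[nearest]](xs)
            return selector

        if __name__ == "__main__":
            impls = [sum_loop, sum_builtin]
            perf = collect_performance(impls, sizes=[10, 1000, 100000])
            fast_sum = make_selector(perf, impls)
            print(perf, fast_sum(list(range(5000))))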

  12. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    Science.gov (United States)

    Wigton, Larry

    1996-01-01

    Improving the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR is reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models is written. The primary focus of this work was devoted to improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.

  13. Patent law for computer scientists steps to protect computer-implemented inventions

    CERN Document Server

    Closa, Daniel; Giemsa, Falk; Machek, Jörg

    2010-01-01

    Written from over 70 years of experience, this overview explains patent laws across Europe, the US and Japan, and teaches readers how to think from a patent examiner's perspective. Over 10 detailed case studies are presented from different computer science applications.

  14. Computed tomography coronary angiography with heart rate control premedication: a best practice implementation project.

    Science.gov (United States)

    Mander, Gordon Thomas Waterland

    2017-07-01

    Computed tomography coronary angiography patient preparation with heart rate control premedication is employed in departments across Australia. However, the methods of administration vary widely between institutions and do not always follow best practice. The aim of the study was to identify and promote best practice in the administration of heart rate premedication in computed tomography coronary angiography at a regional hospital in Australia. The Joanna Briggs Institute has validated audit and feedback tools to assist with best practice implementation projects. This project used these tools, which involve three phases of activity: a pre-implementation audit, reflecting on results and implementing strategies to address non-compliance, and a post-implementation audit to assess the outcomes. A baseline audit identified non-compliance in the majority of measured audit criteria. Following implementation of an institution-specific guideline and associated worksheet, improved compliance was shown across all audit criteria. Following the development and implementation of institution-specific evidence-based resources relating to heart rate control in computed tomography coronary angiography, a high level of compliance consistent with best practice was achieved.

  15. Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA.

    Science.gov (United States)

    Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B; Day, Philip J; King, Ross D

    2017-03-01

    The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problem in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rules can be implemented non-deterministically. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for to quote Richard Feynman 'there's plenty of room at the bottom'. This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world ...
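
    The physical design itself uses DNA, but its computational core is non-deterministic Thue string rewriting. Purely as a conceptual sketch, with no connection to the wet-lab protocol, the code below enumerates the strings reachable from a start string under a small invented rule set; this branching is what the DNA replication explores in parallel.

        # Conceptual sketch: breadth-first exploration of a Thue string-rewriting
        # system. Each rule (lhs, rhs) may be applied at any position, so one string
        # can branch into many; DNA replication explores such branches in parallel.
        from collections import deque

        def rewrite_steps(s, rules):
            """All strings reachable from s in exactly one rewrite."""
            out = set()
            for lhs, rhs in rules:
                start = s.find(lhs)
                while start != -1:
                    out.add(s[:start] + rhs + s[start + len(lhs):])
                    start = s.find(lhs, start + 1)
            return out

        def reachable(start, rules, max_steps):
            """Breadth-first closure of the rewriting relation up to max_steps."""
            seen, frontier = {start}, deque([(start, 0)])
            while frontier:
                s, depth = frontier.popleft()
                if depth == max_steps:
                    continue
                for t in rewrite_steps(s, rules):
                    if t not in seen:
                        seen.add(t)
                        frontier.append((t, depth + 1))
            return seen

        if __name__ == "__main__":
            rules = [("ab", "ba"), ("ba", "ab")]      # invented, symmetric Thue rules
            print(sorted(reachable("aabb", rules, max_steps=4)))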

  16. Design and Implement of Astronomical Cloud Computing Environment In China-VO

    Science.gov (United States)

    Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu

    2017-06-01

    The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transportation.

  17. Computational implementation of the multi-mechanism deformation coupled fracture model for salt

    Energy Technology Data Exchange (ETDEWEB)

    Koteras, J.R.; Munson, D.E.

    1996-05-01

    The Multi-Mechanism Deformation (M-D) model for creep in rock salt has been used in three-dimensional computations for the Waste Isolation Pilot Plant (WIPP), a potential waste repository. These computational studies are relied upon to make key predictions about long-term behavior of the repository. Recently, the M-D model was extended to include creep-induced damage. The extended model, the Multi-Mechanism Deformation Coupled Fracture (MDCF) model, is considerably more complicated than the M-D model and required a different technology from that of the M-D model for a computational implementation.

  18. Implementation fidelity of a computer-assisted intervention for children with speech sound disorders.

    Science.gov (United States)

    McCormack, Jane; Baker, Elise; Masso, Sarah; Crowe, Kathryn; McLeod, Sharynne; Wren, Yvonne; Roulstone, Sue

    2017-06-01

    Implementation fidelity refers to the degree to which an intervention or programme adheres to its original design. This paper examines implementation fidelity in the Sound Start Study, a clustered randomised controlled trial of computer-assisted support for children with speech sound disorders (SSD). Sixty-three children with SSD in 19 early childhood centres received computer-assisted support (Phoneme Factory Sound Sorter [PFSS] - Australian version). Educators facilitated the delivery of PFSS targeting phonological error patterns identified by a speech-language pathologist. Implementation data were gathered via (1) the computer software, which recorded when and how much intervention was completed over 9 weeks; (2) educators' records of practice sessions; and (3) scoring of fidelity (intervention procedure, competence and quality of delivery) from videos of intervention sessions. Less than one-third of children received the prescribed number of days of intervention, while approximately one-half participated in the prescribed number of intervention plays. Computer data differed from educators' data for total number of days and plays in which children participated; the degree of match was lower as data became more specific. Fidelity to intervention procedures, competency and quality of delivery was high. Implementation fidelity may impact intervention outcomes and so needs to be measured in intervention research; however, the way in which it is measured may impact on data.

  19. New Media Resistance: Barriers to Implementation of Computer Video Games in the Classroom

    Science.gov (United States)

    Rice, John W.

    2007-01-01

    Computer video games are an emerging instructional medium offering strong degrees of cognitive efficiencies for experiential learning, team building, and greater understanding of abstract concepts. As with other new media adopted for use by instructional technologists for pedagogical purposes, barriers to classroom implementation have manifested…

  20. 76 FR 36986 - Export Controls for High Performance Computers: Wassenaar Arrangement Agreement Implementation...

    Science.gov (United States)

    2011-06-24

    Bureau of Industry and Security, 15 CFR Parts 734, 740, 743 and 774, RIN 0694-AF15: Export Controls for High Performance Computers: Wassenaar Arrangement Agreement Implementation for ECCN 4A003 and Revisions to License ... understanding of the risks associated with the transfers of these items. For more information on the Wassenaar ...

  1. Computational implementation of a systems prioritization methodology for the Waste Isolation Pilot Plant: A preliminary example

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States). Dept. of Mathematics; Anderson, D.R. [Sandia National Labs., Albuquerque, NM (United States). WIPP Performance Assessments Departments; Baker, B.L. [Technadyne Engineering Consultants, Albuquerque, NM (United States)] [and others

    1996-04-01

    A systems prioritization methodology (SPM) is under development to provide guidance to the US DOE on experimental programs and design modifications to be supported in the development of a successful licensing application for the Waste Isolation Pilot Plant (WIPP) for the geologic disposal of transuranic (TRU) waste. The purpose of the SPM is to determine the probabilities that the implementation of different combinations of experimental programs and design modifications, referred to as activity sets, will lead to compliance. Appropriate tradeoffs between compliance probability, implementation cost and implementation time can then be made in the selection of the activity set to be supported in the development of a licensing application. Descriptions are given for the conceptual structure of the SPM and the manner in which this structure determines the computational implementation of an example SPM application. Due to the sophisticated structure of the SPM and the computational demands of many of its components, the overall computational structure must be organized carefully to provide the compliance probabilities for the large number of activity sets under consideration at an acceptable computational cost. Conceptually, the determination of each compliance probability is equivalent to a large numerical integration problem. 96 refs., 31 figs., 36 tabs.
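
    The abstract notes that each compliance probability is conceptually a large numerical integration. As a greatly simplified, schematic illustration of that statement only, the sketch below estimates such a probability by Monte Carlo sampling of uncertain inputs through an invented performance function against an invented limit; none of the distributions or names come from the SPM.

        # Schematic Monte Carlo estimate of a compliance probability:
        # P(performance measure <= regulatory limit) under input uncertainty.
        # The performance function and distributions are invented placeholders.
        import numpy as np

        def performance_measure(permeability, solubility):
            # stand-in for an expensive system model evaluated per sample
            return 1e3 * permeability * np.sqrt(solubility)

        def compliance_probability(n_samples=100_000, limit=1.0, seed=0):
            rng = np.random.default_rng(seed)
            permeability = rng.lognormal(mean=-8.0, sigma=1.0, size=n_samples)
            solubility = rng.uniform(1e-6, 1e-3, size=n_samples)
            releases = performance_measure(permeability, solubility)
            return np.mean(releases <= limit)     # fraction of samples that comply

        if __name__ == "__main__":
            print(f"Estimated compliance probability: {compliance_probability():.3f}")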

  2. Supporting Struggling Writers with Class-Wide Teacher Implementation of a Computer-Based Graphic Organizer

    Science.gov (United States)

    Regan, Kelley; Evmenova, Anya S.; Boykin, Andrea; Sacco, Donna; Good, Kevin; Ahn, Soo Y.; MacVittie, Nichole; Hughes, Melissa D.

    2017-01-01

    Following professional development, 4 teachers implemented instructional lessons designed to improve the written expression of 6th- and 7th-grade struggling writers in inclusive, self-contained, and co-taught classrooms. A multiple-baseline study investigated the effects of a computer-based graphic organizer (CBGO) with embedded self-regulated…

  3. How to Implement Rigorous Computer Science Education in K-12 Schools? Some Answers and Many Questions

    Science.gov (United States)

    Hubwieser, Peter; Armoni, Michal; Giannakos, Michail N.

    2015-01-01

    Aiming to collect various concepts, approaches, and strategies for improving computer science education in K-12 schools, we edited this second special issue of the "ACM TOCE" journal. Our intention was to collect a set of case studies from different countries that would describe all relevant aspects of specific implementations of…

  4. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    Science.gov (United States)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  5. The computational implementation of the landscape model: modeling inferential processes and memory representations of text comprehension.

    Science.gov (United States)

    Tzeng, Yuhtsuen; van den Broek, Paul; Kendeou, Panayiota; Lee, Chengyuan

    2005-05-01

    The complexity of text comprehension demands a computational approach to describe the cognitive processes involved. In this article, we present the computational implementation of the landscape model of reading. This model captures both on-line comprehension processes during reading and the off-line memory representation after reading is completed, incorporating both memory-based and coherence-based mechanisms of comprehension. The overall architecture and specific parameters of the program are described, and a running example is provided. Several studies comparing computational and behavioral data indicate that the implemented model is able to account for cycle-by-cycle comprehension processes and memory for a variety of text types and reading situations.
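
    The authors' program is not reproduced in the record. The sketch below is only a schematic of the cycle-by-cycle bookkeeping such a model performs: concept activations decay across reading cycles, concepts in the current cycle are reactivated, and co-activation accumulates connection strength in a memory matrix. The parameters and the mini-text are invented.

        # Schematic landscape-style bookkeeping: per-cycle concept activations decay,
        # currently read concepts are reactivated, and an episodic memory matrix
        # accumulates connection strength between co-activated concepts.
        import numpy as np

        def run_cycles(cycles, concepts, decay=0.5, learning_rate=0.1):
            index = {c: i for i, c in enumerate(concepts)}
            activation = np.zeros(len(concepts))
            memory = np.zeros((len(concepts), len(concepts)))
            history = []
            for cycle in cycles:
                activation *= decay                         # carry-over with decay
                for c in cycle:
                    activation[index[c]] = 1.0              # concepts read in this cycle
                memory += learning_rate * np.outer(activation, activation)
                history.append(activation.copy())
            return np.array(history), memory

        if __name__ == "__main__":
            concepts = ["knight", "dragon", "princess", "castle"]     # invented mini-text
            cycles = [["knight", "dragon"], ["dragon", "princess"], ["knight", "castle"]]
            history, memory = run_cycles(cycles, concepts)
            print(history.round(2))
            print(memory.round(2))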

  6. The efficient implementation of correction procedure via reconstruction with GPU computing

    Science.gov (United States)

    Zimmerman, Ben J.

    Computational fluid dynamics (CFD) has long been a useful tool to model fluid flow problems across many engineering disciplines, and while problem size, complexity, and difficulty continue to expand, the demands for robustness and accuracy grow. Furthermore, generating high-order accurate solutions has escalated the required computational resources, and as problems continue to increase in complexity, so will computational needs such as memory requirements and calculation time for accurate flow field prediction. To improve upon computational time, vast amounts of computational power and resources are employed, but even over dozens to hundreds of central processing units (CPUs), the required computational time to formulate solutions can be weeks, months, or longer, which is particularly true when generating high-order accurate solutions over large computational domains. One response to lower the computational time for CFD problems is to implement graphical processing units (GPUs) with current CFD solvers. GPUs have illustrated the ability to solve problems orders of magnitude faster than their CPU counterparts with identical accuracy. The goal of the presented work is to combine a CFD solver and GPU computing with the intent to solve complex problems at a high order of accuracy while lowering the computational time required to generate the solution. The CFD solver should have high-order spatial capabilities to evaluate small fluctuations and fluid structures not generally captured by lower-order methods and be efficient for the GPU architecture. This research combines the high-order Correction Procedure via Reconstruction (CPR) method with compute unified device architecture (CUDA) from NVIDIA to reach these goals. In addition, the study demonstrates accuracy of the developed solver by comparing results with other solvers and exact solutions. Solving CFD problems accurately and quickly are two factors to consider for the next generation of solvers. GPU computing is a ...

  7. Implementation and complexity of the watershed-from-markers algorithm computed as a minimal cost forest

    CERN Document Server

    Felkel, P; Wegenkittl, R; Felkel, Petr; Bruckwschwaiger, Mario; Wegenkittl, Rainer

    2001-01-01

    The watershed algorithm belongs to the classical algorithms of mathematical morphology. Lotufo et al. published a principle of watershed computation by means of an iterative forest transform (IFT), which computes a shortest path forest from given markers. The algorithm itself was described for the 2D case (images) without a detailed discussion of its computation and memory demands for real datasets. As the IFT cleverly solves the problem of plateaus and gives precise results when thin objects have to be segmented, it is natural to apply this algorithm to 3D datasets, keeping in mind the need to minimize the higher memory consumption of the 3D case without losing the low asymptotic time complexity of O(m+C) (and also the real computation speed). The main goal of this paper is an implementation of the IFT algorithm with a priority queue with buckets and careful tuning of this implementation to reach as low a memory consumption as possible. The paper presents five possible modifications and methods of implementation of...
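
    Independently of the paper's tuned implementation and bucket-based priority queue, the sketch below makes the shortest-path-forest idea concrete on a small 2D image: labels propagate from markers along paths that minimize the maximum gray value, using a binary heap for simplicity.

        # Sketch: watershed-from-markers as a shortest-path forest (IFT idea) on a
        # 2D gray-level image, using heapq instead of the paper's bucket queue.
        import heapq
        import numpy as np

        def watershed_from_markers(image, markers):
            """image: 2D array of gray values; markers: dict {(row, col): label}."""
            rows, cols = image.shape
            label = np.zeros((rows, cols), dtype=int)           # 0 = unlabeled
            cost = np.full((rows, cols), np.inf)
            heap = []
            for (r, c), lab in markers.items():
                cost[r, c] = image[r, c]
                label[r, c] = lab
                heapq.heappush(heap, (cost[r, c], r, c, lab))
            while heap:
                c0, r, c, lab = heapq.heappop(heap)
                if c0 > cost[r, c]:
                    continue                                    # stale queue entry
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        new_cost = max(c0, image[nr, nc])       # max-arc path cost
                        if new_cost < cost[nr, nc]:
                            cost[nr, nc] = new_cost
                            label[nr, nc] = lab
                            heapq.heappush(heap, (new_cost, nr, nc, lab))
            return label

        if __name__ == "__main__":
            img = np.array([[1, 1, 5, 1, 1],
                            [1, 2, 6, 2, 1],
                            [1, 1, 5, 1, 1]])
            print(watershed_from_markers(img, {(1, 0): 1, (1, 4): 2}))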

  8. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  9. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  10. Implementing the UCSD PASCAL system on the MODCOMP computer. [deep space network

    Science.gov (United States)

    Wolfe, T.

    1980-01-01

    The implementation of an interactive software development system (UCSD PASCAL) on the MODCOMP computer is discussed. The development of an interpreter for the MODCOMP II and the MODCOMP IV computers, written in MODCOMP II assembly language, is described. The complete Pascal programming system was run successfully on a MODCOMP II and MODCOMP IV under both the MAX II/III and MAX IV operating systems. The source code for an 8080 microcomputer version of the interpreter was used as the design for the MODCOMP interpreter. A mapping of the functions within the 8080 interpreter into MODCOMP II assembly language was the method used to code the interpreter.

  11. Portable tongue-supported human computer interaction system design and implementation.

    Science.gov (United States)

    Quain, Rohan; Khan, Masood Mehmood

    2014-01-01

    Tongue supported human-computer interaction (TSHCI) systems can help critically ill patients interact with both computers and people. These systems can be particularly useful for patients suffering injuries above C7 on their spinal vertebrae. Despite recent successes in their application, several limitations restrict performance of existing TSHCI systems and discourage their use in real life situations. This paper proposes a low-cost, less-intrusive, portable and easy to use design for implementing a TSHCI system. Two applications of the proposed system are reported. Design considerations and performance of the proposed system are also presented.

  12. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture for two-dimensional signals

    Science.gov (United States)

    Huang, Norden E. (Inventor)

    2001-01-01

    A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated presentation techniques of the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF's) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMF's of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert transformed IMF's of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMF's by, for example, filtering the two-dimensional signal by reconstructing the two-dimensional signal from selected IMF(s).
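
    The patent describes the decomposition verbally. Schematically, and not as the patented procedure, the sketch below performs one sifting pass of an empirical mode decomposition on a 1D profile: local extrema are located, upper and lower spline envelopes are built, and their mean is subtracted.

        # Schematic single sifting pass of empirical mode decomposition on a 1D
        # profile: spline envelopes through local maxima/minima, subtract their mean.
        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        def sift_once(t, x):
            maxima = argrelextrema(x, np.greater)[0]
            minima = argrelextrema(x, np.less)[0]
            if len(maxima) < 2 or len(minima) < 2:
                return None                          # too few extrema to build envelopes
            upper = CubicSpline(t[maxima], x[maxima])(t)
            lower = CubicSpline(t[minima], x[minima])(t)
            return x - 0.5 * (upper + lower)         # candidate intrinsic mode function

        if __name__ == "__main__":
            t = np.linspace(0, 1, 1000)
            x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
            h = sift_once(t, x)
            print(None if h is None else h[:5])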

  13. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    Science.gov (United States)

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.

  14. Computer simulations in teaching physics: Development and implementation of a hypermedia system for high school teachers

    Science.gov (United States)

    da Silva, A. M. R.; de Macêdo, J. A.

    2016-06-01

    Motivated by recent technological advances and by the difficulty students have in learning physics, this article describes the process of developing and implementing a hypermedia system for high school teachers, involving computer simulations for teaching basic concepts of electromagnetism using free tools. With the completion and publication of the project, students and teachers gain a new way to interact with the technology in the classroom and in labs.

  15. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes.

  16. Hardware and Software Implementations of Prim’s Algorithm for Efficient Minimum Spanning Tree Computation

    OpenAIRE

    Mariano, Artur; Lee, Dongwook; Gerstlauer, Andreas; Chiou, Derek

    2013-01-01

    Minimum spanning tree (MST) problems play an important role in many networking applications, such as routing and network planning. In many cases, such as wireless ad-hoc networks, this requires efficient high-performance and low-power implementations that can run at regular intervals in real time on embedded platforms. In this paper, we study custom software and hardware realizations of one common algorithm for MST computations, Prim's alg...
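
    The record's full text is truncated; as a plain software reference for the algorithm being studied (not the authors' custom hardware or software realization), the sketch below computes a minimum spanning tree with Prim's algorithm using a binary heap, on an invented example graph.

        # Reference sketch of Prim's algorithm on an undirected weighted graph
        # given as an adjacency dict {node: [(neighbor, weight), ...]}.
        import heapq

        def prim_mst(graph, start):
            visited = {start}
            edges = [(w, start, v) for v, w in graph[start]]
            heapq.heapify(edges)
            mst = []
            while edges and len(visited) < len(graph):
                w, u, v = heapq.heappop(edges)
                if v in visited:
                    continue
                visited.add(v)
                mst.append((u, v, w))
                for nxt, w2 in graph[v]:
                    if nxt not in visited:
                        heapq.heappush(edges, (w2, v, nxt))
            return mst

        if __name__ == "__main__":
            graph = {                                     # small example network
                "a": [("b", 2), ("c", 3)],
                "b": [("a", 2), ("c", 1), ("d", 4)],
                "c": [("a", 3), ("b", 1), ("d", 5)],
                "d": [("b", 4), ("c", 5)],
            }
            print(prim_mst(graph, "a"))   # [('a', 'b', 2), ('b', 'c', 1), ('b', 'd', 4)]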

  17. Report from the Trenches - Implementing Curriculum to Promote the Participation of Women in Computer Science

    Science.gov (United States)

    Jessup, Elizabeth; Sumner, Tamara; Barker, Lecia

    Many social scientists conduct research on increasing the participation of women in computing, yet it is often computer scientists who must find ways of implementing those findings into concrete actions. Technology for Community is an undergraduate computer science course taught at the University of Colorado at Boulder in which students work with local community service agencies building computational solutions to problems confronting those agencies. Although few Computer Science majors are female, this course has consistently attracted a very large proportion of female students. Technology for Community enrollment patterns and course curriculum are compared with other computer science courses over a 3-year period. All courses that satisfy public markers of design-based learning are seen to have higher than average female enrollment. Design-based learning integrates four practices believed to increase participation of women -- authentic learning context, collaborative assessment, knowledge sharing among students, and the humanizing of technology. Of all the courses marked as including design-based learning, however, the Technology for Community course is drawing the most significant numbers of women from outside of the College of Engineering and Applied Science. We attribute that success to the inclusion in the course of curriculum reflecting design-based learning and recruiting partnerships with programs outside of the College of Engineering.

  18. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    Directory of Open Access Journals (Sweden)

    Ronnie Cheung

    2011-06-01

    Full Text Available We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer literacy projects. The completed assignments, projects, and self-reflection reports demonstrate that the students were able to achieve the learning outcomes of a computer literacy course in multimedia development. The students were able to assess the effectiveness of a variety of media through the development of media presentations in a web-based, social-networking environment. In the collaborative and social-networking environment, students were able to collaborate and communicate with their team members to solve problems, resolve conflicts, make decisions, and work as a team to complete tasks. Our experience has shown that social networking environments are effective for computer literacy education, and the development of the new media is emerging as the core knowledge for computer literacy education.

  19. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because they assume a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented on the GPU (graphics processing unit).
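
    The CUDA kernels themselves are not reproduced in the record. Purely to illustrate the data decomposition being parallelized, the sketch below splits the rows of a sparse matrix into blocks and computes y = Ax block by block; in the actual setting each block would be handled by separate GPU threads or nodes chosen by the hypergraph partitioner.

        # Illustration of row-wise partitioning of a sparse matrix-vector product
        # y = A @ x: each row block can be computed independently (here sequentially,
        # in the real setting by separate GPU thread blocks or processes).
        import numpy as np
        from scipy.sparse import random as sparse_random

        def partitioned_spmv(A_csr, x, n_parts):
            n_rows = A_csr.shape[0]
            bounds = np.linspace(0, n_rows, n_parts + 1, dtype=int)
            y = np.empty(n_rows)
            for lo, hi in zip(bounds[:-1], bounds[1:]):
                y[lo:hi] = A_csr[lo:hi, :] @ x          # independent row block
            return y

        if __name__ == "__main__":
            A = sparse_random(1000, 800, density=0.01, format="csr", random_state=0)
            rng = np.random.default_rng(0)
            x = rng.standard_normal(800)
            y = partitioned_spmv(A, x, n_parts=4)
            print(np.allclose(y, A @ x))                 # matches the unpartitioned product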

  20. An efficient hysteresis modeling methodology and its implementation in field computation applications

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)

    2017-07-15

    Highlights:
    • An approach to simulate hysteresis while taking shape anisotropy into consideration.
    • Utilizing the ensemble of triangular sub-region hysteresis models in field computation.
    • A novel tool capable of carrying out field computation while keeping track of hysteresis losses.
    • The approach may be extended to 3D tetrahedral sub-volumes.
    Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNN) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach on discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.

  1. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.

    Science.gov (United States)

    Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka

    2015-01-01

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible.

  2. Computer implemented empirical mode decomposition method apparatus, and article of manufacture utilizing curvature extrema

    Science.gov (United States)

    Huang, Norden Eh (Inventor); Shen, Zheng (Inventor)

    2003-01-01

    A computer implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.

  3. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    Science.gov (United States)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.

  4. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.

    Directory of Open Access Journals (Sweden)

    Iñaki Bildosola

    Full Text Available Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible.

  5. COMPUTER EVALUATION OF SKILLS FORMATION QUALITY IN THE IMPLEMENTATION OF COMPETENCE-BASED APPROACH TO LEARNING

    Directory of Open Access Journals (Sweden)

    Vitalia A. Zhuravleva

    2014-01-01

    Full Text Available The article deals with the problem of effectively organizing the formation of skills as an important part of the competence-based approach in education, implemented via educational standards of the new generation. The proposed solution is to use computer tools to assess the quality of skills formation, based on a model of the problem. This paper proposes an approach to creating such an assessment model for the level of skills formation in knowledge management systems, based on mathematical modeling methods. Attention is paid to the evaluation strategy and the technology of assessment, which is based on the rules of fuzzy mathematics. An algorithmic implementation of the proposed model for evaluating the quality of skills development is shown as well.

  6. A C++11 implementation of arbitrary-rank tensors for high-performance computing

    Science.gov (United States)

    Aragón, Alejandro M.

    2014-11-01

    This article discusses an efficient implementation of tensors of arbitrary rank by using some of the idioms introduced by the recently published C++ ISO Standard (C++11). With the aims at providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from which vectors, matrices, and even higher-order tensors can be created. An expression template facility is also built around the array class template to provide convenient mathematical syntax. As a result, by using templates, an extra high-level layer is added to the C++ language when dealing with algebraic objects and their operations, without compromising performance. The implementation is tested running on both CPU and GPU.

  7. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    Science.gov (United States)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  8. Prolog-based system for nursing staff scheduling implemented on a personal computer.

    Science.gov (United States)

    Okada, M; Okada, M

    1988-02-01

    An approach to the problem of nursing staff scheduling in a hospital is presented. For scheduling nurses, a variety of requirements with varied levels of significance has to be taken into account simultaneously. Because of the nature of the problem, where it is difficult to define what is the optimal solution in a strict sense, we aimed at automating scheduling by following the manual method in a faithful manner. A system for nurse scheduling has been implemented on a personal computer using Prolog. It determines favorable shift assignments on a day-to-day basis, referring to the information accumulated in the data base. In Prolog, various requirements can be expressed with relative ease, and the process of the manual method can be incorporated into the system in a natural way. The computer simulation has been conducted to test the system performance, and the obtained results demonstrated the validity of the approach.

  9. Delay-based reservoir computing: noise effects in a combined analog and digital implementation.

    Science.gov (United States)

    Soriano, Miguel C; Ortín, Silvia; Keuninckx, Lars; Appeltant, Lennert; Danckaert, Jan; Pesquera, Luis; van der Sande, Guy

    2015-02-01

    Reservoir computing is a paradigm in machine learning whose processing capabilities rely on the dynamical behavior of recurrent neural networks. We present a mixed analog and digital implementation of this concept with a nonlinear analog electronic circuit as a main computational unit. In our approach, the reservoir network can be replaced by a single nonlinear element with delay via time-multiplexing. We analyze the influence of noise on the performance of the system for two benchmark tasks: 1) a classification problem and 2) a chaotic time-series prediction task. Special attention is given to the role of quantization noise, which is studied by varying the resolution in the conversion interface between the analog and digital worlds.
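    As a rough, self-contained companion to the record above, the sketch below time-multiplexes a single nonlinear node through a masked feedback loop, quantizes the virtual-node states to emulate a finite-resolution analog-to-digital interface, and trains a ridge-regression readout on a toy one-step-ahead prediction task. The nonlinearity, mask, task, and all parameter values are illustrative assumptions rather than the authors' hardware settings, and the feedback here is a simplified block update from one delay period earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (not the authors' hardware values) ---
N_virtual = 50          # virtual nodes per delay interval
eta, gamma = 0.5, 0.05  # input scaling and feedback strength
bits = 8                # emulated ADC resolution (quantization noise source)

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniform quantization emulating the analog-to-digital interface."""
    levels = 2 ** bits - 1
    xq = np.clip(x, lo, hi)
    return np.round((xq - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

# input signal and one-step-ahead prediction target (toy task)
T = 2000
u = np.sin(0.1 * np.arange(T)) + 0.05 * rng.standard_normal(T)
target = np.roll(u, -1)

mask = rng.uniform(-1, 1, N_virtual)   # time-multiplexing mask
states = np.zeros((T, N_virtual))
x_prev = np.zeros(N_virtual)           # states one delay period earlier
for t in range(T):
    x = np.tanh(eta * mask * u[t] + gamma * x_prev)  # single nonlinear node
    x = quantize(x, bits)                            # finite resolution
    states[t] = x
    x_prev = x

# ridge-regression readout: train on the first half, test on the second
split = T // 2
X_tr, X_te = states[:split], states[split:-1]
y_tr, y_te = target[:split], target[split:-1]
w = np.linalg.solve(X_tr.T @ X_tr + 1e-6 * np.eye(N_virtual), X_tr.T @ y_tr)
nmse = np.mean((X_te @ w - y_te) ** 2) / np.var(y_te)
print(f"test NMSE with {bits}-bit quantization: {nmse:.4f}")
```

Lowering `bits` in this toy setup mimics a coarser conversion interface and degrades the readout accuracy, which is the qualitative effect the article studies.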

  10. A Computationally Efficient and Robust Implementation of the Continuous-Discrete Extended Kalman Filter

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Thomsen, Per Grove; Madsen, Henrik

    2007-01-01

    We present a novel numerically robust and computationally efficient extended Kalman filter for state estimation in nonlinear continuous-discrete stochastic systems. The resulting differential equations for the mean-covariance evolution of the nonlinear stochastic continuous-discrete time systems are solved efficiently using an ESDIRK integrator with sensitivity analysis capabilities. This ESDIRK integrator for the mean-covariance evolution is implemented as part of an extended Kalman filter and tested on a PDE system. For moderate to large sized systems, the ESDIRK based extended Kalman filter...
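    The sketch below keeps only the filter structure described above: the mean and covariance ODEs are propagated between measurements and a standard discrete-time update is applied at each sample. A plain fixed-step RK4 integrator stands in for the paper's ESDIRK method, and the scalar logistic model, noise levels, and time grid are invented for illustration.

```python
import numpy as np

# Illustrative continuous-discrete EKF for a scalar system
#   dx = a*x*(1 - x) dt + sqrt(q) dW,   y_k = x(t_k) + v_k,  v_k ~ N(0, r).
# The mean/covariance ODEs are integrated with simple RK4 steps in place of
# the paper's ESDIRK integrator; the model and all numbers are invented.

a, q, r = 1.0, 0.01, 0.04
f  = lambda x: a * x * (1.0 - x)          # drift
fx = lambda x: a * (1.0 - 2.0 * x)        # its Jacobian

def rk4(g, y, h):
    k1 = g(y); k2 = g(y + 0.5 * h * k1)
    k3 = g(y + 0.5 * h * k2); k4 = g(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def predict(m, P, dt, n_steps=20):
    """Integrate dm/dt = f(m) and dP/dt = 2*fx(m)*P + q between samples."""
    h = dt / n_steps
    for _ in range(n_steps):
        A = fx(m)                                   # Jacobian at current mean
        P = rk4(lambda P_: 2.0 * A * P_ + q, P, h)
        m = rk4(f, m, h)
    return m, P

def update(m, P, y):
    """Standard discrete-time EKF measurement update (H = 1)."""
    S = P + r
    K = P / S
    return m + K * (y - m), (1.0 - K) * P

# run the filter against a noise-free truth trajectory (toy check)
rng = np.random.default_rng(1)
dt, T = 0.5, 20
x_true, m, P = 0.1, 0.5, 1.0
for k in range(T):
    x_true, _ = predict(x_true, 0.0, dt)            # propagate the truth
    y = x_true + np.sqrt(r) * rng.standard_normal() # noisy measurement
    m, P = predict(m, P, dt)
    m, P = update(m, P, y)
print(f"final estimate {m:.3f} vs truth {x_true:.3f} (P = {P:.4f})")
```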

  11. A few modeling and rendering techniques for computer graphics and their implementation on ultra hardware

    Science.gov (United States)

    Bidasaria, Hari

    1989-01-01

    The Ultra Network is a recently installed, very-high-speed graphics system at NASA Langley Research Center. Interfaced to Voyager through its HSX channel, it is capable of transmitting up to 800 million bits of information per second. It can display fifteen to twenty frames per second of precomputed images of size 1024 x 2368 with 24 bits of color information per pixel. Modeling and rendering techniques are being developed in computer graphics and implemented on the Ultra hardware. A ray tracer is being developed for use at the Flight Software and Graphic branch. Changes were made to make the ray tracer compatible with Voyager.

  12. Sensory system for implementing a human-computer interface based on electrooculography.

    Science.gov (United States)

    Barea, Rafael; Boquete, Luciano; Rodriguez-Ascariz, Jose Manuel; Ortega, Sergio; López, Elena

    2011-01-01

    This paper describes a sensory system for implementing a human-computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes.

  13. EXPERIMENTAL AND THEORETICAL FOUNDATIONS AND PRACTICAL IMPLEMENTATION OF TECHNOLOGY BRAIN-COMPUTER INTERFACE

    Directory of Open Access Journals (Sweden)

    A. Ya. Kaplan

    2013-01-01

    Full Text Available Brain-computer interface (BCI) technology allows a person to learn to control external devices through the voluntary regulation of his or her own EEG, taken directly from the brain without involving nerves and muscles in the process. Initially, the main goal of BCI was to replace or restore motor function in people disabled by neuromuscular disorders. The scope of BCI design has since expanded considerably, capturing more and more aspects of the life of a healthy person. This article discusses the theoretical, experimental, and technological basis of BCI development and systematizes the critical fields for real-world implementation of these technologies.

  14. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models at various scales. These correlated multi-scale structural system tasks are distributed among the clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework for supporting this multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to realize the proposed concept. The simulation results show that the software framework improves the speedup of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing simulations of multi-scale structural analysis.

  15. TOWARDS IMPLEMENTATION OF THE FOG COMPUTING CONCEPT INTO THE GEOSPATIAL DATA INFRASTRUCTURES

    Directory of Open Access Journals (Sweden)

    E. A. Panidi

    2016-01-01

    Full Text Available Information technologies, and Global Network technologies in particular, are developing very quickly. Accordingly, the problem of incorporating these general-purpose technologies into information systems that operate with geospatial data remains relevant. The paper discusses the implementation feasibility of a number of new approaches and concepts that address the publication and management of spatial data on the Global Network. A brief review describes some contemporary concepts and technologies used for distributed data storage and management, which provide combined use of server-side and client-side resources. In particular, the concepts of Cloud Computing, Fog Computing, and the Internet of Things are mentioned, together with the Java Web Start, WebRTC, and WebTorrent technologies. The author's experience is briefly described, covering a number of projects devoted to the development of portable solutions for publishing geospatial data and GIS software on the Global Network.

  16. Framework and implementation for improving physics essential skills via computer-based practice: Vector math

    Science.gov (United States)

    Mikula, Brendon D.; Heckler, Andrew F.

    2017-06-01

    We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with a careful identification of target skills and the study of specific student difficulties with these skills. It then employs computer-based instruction, immediate feedback, mastery grading, and well-researched principles from cognitive psychology such as interleaved training sequences and distributed practice. We implemented this with more than 1500 students over 2 semesters. Students completed the mastery practice for an average of about 13 min/week, for a total of about 2-3 h for the whole semester. Results reveal large (>1 SD) pretest to post-test gains in accuracy in vector skills, even compared to a control group, and these gains were retained at least 2 months after practice. We also find evidence of improved fluency, student satisfaction, and that awarding regular course credit results in higher participation and higher learning gains than awarding extra credit. In all, we find that simple computer-based mastery practice is an effective and efficient way to improve a set of basic and essential skills for introductory physics.

  17. Implementation of the Deutsch-Jozsa algorithm on an ion-trap quantum computer

    Science.gov (United States)

    Gulde, Stephan; Riebe, Mark; Lancaster, Gavin P. T.; Becher, Christoph; Eschner, Jürgen; Häffner, Hartmut; Schmidt-Kaler, Ferdinand; Chuang, Isaac L.; Blatt, Rainer

    2003-01-01

    Determining classically whether a coin is fair (head on one side, tail on the other) or fake (heads or tails on both sides) requires an examination of each side. However, the analogous quantum procedure (the Deutsch-Jozsa algorithm) requires just one examination step. The Deutsch-Jozsa algorithm has been realized experimentally using bulk nuclear magnetic resonance techniques, employing nuclear spins as quantum bits (qubits). In contrast, the ion trap processor utilises motional and electronic quantum states of individual atoms as qubits, and in principle is easier to scale to many qubits. Experimental advances in the latter area include the realization of a two-qubit quantum gate, the entanglement of four ions, quantum state engineering and entanglement-enhanced phase estimation. Here we exploit techniques developed for nuclear magnetic resonance to implement the Deutsch-Jozsa algorithm on an ion-trap quantum processor, using as qubits the electronic and motional states of a single calcium ion. Our ion-based implementation of a full quantum algorithm serves to demonstrate experimental procedures with the quality and precision required for complex computations, confirming the potential of trapped ions for quantum computation.
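    As a purely numerical illustration of the algorithm realized in the experiment (the abstract circuit, not the ion-trap pulse sequence), the sketch below simulates the single-bit Deutsch-Jozsa problem on a two-qubit state vector: one oracle query suffices to tell a constant function from a balanced one. The gate conventions are standard textbook ones.

```python
import numpy as np

# Numerical illustration of the single-bit Deutsch-Jozsa algorithm: one
# oracle query decides whether f: {0,1} -> {0,1} is constant or balanced.
# This simulates the abstract two-qubit circuit, not the ion-trap pulses.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def oracle(f):
    """Unitary U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

def deutsch_jozsa(f):
    state = np.kron([1, 0], [0, 1]).astype(float)   # |0>|1>
    state = np.kron(H, H) @ state                    # create superposition
    state = oracle(f) @ state                        # single oracle query
    state = np.kron(H, I) @ state                    # interfere
    p_first_qubit_one = np.sum(np.abs(state[2:]) ** 2)
    return "balanced" if p_first_qubit_one > 0.5 else "constant"

functions = {
    "f(x)=0": lambda x: 0,
    "f(x)=1": lambda x: 1,
    "f(x)=x": lambda x: x,
    "f(x)=1-x": lambda x: 1 - x,
}
for name, f in functions.items():
    print(name, "->", deutsch_jozsa(f))
```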

  18. Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw

    Science.gov (United States)

    Angelakis, D.; Zoumis, S.; Asvestas, P.

    2017-11-01

    The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides the capability of simultaneously recording 14 EEG channels, as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application implemented under the Processing integrated development environment. The application acquires the data from the headset and issues suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals to a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.

  19. Computational implementation of a tunable multicellular memory circuit for engineered eukaryotic consortia

    Directory of Open Access Journals (Sweden)

    Josep Sardanyés

    2015-10-01

    Full Text Available Cells are complex machines capable of processing information by means of an entangled network of molecular interactions. A crucial component of these decision-making systems is the presence of memory, and this is also an especially relevant target for engineered synthetic systems. A classic example of memory devices is a 1-bit memory element known as the flip-flop. Such a system can in principle be designed using a single-cell implementation, but a direct mapping between standard circuit design and a living circuit can be cumbersome. Here we present a novel computational implementation of a 1-bit memory device using a reliable multicellular design able to behave as a set-reset flip-flop that could be implemented in yeast cells. The dynamics of the proposed synthetic circuit is investigated with a mathematical model using biologically meaningful parameters. The circuit is shown to behave as a flip-flop in a wide range of parameter values. The repression strength for the NOT logics is shown to be crucial to obtain a good flip-flop signal. Our model also shows that the circuit can be externally tuned to achieve different memory states and dynamics, such as persistent and transient memory. We have characterised the parameter domains for robust memory storage and retrieval as well as the corresponding time response dynamics.

  20. Introductory Molecular Orbital Theory: An Honors General Chemistry Computational Lab as Implemented Using Three-Dimensional Modeling Software

    Science.gov (United States)

    Ruddick, Kristie R.; Parrill, Abby L.; Petersen, Richard L.

    2012-01-01

    In this study, a computational molecular orbital theory experiment was implemented in a first-semester honors general chemistry course. Students used the GAMESS (General Atomic and Molecular Electronic Structure System) quantum mechanical software (as implemented in ChemBio3D) to optimize the geometry for various small molecules. Extended Huckel…

  1. INTEGRATION OF ECONOMIC AND COMPUTER SKILLS AT IMPLEMENTATION OF STUDENTS PROJECT «BUSINESS PLAN PRODUCING IN MICROSOFT WORD»

    Directory of Open Access Journals (Sweden)

    Y.B. Samchinska

    2012-07-01

    Full Text Available The article substantiates the expedience of having students of economic specialities carry out a complex project in Informatics and Computer Science based on the creation of a business plan using modern information technologies, and presents methodical recommendations for the implementation of this project.

  2. A META-MODELLING SERVICE PARADIGM FOR CLOUD COMPUTING AND ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    F. Cheng

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT:Service integrators seek opportunities to align the way they manage resources in the service supply chain. Many business organisations can operate new, more flexible business processes that harness the value of a service approach from the customer’s perspective. As a relatively new concept, cloud computing and related technologies have rapidly gained momentum in the IT world. This article seeks to shed light on service supply chain issues associated with cloud computing by examining several interrelated questions: service supply chain architecture from a service perspective; the basic clouds of service supply chain; managerial insights into these clouds; and the commercial value of implementing cloud computing. In particular, to show how those services can be used, and involved in their utilisation processes, a hypothetical meta-modelling service of cloud computing is proposed. Moreover, the paper defines the managed cloud architecture for a service vendor or service integrator in the cloud computing infrastructure in the service supply chain: IT services, business services, business processes, which create atomic and composite software services that are used to perform business processes with business service choreographies.

    AFRIKAANSE OPSOMMING (translated): Service integrators are looking for opportunities to align the management of resources in the service supply chain. Many organisations can use new, more flexible business processes that harness the value of a service approach from the customer's point of view. As a relatively new concept, cloud computing and related technologies have quickly gained momentum in the IT world. The article attempts to shed light on service supply chain issues related to cloud computing by examining several related questions: service supply chain architecture from a service point of view; the basic clouds of the service supply chain; management insights into such clouds; and the commercial value of implementing

  3. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Perry, J; McCoy, M; Hopson, J

    2008-04-30

    that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  4. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M; Kusnezov, D; Bikkel, T; Hopson, J

    2007-04-25

    that was very successful in delivering an initial capability to one that is integrated and focused on requirements driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  5. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Kissel, L

    2009-04-01

    was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: (1) Robust Tools - Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements; (2) Prediction through Simulation - Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile; and (3) Balanced Operational Infrastructure - Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  6. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    from one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1 - Robust Tools. Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2 - Prediction through Simulation. Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3 - Balanced Operational Infrastructure. Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  7. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    from one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: (1) Robust Tools - Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements; (2) Prediction through Simulation - Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile; and (3) Balanced Operational Infrastructure - Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  8. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    one that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear-weapons performances in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  9. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    Carnes, B

    2009-06-08

    was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1 Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2 Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3 Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  10. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    that was very successful in delivering an initial capability to one that is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools. ASC must continue to meet three objectives: Objective 1. Robust Tools--Develop robust models, codes, and computational techniques to support stockpile needs such as refurbishments, SFIs, LEPs, annual assessments, and evolving future requirements. Objective 2. Prediction through Simulation--Deliver validated physics and engineering tools to enable simulations of nuclear weapons performance in a variety of operational environments and physical regimes and to enable risk-informed decisions about the performance, safety, and reliability of the stockpile. Objective 3. Balanced Operational Infrastructure--Implement a balanced computing platform acquisition strategy and operational infrastructure to meet Directed Stockpile Work (DSW) and SSP needs for capacity and high-end simulation capabilities.

  11. Describing different brain computer interface systems through a unique model: a UML implementation.

    Science.gov (United States)

    Quitadamo, Lucia Rita; Marciani, Maria Grazia; Cardarilli, Gian Carlo; Bianchi, Luigi

    2008-01-01

    All the protocols currently implemented in brain computer interface (BCI) experiments are characterized by different structural and temporal entities. Moreover, due to the lack of a unique descriptive model for BCI systems, there is no standard way to define the structure and the timing of a BCI experimental session among different research groups, and there is also great discordance on the meaning of the most common terms dealing with BCI, such as trial, run and session. The aim of this paper is to provide a unified modeling language (UML) implementation of BCI systems through a unique dynamic model which is able to describe the main protocols defined in the literature (P300, mu-rhythms, SCP, SSVEP, fMRI) and which proves to be reasonable and adjustable according to different requirements. This model includes a set of definitions of the typical entities encountered in a BCI, diagrams which explain the structural correlations among them, and a detailed description of the timing of a trial. The last represents an innovation with respect to the models already proposed in the literature. The UML documentation, and the possibility of adapting this model to the different BCI systems built to date, make it a basis for the implementation of new systems and a means for the unification and dissemination of resources. The model with all the diagrams and definitions reported in the paper forms the core of the body language framework, a free set of routines and tools for the implementation, optimization and delivery of cross-platform BCI systems.

  12. An implementation of a tree code on a SIMD, parallel computer

    Science.gov (United States)

    Olson, Kevin M.; Dorband, John E.

    1994-01-01

    We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) computers can be used for these simulations. The cost/performance ratio for SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.

  13. Implementation of a Hybrid Controller for Ventilation Control Using Soft Computing

    Energy Technology Data Exchange (ETDEWEB)

    Craig G. Rieger; D. Subbaram Naidu

    2005-06-01

    Many industrial facilities utilize pressure control gradients to prevent the migration of hazardous species from containment areas to occupied zones, often using Proportional-Integral-Derivative (PID) control systems. When operators rebalance the facility, variation from the desired gradients can occur, and the operating conditions can change enough that the PID parameters are no longer adequate to maintain a stable system. As the goal of the ventilation control system is to optimize the pressure gradients and associated flows for the facility, Linear Quadratic Tracking (LQT) is a method that provides a time-based approach to guiding facility interactions. However, LQT methods are susceptible to modeling and measurement errors, and therefore the additional use of Soft Computing methods is proposed to account for these errors and nonlinearities.

  14. An implementation of the partitioned Levenberg-Marquardt algorithm for applications in computer vision

    Directory of Open Access Journals (Sweden)

    Tiago Polizer da Silva

    2009-03-01

    Full Text Available In several computer vision applications it is necessary to estimate the parameters of a specific model that best fit an experimental data set. In these cases a minimization algorithm may be used, and one of the most popular is the Levenberg-Marquardt algorithm. Although several free implementations of this algorithm are available, none of them offers particular support for problems in which the Jacobian matrix is sparse, a case in which a large reduction of the algorithm's complexity is possible. This work presents a Levenberg-Marquardt implementation for problems with a sparse Jacobian matrix. To illustrate the application of the algorithm, camera calibration with a 1D pattern is solved. Empirical results show that the method converges satisfactorily within a few iterations, even in the presence of noise.
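    For reference, the sketch below is a plain, dense Levenberg-Marquardt iteration on a small curve-fitting problem; it shows only the damping and step-acceptance logic and does not exploit the sparse, partitioned Jacobian structure that is the point of the article. The model, data, and damping schedule are illustrative choices.

```python
import numpy as np

# Plain (dense) Levenberg-Marquardt iteration on a toy exponential fit
#   y = p0 * exp(p1 * x) + noise.
# It illustrates the damping/update logic only; it does NOT exploit the
# sparse, partitioned Jacobian structure targeted by the article.

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * x) + 0.02 * rng.standard_normal(x.size)

def residuals(p):
    return p[0] * np.exp(p[1] * x) - y

def jacobian(p):
    e = np.exp(p[1] * x)
    return np.column_stack([e, p[0] * x * e])

p, lam = np.array([1.0, 0.0]), 1e-3
cost = np.sum(residuals(p) ** 2)
for _ in range(100):
    r, J = residuals(p), jacobian(p)
    A, g = J.T @ J, J.T @ r
    step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
    p_new = p + step
    cost_new = np.sum(residuals(p_new) ** 2)
    if cost_new < cost:              # accept the step, relax damping
        p, cost, lam = p_new, cost_new, lam * 0.5
    else:                            # reject the step, increase damping
        lam *= 10.0
    if np.linalg.norm(step) < 1e-10:
        break

print("estimated parameters:", p)   # close to [2.0, -1.5]
```

In the sparse case the article addresses, the normal-equation matrix A decomposes into small independent blocks (e.g., per camera pose or per calibration point), so the linear solve above can be partitioned instead of performed densely.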

  15. Implementation of a Curriculum-Integrated Computer Game for Introducing Scientific Argumentation

    Science.gov (United States)

    Wallon, Robert C.; Jasti, Chandana; Lauren, Hillary Z. G.; Hug, Barbara

    2017-11-01

    Argumentation has been emphasized in recent US science education reform efforts (NGSS Lead States 2013; NRC 2012), and while existing studies have investigated approaches to introducing and supporting argumentation (e.g., McNeill and Krajcik in Journal of Research in Science Teaching, 45(1), 53-78, 2008; Kang et al. in Science Education, 98(4), 674-704, 2014), few studies have investigated how game-based approaches may be used to introduce argumentation to students. In this paper, we report findings from a design-based study of a teacher's use of a computer game intended to introduce the claim, evidence, reasoning (CER) framework (McNeill and Krajcik 2012) for scientific argumentation. We studied the implementation of the game over two iterations of development in a high school biology teacher's classes. The results of this study include aspects of enactment of the activities and student argument scores. We found the teacher used the game in aspects of explicit instruction of argumentation during both iterations, although the ways in which the game was used differed. Also, students' scores in the second iteration were significantly higher than the first iteration. These findings support the notion that students can learn argumentation through a game, especially when used in conjunction with explicit instruction and support in student materials. These findings also highlight the importance of analyzing classroom implementation in studies of game-based learning.

  16. THE CONCEPT OF THE EDUCATIONAL COMPUTER MATHEMATICS SYSTEM AND EXAMPLES OF ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    M. Lvov

    2014-11-01

    Full Text Available The article deals with the educational computer mathematics system developed at Kherson State University, which has resulted in more than 8 software tools produced under orders of the Ministry of Education, Science, Youth and Sports of Ukraine. The exact and natural sciences stand out among the disciplines taught both in secondary schools and universities: they form fundamental scientific knowledge based on precise mathematical models and methods. The educational process for these courses should include not only lectures and seminars, but active forms of study as well: practical classes, laboratory work, practical training, etc. The enumerated peculiarities determine the specific intellectual and architectural properties of information technologies developed for use in the educational process of these disciplines. In terms of the technologies used to implement their functionality, they are in fact educational computer algebra systems. For their development, the algebraic programming system APS, created at the Institute of Cybernetics of the National Academy of Sciences of Ukraine under the leadership of Academician O.A. Letychevskyi in the 1980s, is especially important.

  17. On the formulation and computer implementation of an age-dependent two-sex demographic model.

    Science.gov (United States)

    Mode, C J; Salsburg, M A

    1993-12-01

    A two-sex age-dependent demographic model is formulated within the framework of a stochastic population process, including both time-homogeneous and time-inhomogeneous laws of evolution. An outline of the parametric components of the system, which expedite computer implementation and experimentation, is also given. New features of the model include a component for couple formation, using the class of Farlie-Morgenstern bivariate distributions to accommodate age preferences in selecting marriage partners, a component for couple dissolution due to separation or divorce, and an outline of techniques for initializing a two-sex projection given scanty information. For the case of time-homogeneous laws of evolution, stability properties of two-sex models that are analogs of those for one-sex models are difficult to prove mathematically due to nonlinearities. But computer experiments in this case suggest that these properties continue to hold for two-sex models for such widely used demographic indicators as period crude birth rates, period rates of natural increase, and period age distributions, which converge to constant forms in long-term projections. The values of the stable crude birth rate, rate of natural increase, and quantiles of the stable age distribution differ markedly among projections that differ only in selected values of parameters governing couple formation and dissolution. Such experimental results demonstrate that two-sex models are not merely intellectual curiosities but exist in their own right and lead to insights not attainable in simpler one-sex formulations.
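    The record above names the Farlie-Gumbel-Morgenstern (FGM) family as the device for encoding age preferences in couple formation; as a hedged illustration of that single component, the sketch below samples correlated (male age, female age) pairs from an FGM copula by conditional inversion. The marginal age distributions, the age ranges, and the dependence parameter are invented, and nothing else of the two-sex projection machinery is reproduced.

```python
import numpy as np

# Sampling correlated (male age, female age) pairs from a Farlie-Gumbel-
# Morgenstern copula, C(u, v) = u*v*(1 + theta*(1 - u)*(1 - v)), via
# conditional inversion.  theta, the age ranges, and the uniform marginals
# are illustrative assumptions only, not the article's parameters.

rng = np.random.default_rng(42)
theta = 0.8                        # FGM dependence parameter, |theta| <= 1

def fgm_pairs(n):
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = theta * (1.0 - 2.0 * u)    # coefficient of the conditional quadratic
    v = w.copy()                   # a == 0 reduces to independence
    nz = np.abs(a) > 1e-12
    an = a[nz]
    # root of a*v^2 - (1 + a)*v + w = 0 that lies in [0, 1]
    v[nz] = ((1.0 + an) - np.sqrt((1.0 + an) ** 2 - 4.0 * an * w[nz])) / (2.0 * an)
    return u, v

u, v = fgm_pairs(10_000)
male_age = 20.0 + 30.0 * u         # hypothetical uniform marginal, ages 20-50
female_age = 18.0 + 30.0 * v       # hypothetical uniform marginal, ages 18-48

# FGM dependence is weak by construction (|correlation| <= 1/3 for uniform
# marginals), which suits mild age-preference effects.
print("sample age correlation:", np.corrcoef(male_age, female_age)[0, 1])
```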

  18. Developing a computer delivered, theory based intervention for guideline implementation in general practice

    Directory of Open Access Journals (Sweden)

    Ashworth Mark

    2010-11-01

    Full Text Available Abstract Background Non-adherence to clinical guidelines has been identified as a consistent finding in general practice. The purpose of this study was to develop theory-informed, computer-delivered interventions to promote the implementation of guidelines in general practice. Specifically, our aim was to develop computer-delivered prompts to promote guideline adherence for antibiotic prescribing in respiratory tract infections (RTIs), and adherence to recommendations for secondary stroke prevention. Methods A qualitative design was used involving 33 face-to-face interviews with general practitioners (GPs). The prompts used in the interventions were initially developed using aspects of social cognitive theory, drawing on nationally recommended standards for clinical content. The prompts were then presented to GPs during interviews, and iteratively modified and refined based on interview feedback. Inductive thematic analysis was employed to identify responses to the prompts and factors involved in the decision to use them. Results GPs reported being more likely to use the prompts if they were perceived as offering support and choice, but less likely to use them if they were perceived as being a method of enforcement. Attitudes towards using the prompts were also related to anticipated patient outcomes, individual prescriber differences, accessibility and presentation of prompts and acceptability of guidelines. Comments on the prompts were largely positive after modifying them based on participant feedback. Conclusions Acceptability and satisfaction with computer-delivered prompts to follow guidelines may be increased by working with practitioners to ensure that the prompts will be perceived as valuable tools that can support GPs' practice.

  19. Theoretical studies for experimental implementation of quantum computing with trapped ions

    Science.gov (United States)

    Yoshimura, Bryce T.

    Certain quantum many-body physics problems, such as the transverse field Ising model, are intractable on a classical computer, meaning that as the number of particles, or spins, grows, the amount of memory and computational time required to solve the problem exactly increases faster than polynomially. However, quantum simulators are being developed to efficiently solve quantum problems that are intractable via conventional computing. Some of the most successful quantum simulators are based on ion traps. Their success depends on the ability to achieve long coherence times, precise spin control, and high fidelity in state preparation. In this work, I present calculations that characterize the oblate Paul trap, which creates two-dimensional Coulomb crystals in a triangular lattice, and its phonon modes. We also calculate the spin-spin Ising-like interaction that can be generated in the oblate Paul trap using the same techniques as in the linear radiofrequency Paul trap. In addition, I discuss two possible challenges that arise in the Penning trap: the effects of defects (namely, when Be+ → BeH+) and the creation of a more uniform spin-spin Ising-like interaction. We show that most properties are not significantly influenced by the appearance of defects, and that by adding two potentials to the Penning trap a more uniform spin-spin Ising-like interaction can be achieved. Next, I discuss techniques for preparing the ground state of the Ising-like Hamiltonian. In particular, we explore the use of the bang-bang protocol to prepare the ground state and compare optimized results to conventional adiabatic ramps (the exponential and the locally adiabatic ramp). The bang-bang optimization in general outperforms the exponential; however, the locally adiabatic ramp is consistently somewhat better. Compared to the locally adiabatic ramp, the bang-bang optimization is simpler to implement, and it has the advantage of providing a simple procedure for estimating the

  20. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses summed absolute difference error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation, we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized nonfull grid search CPU-based motion estimation methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and simplified unsymmetrical multi-hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.

  1. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for a non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimation methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.
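    To make the two records above concrete, the sketch below is a plain CPU/NumPy reference of full-search block matching with a summed-absolute-difference criterion on an integer grid; the block size, search radius, and synthetic frames are arbitrary choices, and none of the CUDA/multi-GPU parallelization discussed in the articles is reproduced here.

```python
import numpy as np

# Reference (CPU/NumPy) full-search block matching with a summed absolute
# difference (SAD) criterion on an integer search grid.  Block size, search
# radius, and the synthetic frames are illustrative; the CUDA/multi-GPU
# parallelization of the articles is not reproduced.

def full_search_sad(ref, cur, y, x, block=16, radius=8):
    """Best integer displacement (dy, dx) such that the block at (y, x) in
    `cur` matches the block at (y + dy, x + dx) in `ref` with minimal SAD."""
    tpl = cur[y:y + block, x:x + block].astype(np.int32)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue
            cand = ref[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(tpl - cand).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx, best

# synthetic test: the current frame is the reference shifted by (3, -5)
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
cur = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(full_search_sad(ref, cur, y=48, x=48))   # expect displacement (-3, 5), SAD 0
```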

  2. Socio-Technical Implementation: Socio-technical Systems in the Context of Ubiquitous Computing, Ambient Intelligence, Embodied Virtuality, and the Internet of Things

    NARCIS (Netherlands)

    Nijholt, Antinus; Whitworth, B.; de Moor, A.

    2009-01-01

    In which computer science world do we design and implement our socio-technical systems? About every five or ten years new computer and interaction paradigms are introduced. We had the mainframe computers, the various generations of computers, including the Japanese fifth generation computers, the

  3. Development of tight-binding based GW algorithm and its computational implementation for graphene

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Muhammad Aziz [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore); Naradipa, Muhammad Avicenna, E-mail: muhammad.avicenna11@ui.ac.id; Phan, Wileam Yonatan; Syahroni, Ahmad [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); Rusydi, Andrivo [NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore)

    2016-04-19

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as its optical properties may change upon several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because eventually they play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We do this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is to be constructed through a tight-binding or similar model. This study includes the theoretical formulation of the Green's function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self-energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.
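    Only the tight-binding starting point mentioned above is illustrated here; the GW/RPA self-energy machinery is not. The snippet evaluates the standard nearest-neighbour two-band graphene dispersion E±(k) = ±t|1 + exp(ik·a1) + exp(ik·a2)| at the high-symmetry points and along a Γ-K-M-Γ path, with a common textbook hopping value rather than the paper's parameters.

```python
import numpy as np

# Nearest-neighbour tight-binding bands of graphene,
#   E(k) = +/- t * |1 + exp(i k.a1) + exp(i k.a2)|,
# evaluated along a Gamma-K-M-Gamma path.  t = 2.7 eV is a common textbook
# value, not the paper's parameter; the GW/RPA machinery is not reproduced.

t, a = 2.7, 1.0
a1 = a * np.array([1.5,  np.sqrt(3) / 2])
a2 = a * np.array([1.5, -np.sqrt(3) / 2])

Gamma = np.array([0.0, 0.0])
K     = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])
M     = np.array([2 * np.pi / (3 * a), 0.0])

def bands(k):
    f = 1 + np.exp(1j * (k @ a1)) + np.exp(1j * (k @ a2))
    return -t * abs(f), t * abs(f)

def path(p, q, n=100):
    return [p + (q - p) * s for s in np.linspace(0, 1, n)]

kpath = path(Gamma, K) + path(K, M) + path(M, Gamma)
energies = np.array([bands(k) for k in kpath])

print("E at Gamma:", bands(Gamma))   # +/- 3t
print("E at K:    ", bands(K))       # Dirac point, ~0
print("E at M:    ", bands(M))       # +/- t
print("total bandwidth along path: %.2f eV" % (energies.max() - energies.min()))
```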

  4. Time complexity analysis for distributed memory computers: implementation of parallel conjugate gradient method

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Haan, M.J.; Hertzberger, L.O.; van Leeuwen, J.

    1991-01-01

    New developments in Computer Science, both hardware and software, offer researchers, such as physicists, unprecedented possibilities to solve their computationally intensive problems. However, full exploitation of, e.g., new massively parallel computers, parallel languages or runtime environments

  5. Computational design of RNA parts, devices, and transcripts with kinetic folding algorithms implemented on multiprocessor clusters.

    Science.gov (United States)

    Thimmaiah, Tim; Voje, William E; Carothers, James M

    2015-01-01

    With progress toward inexpensive, large-scale DNA assembly, the demand for simulation tools that allow the rapid construction of synthetic biological devices with predictable behaviors continues to increase. By combining engineered transcript components, such as ribosome binding sites, transcriptional terminators, ligand-binding aptamers, catalytic ribozymes, and aptamer-controlled ribozymes (aptazymes), gene expression in bacteria can be fine-tuned, with many corollaries and applications in yeast and mammalian cells. The successful design of genetic constructs that implement these kinds of RNA-based control mechanisms requires modeling and analyzing kinetically determined co-transcriptional folding pathways. Transcript design methods using stochastic kinetic folding simulations to search spacer sequence libraries for motifs enabling the assembly of RNA component parts into static ribozyme- and dynamic aptazyme-regulated expression devices with quantitatively predictable functions (rREDs and aREDs, respectively) have been described (Carothers et al., Science 334:1716-1719, 2011). Here, we provide a detailed practical procedure for computational transcript design by illustrating a high throughput, multiprocessor approach for evaluating spacer sequences and generating functional rREDs. This chapter is written as a tutorial, complete with pseudo-code and step-by-step instructions for setting up a computational cluster with an Amazon, Inc. web server and performing the large numbers of kinefold-based stochastic kinetic co-transcriptional folding simulations needed to design functional rREDs and aREDs. The method described here should be broadly applicable for designing and analyzing a variety of synthetic RNA parts, devices and transcripts.

  6. Verification Benchmarks to Assess the Implementation of Computational Fluid Dynamics Based Hemolysis Prediction Models.

    Science.gov (United States)

    Hariharan, Prasanna; D'Souza, Gavin; Horner, Marc; Malinauskas, Richard A; Myers, Matthew R

    2015-09-01

    As part of an ongoing effort to develop verification and validation (V&V) standards for using computational fluid dynamics (CFD) in the evaluation of medical devices, we have developed idealized flow-based verification benchmarks to assess the implementation of commonly cited power-law based hemolysis models in CFD. The verification process ensures that all governing equations are solved correctly and the model is free of user and numerical errors. To perform verification for power-law based hemolysis modeling, analytical solutions for the Eulerian power-law blood damage model (which estimates hemolysis index (HI) as a function of shear stress and exposure time) were obtained for Couette and inclined Couette flow models, and for Newtonian and non-Newtonian pipe flow models. Subsequently, CFD simulations of fluid flow and HI were performed using Eulerian and three different Lagrangian-based hemolysis models and compared with the analytical solutions. For all the geometries, the blood damage results from the Eulerian-based CFD simulations matched the Eulerian analytical solutions within ∼1%, which indicates successful implementation of the Eulerian hemolysis model. Agreement between the Lagrangian and Eulerian models depended upon the choice of the hemolysis power-law constants. For the commonly used values of power-law constants (α = 1.9-2.42 and β = 0.65-0.80), in the absence of flow acceleration, most of the Lagrangian models matched the Eulerian results within 5%. In the presence of flow acceleration (inclined Couette flow), moderate differences (∼10%) were observed between the Lagrangian and Eulerian models. This difference increased to greater than 100% as the beta exponent decreased. These simplified flow problems can be used as standard benchmarks for verifying the implementation of blood damage predictive models in commercial and open-source CFD codes. The current study only used the power-law model as an illustrative example to emphasize the need
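    As a small numerical companion to the verification idea above, the sketch below evaluates the power-law hemolysis index HI = C σ^α t^β analytically for a steady Couette flow and checks it against a stepwise, Lagrangian-style accumulation in the "mechanical dose" form, which reduces to the closed form when the stress is constant. The exponents lie in the commonly cited ranges quoted in the abstract and C is a commonly used literature value; the fluid properties, gap geometry, and exposure time are illustrative.

```python
import numpy as np

# Couette-flow check of the power-law hemolysis model HI = C * sigma^alpha * t^beta.
# The analytical value for constant shear stress is compared with a stepwise
# Lagrangian-style "mechanical dose" accumulation,
#   HI = ( sum_i (C * sigma_i^alpha)^(1/beta) * dt_i )^beta,
# which reduces to the closed form when sigma is constant.  The exponents
# fall in the commonly cited ranges; fluid properties and geometry are
# illustrative assumptions, not the paper's benchmark settings.

C, alpha, beta = 3.62e-5, 2.416, 0.785      # commonly used power-law constants
mu, U, h = 3.5e-3, 2.0, 1.0e-4              # viscosity [Pa s], wall speed [m/s], gap [m]
sigma = mu * U / h                          # constant Couette shear stress [Pa]
T = 0.1                                     # exposure time [s]

hi_analytical = C * sigma**alpha * T**beta

n_steps = 1000
dt = T / n_steps
dose = np.sum(np.full(n_steps, (C * sigma**alpha) ** (1.0 / beta) * dt))
hi_lagrangian = dose ** beta

print(f"shear stress            : {sigma:.1f} Pa")
print(f"analytical HI           : {hi_analytical:.6e}")
print(f"stepwise (Lagrangian) HI: {hi_lagrangian:.6e}")   # matches the closed form
```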

  7. On the Implementation of a Cloud-Based Computing Test Bench Environment for Prolog Systems

    Directory of Open Access Journals (Sweden)

    Ricardo Gonçalves

    2017-10-01

    Full Text Available Software testing and benchmarking are key components of the software development process. Nowadays, a good practice in large software projects is the continuous integration (CI) software development technique. The key idea of CI is to let developers integrate their work as they produce it, instead of performing the integration at the end of each software module. In this paper, we extend a previous work on a benchmark suite for the YAP Prolog system, and we propose a fully automated test bench environment for Prolog systems, named Yet Another Prolog Test Bench Environment (YAPTBE), aimed to assist developers in the development and CI of Prolog systems. YAPTBE is based on a cloud computing architecture and relies on the Jenkins framework as well as a new Jenkins plugin to manage the underlying infrastructure. We present the key design and implementation aspects of YAPTBE and show its most important features, such as its graphical user interface (GUI) and the automated process that builds and runs Prolog systems and benchmarks.

  8. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew; Cader, Tahir; Fox, Kevin M.; Gustafson, William I.; Mundy, Christopher J.

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
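    For reference, the metric described above reduces to a simple ratio; the weighted-sum form below uses generic placeholder symbols (V_i for the value assigned to task i, U_i for the units of that task completed during the assessment window, and E_DC for the total data-center energy over the same window) rather than the exact notation of the underlying report.

```latex
% Generic shape of the DCeP metric; V_i, U_i, E_DC are placeholder symbols,
% not the report's exact notation.
\mathrm{DCeP}
  = \frac{\text{useful work produced}}{\text{energy consumed producing that work}}
  = \frac{\sum_{i=1}^{M} V_i \, U_i}{E_{\mathrm{DC}}}
```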

  9. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    Science.gov (United States)

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot to evaluate the performance from investigating the ground truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints.

  10. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    Directory of Open Access Journals (Sweden)

    Ju-Chi Liu

    2016-01-01

    Full Text Available A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints.

  11. Approaches to the implementation of the activity approach in teaching computer science students with mobile computing systems

    Directory of Open Access Journals (Sweden)

    Марина Александровна Григорьева

    2011-03-01

    Full Text Available This article examines the need to incorporate the activity approach into teaching computer science, and the creation and application in educational practice of a methodical system based on the use of mobile computing systems.

  12. Leading Technological Change: A Qualitative Study of High School Leadership in the Implementation of One-To-One Computing

    Science.gov (United States)

    Cohen, Maureen McCallion

    2017-01-01

    The purpose of this basic qualitative study was to identify and understand the leadership strategies used by Massachusetts high school administrators during the early implementation (first four years) of one-to-one computing. The study was guided by two research questions: (1) How do high school administrators describe their experience leading the…

  13. An Evaluation of Interactive Computer Training to Teach Instructors to Implement Discrete Trials with Children with Autism

    Science.gov (United States)

    Pollard, Joy S.; Higbee, Thomas S.; Akers, Jessica S.; Brodhead, Matthew T.

    2014-01-01

    Discrete-trial instruction (DTI) is a teaching strategy that is often incorporated into early intensive behavioral interventions for children with autism. Researchers have investigated time- and cost-effective methods to train staff to implement DTI, including self-instruction manuals, video modeling, and interactive computer training (ICT). ICT…

  14. Staff Perspectives on the Use of a Computer-Based Concept for Lifestyle Intervention Implemented in Primary Health Care

    Science.gov (United States)

    Carlfjord, Siw; Johansson, Kjell; Bendtsen, Preben; Nilsen, Per; Andersson, Agneta

    2010-01-01

    Objective: The aim of this study was to evaluate staff experiences of the use of a computer-based concept for lifestyle testing and tailored advice implemented in routine primary health care (PHC). Design: The design of the study was a cross-sectional, retrospective survey. Setting: The study population consisted of staff at nine PHC units in the…

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  16. Implementation of depression screening in antenatal clinics through tablet computers: results of a feasibility study.

    Science.gov (United States)

    Marcano-Belisario, José S; Gupta, Ajay K; O'Donoghue, John; Ramchandani, Paul; Morrison, Cecily; Car, Josip

    2017-05-10

    Mobile devices may facilitate depression screening in the waiting area of antenatal clinics. This can present implementation challenges; of these, we focused on survey layout and technology deployment. We assessed the feasibility of using tablet computers to administer a socio-demographic survey, the Whooley questions and the Edinburgh Postnatal Depression Scale (EPDS) to 530 pregnant women attending National Health Service (NHS) antenatal clinics across England. We randomised participants to one of two layout versions of these surveys: (i) a scrolling layout where each survey was presented on a single screen; or (ii) a paging layout where only one question appeared on the screen at any given time. Overall, 85.10% of eligible pregnant women agreed to take part. Of these, 90.95% completed the study procedures. Approximately 23% of participants answered Yes to at least one Whooley question, and approximately 13% of them scored 10 points or more on the EPDS. We observed no association between survey layout and the responses given to the Whooley questions, the median EPDS scores, the number of participants at increased risk of self-harm, and the number of participants asking for technical assistance. However, we observed a difference in the number of participants at each EPDS scoring interval (p = 0.008), which provides an indication of a woman's risk of depression. A scrolling layout resulted in faster completion times (median = 4 min 46 s) than a paging layout (median = 5 min 33 s) (p = 0.024). However, the clinical significance of this difference (47.5 s) is yet to be determined. Tablet computers can be used for depression screening in the waiting area of antenatal clinics. This requires the careful consideration of clinical workflows, and technology-related issues such as connectivity and security. An association between survey layout and EPDS scoring intervals needs to be explored further to determine if it corresponds to a survey layout effect.

  17. Design, implementation and security of a typical educational laboratory computer network

    Directory of Open Access Journals (Sweden)

    Martin Pokorný

    2013-01-01

    Full Text Available Computer network used for laboratory training and for different types of network and security experiments represents a special environment where hazardous activities take place, which may not affect any production system or network. It is common that students need to have administrator privileges in this case which makes the overall security and maintenance of such a network a difficult task. We present our solution which has proved its usability for more than three years. First of all, four user requirements on the laboratory network are defined (access to educational network devices, to laboratory services, to the Internet, and administrator privileges of the end hosts), and four essential security rules are stipulated (enforceable end host security, controlled network access, level of network access according to the user privilege level, and rules for hazardous experiments), which protect the rest of the laboratory infrastructure as well as the outer university network and the Internet. The main part of the paper is dedicated to a design and implementation of these usability and security rules. We present a physical diagram of a typical laboratory network based on multiple circuits connecting end hosts to different networks, and a layout of rack devices. After that, a topological diagram of the network is described which is based on different VLANs and port-based access control using the IEEE 802.1x/EAP-TLS/RADIUS authentication to achieve defined level of network access. In the second part of the paper, the latest innovation of our network is presented that covers a transition to the system virtualization at the end host devices – inspiration came from a similar solution deployed at the Department of Telecommunications at Brno University of Technology. This improvement enables a greater flexibility in the end hosts maintenance and a simultaneous network access to the educational devices as well as to the Internet. In the end, a vision of a

  18. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    National Research Council Canada - National Science Library

    Pinthong, Watthanai; Muangruen, Panya; Suriyaphol, Prapat; Mairiang, Dumrong

    2016-01-01

    .... However, the HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without a need to purchase and maintain one such as cloud computing services and grid computing system...

  19. Implementations of the CC'01 Human-Computer Interaction Guidelines Using Bloom's Taxonomy

    Science.gov (United States)

    Manaris, Bill; Wainer, Michael; Kirkpatrick, Arthur E.; Stalvey, RoxAnn H.; Shannon, Christine; Leventhal, Laura; Barnes, Julie; Wright, John; Schafer, J. Ben; Sanders, Dean

    2007-01-01

    In today's technology-laden society human-computer interaction (HCI) is an important knowledge area for computer scientists and software engineers. This paper surveys existing approaches to incorporate HCI into computer science (CS) and such related issues as the perceived gap between the interests of the HCI community and the needs of CS…

  20. Computer Games in Pre-School Settings: Didactical Challenges when Commercial Educational Computer Games Are Implemented in Kindergartens

    Science.gov (United States)

    Vangsnes, Vigdis; Gram Okland, Nils Tore; Krumsvik, Rune

    2012-01-01

    This article focuses on the didactical implications when commercial educational computer games are used in Norwegian kindergartens by analysing the dramaturgy and the didactics of one particular game and the game in use in a pedagogical context. Our justification for analysing the game by using dramaturgic theory is that we consider the game to be…

  1. Short-term effects of implemented high intensity shoulder elevation during computer work

    DEFF Research Database (Denmark)

    Larsen, Mette K; Samani, Afshin; Madeleine, Pascal

    2009-01-01

    contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction...... on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. METHODS: 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder....... RESULTS: The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius...

  2. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    Science.gov (United States)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology, the CPU code essentially only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors and less dependency on the underlying architecture and future evolution of the GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC and two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We find that although OpenACC cannot match the performance of an optimized CUDA implementation (about 3.5× slower on average), it provides a significant performance improvement over a CPU implementation (2-6× faster) with far simpler code and less implementation effort.
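
    As a rough illustration of the kind of change OpenACC requires (a minimal sketch under assumed array names and a simplified iterative update, not the authors' code), a single D8 flow-accumulation sweep can be offloaded with one directive; the caller would repeat the sweep until the accumulation values stop changing:

```c
/* Minimal sketch (assumed names and iteration scheme, not the authors' code):
 * one sweep of an iterative D8 flow-accumulation update, offloaded with OpenACC.
 * dir[r*w+c] holds the D8 code (0..7) of the cell's downstream neighbour. */
void d8_accumulation_sweep(int w, int h, const int *dir,
                           const float *acc_in, float *acc_out)
{
    /* D8 neighbour offsets: E, SE, S, SW, W, NW, N, NE. */
    const int DX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
    const int DY[8] = { 0, 1, 1,  1,  0, -1, -1, -1 };

    #pragma acc parallel loop collapse(2) \
        copyin(dir[0:w*h], acc_in[0:w*h], DX[0:8], DY[0:8]) copyout(acc_out[0:w*h])
    for (int r = 0; r < h; ++r) {
        for (int c = 0; c < w; ++c) {
            float acc = 1.0f;                           /* the cell itself */
            for (int k = 0; k < 8; ++k) {               /* inspect the 8 neighbours */
                int nr = r + DY[k], nc = c + DX[k];
                if (nr < 0 || nr >= h || nc < 0 || nc >= w)
                    continue;
                int d = dir[nr * w + nc];               /* neighbour's outflow direction */
                if (nr + DY[d] == r && nc + DX[d] == c) /* neighbour drains into (r,c) */
                    acc += acc_in[nr * w + nc];
            }
            acc_out[r * w + c] = acc;
        }
    }
}
```

    A native CUDA version of the same sweep would additionally need explicit kernel launches, thread-block sizing and device memory management, which illustrates where the extra code length and effort of a native implementation tends to come from.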

  3. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    Science.gov (United States)

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
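
    For orientation only, a software sketch of the general idea (it is not the dual look-up-table hardware described in the paper; the window length n and coefficient count k are assumptions): keeping only the first few DCT-II coefficients of each signal window is what "reduced-resolution DCT" feature extraction amounts to.

```c
/* Minimal sketch: extract the first k DCT-II coefficients of an n-sample
 * window as low-resolution features (unnormalized DCT-II; names assumed). */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

void dct_features(const float *x, int n, float *coef, int k)
{
    for (int u = 0; u < k; ++u) {           /* low-order coefficients only */
        float s = 0.0f;
        for (int i = 0; i < n; ++i)
            s += x[i] * cosf((float)M_PI * (i + 0.5f) * (float)u / (float)n);
        coef[u] = s;
    }
}
```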

  4. The investigation and implementation of real-time face pose and direction estimation on mobile computing devices

    Science.gov (United States)

    Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae

    2012-04-01

    Mobile computing devices have many limitations, such as a relatively small user interface and slow computing speed. Augmented reality usually requires face pose estimation, which can also be used as an HCI and entertainment tool. As far as the real-time implementation of head pose estimation on relatively resource-limited mobile platforms is concerned, different constraints have to be faced while retaining sufficient face pose estimation accuracy. The proposed face pose estimation method met this objective. Experimental results obtained on a test Android mobile device showed satisfactory real-time performance and accuracy.

  5. Implementation of a 3D mixing layer code on parallel computers

    Science.gov (United States)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.

    1995-01-01

    This paper summarizes our progress and experience in the development of a Computational-Fluid-Dynamics code on parallel computers to simulate three-dimensional spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique; we have not yet been able to compile the code with the present version of HPF compilers.

  6. Design, Implementation and Evaluation of Parallel Pipelined STAP on Parallel Computers

    Science.gov (United States)

    1998-04-01

    parallel computers. In particular, the paper describes the issues involved in parallelization, our approach to parallelization and performance results...on an Intel Paragon. The paper also discusses the process of developing software for such an application on parallel computers when latency and

  7. Successful Implementation of a Computer-Supported Collaborative Learning System in Teaching E-Commerce

    Science.gov (United States)

    Ngai, E. W. T.; Lam, S. S.; Poon, J. K. L.

    2013-01-01

    This paper describes the successful application of a computer-supported collaborative learning system in teaching e-commerce. The authors created a teaching and learning environment for 39 local secondary schools to introduce e-commerce using a computer-supported collaborative learning system. This system is designed to equip students with…

  8. Defragging Computer/Videogame Implementation and Assessment in the Social Studies

    Science.gov (United States)

    McBride, Holly

    2014-01-01

    Students in this post-industrial technological age require opportunities for the acquisition of new skills, especially in the marketplace of innovation. A pedagogical strategy that is becoming more and more popular within social studies classrooms is the use of computer and video games as enhancements to everyday lesson plans. Computer/video games…

  9. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Energy Technology Data Exchange (ETDEWEB)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
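
    The central idea — a one-dimensional FFT pass, an all-to-all exchange that re-distributes the data along the next dimension, then the next FFT pass — can be sketched with a plain MPI collective (an illustration under assumed buffer layout, not the patented method; the randomized-order exchange and the packing details are omitted):

```c
/* Minimal sketch: the inter-node exchange between the two 1-D FFT passes of a
 * distributed 2-D FFT, expressed as an MPI all-to-all. sendbuf is assumed to be
 * already packed so that the j-th block of n_local*n_local elements is destined
 * for rank j; a local unpack/transpose before the column FFTs is omitted. */
#include <mpi.h>
#include <complex.h>

void fft_exchange(double complex *sendbuf, double complex *recvbuf,
                  int N, MPI_Comm comm)
{
    int P;
    MPI_Comm_size(comm, &P);
    int n_local = N / P;                 /* rows (then columns) owned per rank */
    int block   = n_local * n_local;     /* elements exchanged per rank pair */

    /* Row-FFT results move to the ranks that own the corresponding columns. */
    MPI_Alltoall(sendbuf, block, MPI_C_DOUBLE_COMPLEX,
                 recvbuf, block, MPI_C_DOUBLE_COMPLEX, comm);
}
```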

  10. Implementation of a blade element UH-60 helicopter simulation on a parallel computer architecture in real-time

    Science.gov (United States)

    Moxon, Bruce C.; Green, John A.

    1990-01-01

    A high-performance platform for development of real-time helicopter flight simulations based on a simulation development and analysis platform combining a parallel simulation development and analysis environment with a scalable multiprocessor computer system is described. Simulation functional decomposition is covered, including the sequencing and data dependency of simulation modules and simulation functional mapping to multiple processors. The multiprocessor-based implementation of a blade-element simulation of the UH-60 helicopter is presented, and a prototype developed for a TC2000 computer is generalized in order to arrive at a portable multiprocessor software architecture. It is pointed out that the proposed approach coupled with a pilot's station creates a setting in which simulation engineers, computer scientists, and pilots can work together in the design and evaluation of advanced real-time helicopter simulations.

  11. The Needs of Virtual Machines Implementation in Private Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Edy Kristianto

    2015-12-01

    Full Text Available The Internet of Things (IOT) has become a goal of the development of information and communication technology. Cloud computing has a very important role in supporting the IOT, because cloud computing makes it possible to provide services in the form of infrastructure (IaaS), platform (PaaS), and software (SaaS) for its users. One of the fundamental services is infrastructure as a service (IaaS). This study analyzed the requirements, based on the NIST framework, that must be met to realize infrastructure as a service in the form of virtual machines built in a private cloud computing environment.

  12. TOWARDS IMPLEMENTATION OF THE FOG COMPUTING CONCEPT INTO THE GEOSPATIAL DATA INFRASTRUCTURES

    OpenAIRE

    E. A. Panidi

    2016-01-01

    The information technologies, and Global Network technologies in particular, are developing very quickly. Accordingly, the problem of incorporating general-purpose technologies into information systems that operate with geospatial data remains topical. The paper discusses the implementation feasibility for a number of new approaches and concepts that solve the problems of spatial data publishing and management on the Global Network. A brief review describes...

  13. CLaSS Computer Literacy Software: From Design to Implementation - A Three Year Student Evaluation

    Directory of Open Access Journals (Sweden)

    Ian Cole

    2006-12-01

    Full Text Available Both computer literacy and information retrieval techniques are required to undertake studies in higher education in the United Kingdom. This paper considers the research, development and the 3-year student evaluation of a piece of learning technology in computer and information literacy (CLaSS) software. Students completed a questionnaire to examine their own assessment of knowledge and competence in computer and information literacy, and based on this assessment the CLaSS software was created to assist nursing students with computer and information literacy. This paper draws on existing literature and applies a specific learning model to the software while considering software engineering and user-centered design methodologies. The technical processes involved in designing and creating the software are briefly considered with software development data analysis discussed. A 3-year student evaluation of the software after its release was undertaken to consider the long-term validity and usefulness of this software, with the results analysed and discussed.

  14. COMPARING SEARCHING AND SORTING ALGORITHMS EFFICIENCY IN IMPLEMENTING COMPUTATIONAL EXPERIMENT IN PROGRAMMING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    R. Sagan

    2011-11-01

    Full Text Available This article considers the different aspects that allow one to judge whether a sorting algorithm has been chosen correctly. It also compares several searching and sorting algorithms needed for computational experiments on a certain class of programs.
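
    A computational experiment of the kind described usually boils down to timing competing routines on identical inputs. Below is a minimal sketch (illustrative only; the article's own experimental setup is not specified here) comparing the C library's qsort with a quadratic insertion sort:

```c
/* Minimal sketch: time two sorting routines on the same random data. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; ++i) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
    }
}

static void qsort_wrap(int *a, size_t n) { qsort(a, n, sizeof *a, cmp_int); }

static double time_sort(void (*sort)(int *, size_t), const int *src, int *buf, size_t n)
{
    memcpy(buf, src, n * sizeof *buf);          /* identical input for each routine */
    clock_t t0 = clock();
    sort(buf, n);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    const size_t n = 20000;
    int *src = malloc(n * sizeof *src), *buf = malloc(n * sizeof *buf);
    srand(42);
    for (size_t i = 0; i < n; ++i) src[i] = rand();
    printf("qsort:          %.4f s\n", time_sort(qsort_wrap, src, buf, n));
    printf("insertion sort: %.4f s\n", time_sort(insertion_sort, src, buf, n));
    free(src); free(buf);
    return 0;
}
```

    On random data of this size the library sort should win clearly, while nearly sorted inputs can reverse the outcome — exactly the kind of effect such experiments are meant to expose.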

  15. Study protocol: implementation of a computer-assisted intervention for autism in schools: a hybrid type II cluster randomized effectiveness-implementation trial

    Directory of Open Access Journals (Sweden)

    Melanie Pellecchia

    2016-11-01

    Full Text Available Abstract Background The number of children diagnosed with autism has rapidly outpaced the capacities of many public school systems to serve them, especially under-resourced, urban school districts. The intensive nature of evidence-based autism interventions, which rely heavily on one-to-one delivery, has caused schools to turn to computer-assisted interventions (CAI). There is little evidence regarding the feasibility, effectiveness, and implementation of CAI in public schools. While CAI has the potential to increase instructional time for students with autism, it may also result in unintended consequences such as reduction in the amount of interpersonal (as opposed to computerized) instruction students receive. The purpose of this study is to test the effectiveness of one such CAI—TeachTown—its implementation, and its effects on teachers’ use of other evidence-based practices. Methods This study protocol describes a type II hybrid cluster randomized effectiveness-implementation trial. We will train and coach 70 teachers in autism support classrooms in one large school district in the use of evidence-based practices for students with autism. Half of the teachers then will be randomly selected to receive training and access to TeachTown: Basics, a CAI for students with autism, for the students in their classrooms. The study examines: (1) the effectiveness of TeachTown for students with autism; (2) the extent to which teachers implement TeachTown the way it was designed (i.e., fidelity); and (3) whether its uptake increases or reduces the use of other evidence-based practices. Discussion This study will examine the implementation of new technology for children with ASD in public schools and will be the first to measure the effectiveness of CAI. As importantly, the study will investigate whether adding a new technology on top of existing practices increases or decreases their use. This study presents a unique method to studying both the

  16. Implementation of the Distributed Parallel Program for Geoid Heights Computation Using MPI and Openmp

    Science.gov (United States)

    Lee, S.; Kim, J.; Jung, Y.; Choi, J.; Choi, C.

    2012-07-01

    Much research has been carried out using optimization algorithms to develop high-performance programs in parallel computing environments, following the evolution of computer hardware technology such as dual-core processors. However, there are still few studies applying parallel computing in the geodesy and surveying fields. The present study aims to reduce the running time of geoid height computation, and of the least-squares collocation carried out to improve its accuracy, by using distributed parallel technology. A distributed parallel program was developed in which a multi-core CPU-based PC cluster was adopted using the MPI and OpenMP libraries. Geoid heights were calculated by spherical harmonic analysis using the earth geopotential model of the National Geospatial-Intelligence Agency (2008). The geoid heights around the Korean Peninsula were calculated and tested in a diskless-based PC cluster environment. As a result, for computing geoid heights from an earth geopotential model, the distributed parallel program was confirmed to be more effective in reducing the computational time compared to the sequential program.
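
    A minimal sketch of the hybrid distribution pattern (assumed function and array names, not the authors' code): MPI ranks divide the latitude rows of the output grid, while OpenMP threads share the longitude loop within each rank.

```c
/* Minimal sketch: hybrid MPI+OpenMP evaluation of a gridded quantity such as
 * geoid heights. geoid_height() stands in for the spherical-harmonic evaluator
 * (a hypothetical routine, not part of any particular library). */
#include <mpi.h>
#include <omp.h>

extern double geoid_height(double lat_deg, double lon_deg);   /* assumed routine */

/* out is the full nlat*nlon grid; each rank fills only the rows it owns. */
void compute_grid(int nlat, int nlon, double *out)
{
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = rank; i < nlat; i += size) {          /* round-robin latitude rows */
        double lat = -90.0 + 180.0 * i / (nlat - 1);
        #pragma omp parallel for
        for (int j = 0; j < nlon; ++j) {
            double lon = -180.0 + 360.0 * j / nlon;
            out[i * nlon + j] = geoid_height(lat, lon);
        }
    }
    /* Rows would then be gathered onto one rank, e.g. with MPI_Gatherv (omitted). */
}
```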

  17. On the implementation of the Ford-Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Full Text Available Algorithms for optimization in networks and directed graphs find broad application in solving practical tasks. However, with the large-scale introduction of information technologies into human activity, the requirements on input data volumes and on the speed of obtaining solutions are becoming more demanding. Although a large number of algorithms have by now been studied and implemented for various models of computers and computing systems, solving the key optimization problems at realistic task dimensions remains difficult. In this regard, the search for new and more efficient computing structures, as well as the updating of known algorithms, is of great current interest. The work considers an implementation of the maximum-flow search algorithm on a directed graph for the Multiple Instruction, Single Data (MISD) computer system developed at BMSTU. A key feature of this architecture is deep hardware support for operations over sets and data structures. Storage of and access to them are realized on the specialized structure-processing processor (SP), which is capable of performing at the hardware level such operations as add, delete, search, intersect, complement, merge, and others. The advantage of such a system is the possibility of executing the parts of computing tasks concerned with access to sets and data structures in parallel with the arithmetic and logical processing of information. Previous works present the general principles of organizing the computing process and the features of programs implemented in the MISD system, describe the structure and principles of functioning of the structure-processing processor, show the general principles of solving graph tasks in such a system, and experimentally study the efficiency of the resulting algorithms. The present work gives the command formats of the SP processor, offers a technique for updating the algorithms realized in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
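
    For comparison with the MISD approach described above, a conventional single-processor reference of the Ford-Fulkerson method (here its breadth-first, Edmonds-Karp variant, written against a plain capacity matrix; a sketch, not the paper's implementation):

```c
/* Minimal sketch: Edmonds-Karp max flow on an adjacency (capacity) matrix.
 * Assumes 0 <= s, t < n, s != t, and n <= MAXN. The matrix is modified in
 * place to hold residual capacities. */
#include <string.h>
#include <limits.h>

#define MAXN 64

int max_flow(int n, int cap[MAXN][MAXN], int s, int t)
{
    int flow = 0;
    int parent[MAXN];

    for (;;) {
        /* BFS for an augmenting path in the residual graph. */
        int queue[MAXN], head = 0, tail = 0;
        memset(parent, -1, sizeof parent);
        parent[s] = s;
        queue[tail++] = s;
        while (head < tail && parent[t] == -1) {
            int u = queue[head++];
            for (int v = 0; v < n; ++v)
                if (parent[v] == -1 && cap[u][v] > 0) {
                    parent[v] = u;
                    queue[tail++] = v;
                }
        }
        if (parent[t] == -1)             /* no augmenting path left */
            return flow;

        /* Bottleneck capacity along the path found. */
        int bottleneck = INT_MAX;
        for (int v = t; v != s; v = parent[v]) {
            int u = parent[v];
            if (cap[u][v] < bottleneck) bottleneck = cap[u][v];
        }
        /* Augment: update residual capacities in both directions. */
        for (int v = t; v != s; v = parent[v]) {
            int u = parent[v];
            cap[u][v] -= bottleneck;
            cap[v][u] += bottleneck;
        }
        flow += bottleneck;
    }
}
```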

  18. The Implementation of Computer Platform for Foundries Cooperating in a Supply Chain

    Directory of Open Access Journals (Sweden)

    Wilk-Kołodziejczyk D.

    2014-08-01

    Full Text Available This article presents a practical solution in the form of the implementation of an agent-based platform for the management of contracts in a network of foundries. The described implementation is a continuation of earlier scientific work in the field of design and theoretical system specification for cooperating companies [1]. The implementation addresses key design assumptions - the system is implemented using multi-agent technology, which offers the possibility of decentralisation and distributed processing of specified contracts and tenders. The implemented system enables the joint management of orders for a network of small and medium-sized metallurgical plants, while providing them with greater competitiveness and the ability to carry out large procurements. The article presents the functional aspects of the system - the user interface and the principle of operation of individual agents that represent businesses seeking potential suppliers or recipients of services and products. Additionally, the system is equipped with a bi-directional agent translating standards based on ontologies, which aims to automate the decision-making process during tender specification as a response to the request.

  19. Quantum computation with classical light: Implementation of the Deutsch–Jozsa algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Garcia, Benjamin [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); McLaren, Melanie [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Goyal, Sandeep K. [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); Institute of Quantum Science and Technology, University of Calgary, Alberta T2N 1N4 (Canada); Hernandez-Aranda, Raul I. [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); Forbes, Andrew [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Konrad, Thomas, E-mail: konradt@ukzn.ac.za [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); National Institute of Theoretical Physics, Durban Node, Private Bag X54001, Durban 4000 (South Africa)

    2016-05-20

    Highlights: • An implementation of the Deutsch–Jozsa algorithm using classical optics is proposed. • Constant and certain balanced functions can be encoded and distinguished efficiently. • The encoding and the detection process does not require to access single path qubits. • While the scheme might be scalable in principle, it might not be in practice. • We suggest a generalisation of the Deutsch–Jozsa algorithm and its implementation. - Abstract: We propose an optical implementation of the Deutsch–Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.
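
    For reference, the standard gate-model statement of the algorithm that the optical scheme emulates (textbook form, not the ring-cavity encoding described in the paper): with an n-qubit query register and an ancilla prepared in the state |1>,

```latex
\begin{align*}
|0\rangle^{\otimes n}|1\rangle
 &\;\xrightarrow{H^{\otimes(n+1)}}\;
 \frac{1}{\sqrt{2^{n}}}\sum_{x}|x\rangle\,\frac{|0\rangle-|1\rangle}{\sqrt{2}}
 \;\xrightarrow{U_f}\;
 \frac{1}{\sqrt{2^{n}}}\sum_{x}(-1)^{f(x)}|x\rangle\,\frac{|0\rangle-|1\rangle}{\sqrt{2}}\\
 &\;\xrightarrow{H^{\otimes n}}\;
 \sum_{z}\Bigl[\frac{1}{2^{n}}\sum_{x}(-1)^{f(x)+x\cdot z}\Bigr]|z\rangle\,\frac{|0\rangle-|1\rangle}{\sqrt{2}}
\end{align*}
```

    Measuring the query register then yields the all-zero string with probability 1 when f is constant and with probability 0 when f is balanced, so a single oracle query decides between the two cases.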

  20. A grid-enabled lightweight computational steering client: a .NET PDA implementation.

    Science.gov (United States)

    Kalawsky, R S; Nee, S P; Holmes, I; Coveney, P V

    2005-08-15

    The grid has been developed to support large-scale computer simulations in a diverse range of scientific and engineering fields. Consequently, the increasing availability of powerful distributed computing resources is changing how scientists undertake large-scale modelling/simulation. Instead of being limited to local computing resources, scientists are now able to make use of supercomputing facilities around the world. These grid resources range from specialized distributed three-dimensional visualization environments through to massive computational systems. The scientist usually accesses these resources from reasonably high-end desktop computers. Even though most modern desktop computers are provided with reasonably powerful three-dimensional graphical hardware, not all scientific applications require high-end three-dimensional visualization because the data of interest is essentially numerical or two-dimensional graphical data. For these applications, a much simpler two-dimensional graphical display can be used. Since large jobs can take many hours to complete, the scientist needs access to a technology that will allow them to still monitor and control their job while away from their desks. This paper describes an effective method of monitoring and controlling a set of chained computer simulations by means of a lightweight steering client based on a small personal digital assistant (PDA). The concept of using a PDA to steer a series of computational jobs across a supercomputing resource may seem strange at first, but when scientists realize they can use these devices to connect to their computation wherever there is a wireless network (or cellular phone network), the concept becomes very compelling. Apart from providing a much needed easy-to-use interface, the PDA-based steering client has the benefit of freeing the scientist from the desktop. It is during this monitoring stage that the hand-held PDA client is of particular value as it gives the application

  1. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  2. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    Science.gov (United States)

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculation components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in a severalfold speedup. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.

  3. The design, marketing, and implementation of online continuing education about computers and nursing informatics.

    Science.gov (United States)

    Sweeney, Nancy M; Saarmann, Lembi; Seidman, Robert; Flagg, Joan

    2006-01-01

    Asynchronous online tutorials using PowerPoint slides with accompanying audio to teach practicing nurses about computers and nursing informatics were designed for this project, which awarded free continuing education units to completers. Participants had control over the advancement of slides, with the ability to repeat when desired. Graphics were kept to a minimum; thus, the program ran smoothly on computers using dial-up modems. The tutorials were marketed in live meetings and through e-mail messages on nursing listservs. Findings include that the enrollment process must be automated and instantaneous, the program must work from every type of computer and Internet connection, marketing should be live and electronic, and workshops should be offered to familiarize nurses with the online learning system.

  4. The Geospatial Data Cloud: An Implementation of Applying Cloud Computing in Geosciences

    Directory of Open Access Journals (Sweden)

    Xuezhi Wang

    2014-11-01

    Full Text Available The rapid growth in the volume of remote sensing data and its increasing computational requirements bring huge challenges for researchers as traditional systems cannot adequately satisfy the huge demand for service. Cloud computing has the advantage of high scalability and reliability, which can provide firm technical support. This paper proposes a highly scalable geospatial cloud platform named the Geospatial Data Cloud, which is constructed based on cloud computing. The architecture of the platform is first introduced, and then two subsystems, the cloud-based data management platform and the cloud-based data processing platform, are described.  ––– This paper was presented at the First Scientific Data Conference on Scientific Research, Big Data, and Data Science, organized by CODATA-China and held in Beijing on 24-25 February, 2014.

  5. The Implementation of Blended Learning Using Android-Based Tutorial Video in Computer Programming Course II

    Science.gov (United States)

    Huda, C.; Hudha, M. N.; Ain, N.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

    The computer programming course is theoretical. Sufficient practice is necessary to facilitate conceptual understanding and to encourage creativity in designing computer programs/animations. The development of a tutorial video for Android-based blended learning is needed to guide students. Using Android-based instructional material, students can independently learn anywhere and anytime. The tutorial video can facilitate students’ understanding of the concepts, materials, and procedures of programming/animation making in detail. This study employed a Research and Development method adapting Thiagarajan’s 4D model. The developed Android-based instructional material and tutorial video were validated by experts in instructional media and experts in physics education. The expert validation results showed that the Android-based material was comprehensive and very feasible. The tutorial video was deemed feasible as it received an average score of 92.9%. It was also revealed that students’ conceptual understanding, skills, and creativity in designing computer programs/animations improved significantly.

  6. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  7. Implementing cognitive learning strategies in computer-based educational technology: a proposed system.

    Science.gov (United States)

    Wang, M. J.; Contino, P. B.; Ramirez, E. S.

    1997-01-01

    Switching the development focus of computer-based instruction from the concerns of delivery technology to the fundamentals of instructional methodology is a notion that has received increased attention among educational theorists and instructional designers over the last several years. Building upon this precept, a proposed methodology and computer support system is presented for distilling educational objectives into concept maps using strategies derived from cognitive theory. Our system design allows for a flexible and extensible architecture in which an educator can create instructional modules that encapsulate their teaching strategies, and mimics the adaptive behavior used by experienced instructors in teaching complex educational objectives. PMID:9357716

  8. The European Patent Office and its handling of Computer Implemented Inventions

    CERN Multimedia

    CERN. Geneva; Weber, Georg

    2014-01-01

    Georg Weber joined the EPO in 1988 and has been a director for more than 10 years. He started his career in the office as a patent examiner and worked in different technical areas of chemistry and mechanics. Birger Koblitz is a patent examiner at the EPO in Munich in the technical field of computer security. Before joining the office in 2009, he earned a PhD in Experimental Particle Physics from the University of Hamburg, and worked at CERN in the IT department supporting the experiments in their Grid Computing activities...

  9. Implementation of an emergency department computer system: design features that users value.

    Science.gov (United States)

    Batley, Nicholas J; Osman, Hibah O; Kazzi, Amin A; Musallam, Khaled M

    2011-12-01

    Electronic medical records (EMRs) can potentially improve the efficiency and effectiveness of patient care, especially in the emergency department (ED) setting. Multiple barriers to implementation of EMR have been described. One important barrier is physician resistance. The "ED Dashboard" is an EMR developed in a busy tertiary care hospital ED. Its implementation was exceptionally smooth and successful. We set out to examine the design features used in the development of the system and assess which of these features played an important role in the successful implementation of the ED Dashboard. An anonymous survey of users of the ED Dashboard was conducted in January and February 2009 to evaluate their perceptions of the degree of success of the implementation and the importance of the design features used in that success. Results were analyzed using SPSS software (SPSS Inc., Chicago, IL). Of the 188 end-users approached, 175 (93%) completed the survey. Despite minimal training in the use of the system, 163 (93%) perceived the system as easy or extremely easy to use. Users agreed that the design features employed were important contributors to the system's success. Being alerted when new test results were ready, the use of "most common" lists, and the use of color were features that were considered valuable to users. Success of a medical information system in a busy ED is, in part, dependent on careful attention to subtle details of system design. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Implementing a New Cloud Computing Library Management Service: A Symbiotic Approach

    Science.gov (United States)

    Dula, Michael; Jacobsen, Lynne; Ferguson, Tyler; Ross, Rob

    2012-01-01

    This article presents the story of how Pepperdine University migrated its library management functions to the cloud using what is now known as OCLC's WorldShare Management Services (WMS). The story of implementing this new service is told from two vantage points: (1) that of the library; and (2) that of the service provider. The authors were the…

  11. Toward Implementing Computer-Assisted Foreign Language Assessment in the Official Spanish University Entrance Examination

    Science.gov (United States)

    Sanz, Ana Gimeno; Pavón, Ana Sevilla

    2015-01-01

    In 2008 the Spanish Government announced the inclusion of an oral section in the foreign language exam of the National University Entrance Examination during the year 2012 (Royal Decree 1892/2008, of 14 November 2008, Ministerio de Educación, Gobierno de España, 2008). Still awaiting the implementation of these changes, and in an attempt to offer…

  12. Implementing a low-latency parallel graphic equalizer with heterogeneous computing

    NARCIS (Netherlands)

    Norilo, Vesa; Verstraelen, Martinus Johannes Wilhelmina; Valimaki, Vesa; Svensson, Peter; Kristiansen, Ulf

    2015-01-01

    This paper describes the implementation of a recently introduced parallel graphic equalizer (PGE) in a heterogeneous way. The control and audio signal processing parts of the PGE are distributed to a PC and to a signal processor, of WaveCore architecture, respectively. This arrangement is

  13. Design Considerations for Implementing a Shipboard Computer Supported Command Management System

    Science.gov (United States)

    1976-06-01

    considerations that must also be taken into account when selecting a system. Reference 1 provides a comprehensive checklist for utilization in system...In "Implementing a Data Processing System On a Minicomputer," Master's Thesis, Wharton School of Finance and Commerce, 1974. 16. Sperry Univac, Use of

  14. Research and realization implementation of monitor technology on illegal external link of classified computer

    Science.gov (United States)

    Zhang, Hong

    2017-06-01

    In recent years, with the continuous development and application of network technology, network security has gradually entered people's field of vision. Illegal external connections from hosts on an internal network are an important source of threats to network security. At present, most organizations pay a certain degree of attention to network security and have adopted many means and methods to prevent network security problems, such as physically isolating the internal network and installing a firewall at the exit [1]. However, these measures are often undermined by human behavior that does not comply with the safety rules. For example, a host that uses wireless Internet access, or a dual network card to access the Internet, inadvertently forms a two-way connection between the external network and the internal computer. As a result, important documents and confidential information may leak even while the user is completely unaware. Monitoring technology for illegal external links of classified computers can largely prevent such violations by monitoring the behavior of the offending connection. In this paper, we mainly research and discuss this technology for monitoring classified computers.
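
    One very simple ingredient of such monitoring can be illustrated in code (a hypothetical, simplified check, not the system studied in the paper): flagging a host that has more than one active non-loopback network interface, which is the situation created by the wireless/dual-NIC scenarios mentioned above.

```c
/* Minimal sketch (POSIX/Linux): count active non-loopback IPv4 interfaces and
 * warn if there is more than one, as a crude hint of a simultaneous
 * internal/external connection. */
#include <stdio.h>
#include <ifaddrs.h>
#include <net/if.h>
#include <sys/socket.h>

int main(void)
{
    struct ifaddrs *ifs, *p;
    int active = 0;

    if (getifaddrs(&ifs) != 0) {
        perror("getifaddrs");
        return 1;
    }
    for (p = ifs; p != NULL; p = p->ifa_next) {
        if (p->ifa_addr == NULL || p->ifa_addr->sa_family != AF_INET)
            continue;                                    /* IPv4 addresses only */
        if ((p->ifa_flags & IFF_UP) && !(p->ifa_flags & IFF_LOOPBACK)) {
            printf("active interface: %s\n", p->ifa_name);
            ++active;
        }
    }
    freeifaddrs(ifs);

    if (active > 1)
        printf("warning: %d active interfaces -- possible illegal external link\n", active);
    return 0;
}
```

    A real monitoring system would of course combine such host-side evidence with network-side checks, but the sketch shows the kind of signal being watched for.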

  15. Using Computer-Assisted Instruction to Build Math Fact Fluency: An Implementation Guide

    Science.gov (United States)

    Hawkins, Renee O.; Collins, Tai; Hernan, Colleen; Flowers, Emily

    2017-01-01

    Research findings support the use of computer-assisted instruction (CAI) as a curriculum supplement for improving math skills, including math fact fluency. There are a number of websites and mobile applications (i.e., apps) designed to build students' math fact fluency, but the options can become overwhelming. This article provides implementation…

  16. Design and implementation of an integrated computer working environment for doing mathematics and science

    NARCIS (Netherlands)

    Heck, A.; Kedzierska, E.; Ellermeijer, T.

    2009-01-01

    In this paper we report on the sustained research and development work at the AMSTEL Institute of the University of Amsterdam to improve mathematics and science education at primary and secondary school level, which has led, amongst other things, to the development of the integrated computer working

  17. Computer Manipulatives in an Ordinary Differential Equations Course: Development, Implementation, and Assessment

    Science.gov (United States)

    Miller, Haynes R.; Upton, Deborah S.

    2008-01-01

    The d'Arbeloff Interactive Mathematics Project or d'AIMP is an initiative that seeks to enhance and ultimately transform the teaching and learning of introductory mathematics at the Massachusetts Institute of Technology. A result of this project is a suite of "mathlets," a carefully developed set of dynamic computer applets for use in the…

  18. Design and implementation of the modified signed digit multiplication routine on a ternary optical computer.

    Science.gov (United States)

    Xu, Qun; Wang, Xianchao; Xu, Chao

    2017-06-01

    Multiplication on traditional electronic computers suffers from limited calculating accuracy and long computation delays. To overcome these problems, the modified signed digit (MSD) multiplication routine is established based on the MSD system and the carry-free adder. Also, its parallel algorithm and optimization techniques are studied in detail. With the help of a ternary optical computer's characteristics, the structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with the expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also offers lower power consumption and shorter calculation delays.
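
    For background (a standard description of the MSD number system, stated here as an assumption rather than taken from the paper): an MSD operand is a redundant binary representation,

```latex
X \;=\; \sum_{i} d_i\,2^{i}, \qquad d_i \in \{\bar{1},\,0,\,1\},
```

    so, for example, 5 can be written either as 101 or as 8 − 2 − 1 with digit string 1, 0, −1, −1. This redundancy limits carry propagation, which is what makes a carry-free adder, and hence the parallel M transformations and summations mentioned above, possible.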

  19. Design and Implementation of an Integrated Computer Working Environment for Doing Mathematics and Science

    Science.gov (United States)

    Heck, Andre; Kedzierska, Ewa; Ellermeijer, Ton

    2009-01-01

    In this paper we report on the sustained research and development work at the AMSTEL Institute of the University of Amsterdam to improve mathematics and science education at primary and secondary school level, which has led, amongst other things, to the development of the integrated computer working environment Coach 6. This environment consists of…

  20. A Framework and Implementation of User Interface and Human-Computer Interaction Instruction

    Science.gov (United States)

    Peslak, Alan

    2005-01-01

    Researchers have suggested that up to 50 % of the effort in development of information systems is devoted to user interface development (Douglas, Tremaine, Leventhal, Wills, & Manaris, 2002; Myers & Rosson, 1992). Yet little study has been performed on the inclusion of important interface and human-computer interaction topics into a current…

  1. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Phillips, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wampler, Cheryl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meisner, Robert [National Nuclear Security Administration (NNSA), Washington, DC (United States)

    2010-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality, and scientific details); to quantify critical margins and uncertainties; and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  2. Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-09-11

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC’s business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive

  3. Improving the accessibility at home: implementation of a domotic application using a p300-based brain computer interface system

    Directory of Open Access Journals (Sweden)

    Rebeca Corralejo Palacios

    2012-05-01

    Full Text Available The aim of this study was to develop a Brain Computer Interface (BCI) application to control domotic devices usually present at home. Previous studies have shown that people with severe disabilities, both physical and cognitive, do not achieve high accuracy using motor imagery-based BCIs. To overcome this limitation, we propose the implementation of a BCI application using P300 evoked potentials, because this kind of BCI requires neither extensive training nor an extremely high level of concentration. The implemented BCI application allows the user to control several devices, such as a TV, DVD player, mini Hi-Fi system, multimedia hard drive, telephone, heater, fan and lights. Our aim is that potential users, i.e. people with severe disabilities, are able to achieve high accuracy. Therefore, this domotic BCI application is useful to increase

  4. Implementation of Constrained DFT for Computing Charge Transfer Rates within the Projector Augmented Wave Method

    DEFF Research Database (Denmark)

    Melander, Marko; Jónsson, Elvar Örn; Mortensen, Jens Jørgen

    2016-01-01

    frozen-core electron description across the whole periodic table, with good transferability, as well as facilitate the extraction of all-electron quantities. The present implementation is applicable to two different wave function representations, atomic-centered basis sets (LCAO) and the finite...... of Marcus theory. Here, the combined method is applied to important test cases where practical implementations of DFT fail due to the self-interaction error, such as the dissociation of the helium dimer cation, and it is compared to other established cDFT codes. Moreover, for charge localization...... in a diamine cation, where it was recently shown that the commonly used generalized gradient and hybrid functionals of DFT failed to produce the localized state, cDFT produces qualitatively and quantitatively accurate results when benchmarked against self-interaction corrected DFT and high-level CCSD...

  5. Research and implementation of PC data synchronous backup based on cloud computing

    Directory of Open Access Journals (Sweden)

    JIANG Lan

    2013-02-01

    Full Text Available A kind of anti-saturation digital PI regulator is designed and implemented based on a DSP. This PI regulator was applied to the design of the voltage and current double-loop control of a buck converter, and related experimental research was carried out on a 5.5 kW prototype. Experimental results show that the converter has good static and dynamic performance and verify the validity of the PI regulator design.

  6. Computer-Managed Instruction: Theory, Application, and Some Key Implementation Issues.

    Science.gov (United States)

    1984-03-01

    Master's thesis by Michael Korbak, Jr., Naval Postgraduate School, March 1984; thesis advisor: Norman R. Lyons. Approved for public release; distribution unlimited.

  7. A Feasibility Study of Implementing a Bring-Your-Own-Computing-Device Policy

    Science.gov (United States)

    2013-12-01

    needs to be thoroughly understood; otherwise, implementation of a BYOD strategy could spell disaster. If one does not understand software, the types... However, to meet the demands of supporting faculty and students while maintaining academic standards, distance learning programs require constant innovation and... The hardware described includes a CPU @ 3.2 GHz, 64 GB of installed RAM, a 64-bit Windows Server 2008 R2 Standard operating system, a Samsung SSD 830 (512 GB) C: drive, and a D: drive...

  8. Automated Verification for Secure Messaging Protocols and Their Implementations: A Symbolic and Computational Approach

    OpenAIRE

    Kobeissi, Nadim; Bhargavan, Karthikeyan; Blanchet, Bruno

    2017-01-01

    International audience; Many popular web applications incorporate end-to-end secure messaging protocols, which seek to ensure that messages sent between users are kept confidential and authenticated, even if the web application's servers are broken into or otherwise compelled into releasing all their data. Protocols that promise such strong security guarantees should be held up to rigorous analysis, since protocol flaws and implementation bugs can easily lead to real-world attacks. We propo...

  9. Implementation of the Lucas-Kanade image registration algorithm on a GPU for 3D computational platform stabilisation

    CSIR Research Space (South Africa)

    Duvenhage, B

    2010-06-01

    Full Text Available the wide-angle image to remove the undulatory motion of the platform. Removing the effects of the platform motion in this way computationally stabilises the surveillance system for effective foreground/background separation and tracking... and parallel implementation that: Can flexibly balance the load between the CPU and GPU to optimally make use of the available resources, and would be efficient enough to enable 3D computational platform stabilisation in real-time. Execution at 20...
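
    The record above describes a custom GPU implementation; as a rough CPU-side illustration of the same idea (Lucas-Kanade feature tracking used to estimate and cancel platform motion), the following sketch uses OpenCV's pyramidal Lucas-Kanade tracker. The file names and parameter values are placeholders, not taken from the paper.

```python
# Minimal CPU sketch of Lucas-Kanade-based frame registration for stabilisation.
# Illustrative only: the paper describes a custom GPU implementation, whereas this
# uses OpenCV's pyramidal Lucas-Kanade tracker. File names are placeholders.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # previous wide-angle frame
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # current frame to stabilise

# Select trackable corner features in the previous frame.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=400, qualityLevel=0.01, minDistance=8)

# Track the features into the current frame with pyramidal Lucas-Kanade.
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# Estimate a rigid-like (rotation + translation + scale) motion model from the matches
# and warp the current frame back onto the previous one to cancel platform motion.
M, _ = cv2.estimateAffinePartial2D(good_curr, good_prev, method=cv2.RANSAC)
stabilised = cv2.warpAffine(curr, M, (curr.shape[1], curr.shape[0]))
cv2.imwrite("frame_001_stabilised.png", stabilised)
```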

  10. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hendrickson, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  11. On a concept of computer game implementation based on a temporal logic

    Science.gov (United States)

    Szymańska, Emilia; Adamek, Marek J.; Mulawka, Jan J.

    2017-08-01

    Time is a concept which underlies all of contemporary civilization. It was therefore necessary to create mathematical tools that allow complex time dependencies to be described in a precise way. One such tool is temporal logic. Its definition, description and characteristics are presented in this publication. The authors then discuss the usefulness of this tool in the context of creating storylines in computer games, such as those of the RPG genre.
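
    The record does not include code; as a purely illustrative aside, the sketch below evaluates three basic temporal-logic operators over a finite trace of game states, which is one simple way temporal conditions could gate storyline events. The RPG example states are invented.

```python
# Illustrative sketch (not from the paper): evaluating basic temporal-logic
# operators over a finite trace of game states, e.g. to gate storyline events.

def eventually(trace, p):
    """F p: p holds in some state of the trace."""
    return any(p(s) for s in trace)

def always(trace, p):
    """G p: p holds in every state of the trace."""
    return all(p(s) for s in trace)

def until(trace, p, q):
    """p U q: q eventually holds, and p holds in every state before that."""
    for s in trace:
        if q(s):
            return True
        if not p(s):
            return False
    return False

# Hypothetical RPG example: the player must keep the amulet until the dragon is slain.
trace = [{"has_amulet": True, "dragon_slain": False},
         {"has_amulet": True, "dragon_slain": False},
         {"has_amulet": True, "dragon_slain": True}]
print(until(trace, lambda s: s["has_amulet"], lambda s: s["dragon_slain"]))  # True
```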

  12. Implementation of computer-based quality-of-life monitoring in brain tumor outpatients in routine clinical practice.

    Science.gov (United States)

    Erharter, Astrid; Giesinger, Johannes; Kemmler, Georg; Schauer-Maurer, Gabriele; Stockhammer, Guenter; Muigg, Armin; Hutterer, Markus; Rumpold, Gerhard; Sperner-Unterweger, Barbara; Holzner, Bernhard

    2010-02-01

    Computerized assessment of quality of life (QOL) in patients with brain tumors can be an essential part of quality assurance with regard to evidence-based medicine in neuro-oncology. The aim of this project was the implementation of a computer-based QOL monitoring tool in a neuro-oncology outpatient unit. A further aim was to derive reference values for QOL scores from the collected data to improve interpretability. Since August 2005, patients with brain tumors treated at the neuro-oncology outpatient unit of the Innsbruck Medical University were consecutively included in the study. QOL assessment (European Organisation for Research and Treatment of Cancer [EORTC] Quality of Life Questionnaire [QLQ-C30] plus the EORTC QLQ-brain cancer module [BN20]) was computer-based, using a software tool called the Computer-based Health Evaluation System. A total of 110 patients with primary brain tumors (49% female; mean [standard deviation] age 47.9 [12.6] years; main diagnoses: 30.9% astrocytoma, 17.3% oligodendroglioma, 17.3% glioblastoma, 13.6% meningioma) were included in the study. On average, QOL was assessed 4.74 times per patient, 521 times in total. The user-friendly software was successfully implemented and tested. The routine QOL assessment was found to be feasible and was well accepted by both physicians and patients. The software-generated graphic QOL profiles were found to be an important tool for screening patients for clinically relevant problems. Thus, computer-based QOL monitoring can contribute to an optimization of treatment (e.g., symptom management, psychosocial interventions) and facilitate data collection for research purposes.

  13. Implementation of Private Cloud Computing Using Integration of JavaScript and Python

    Directory of Open Access Journals (Sweden)

    2010-09-01

    Full Text Available

    This paper deals with the design and deployment of a novel library class in Python, enabling the use of JavaScript functionalities in application programming and the leveraging of this library in development for third-generation technologies such as private cloud computing. The integration of these two prevalent languages provides a new level of compliance which helps in developing an understanding between web programming and application programming. An inter-browser functionality wrapper has been developed, which enables users to have a JavaScript experience directly in Python interfaces, without having to depend on external programs. The value of this concept lies in the fact that applications written in JavaScript and accessed in the browser now have the capability of interacting with each other on a common platform with the help of a Python wrapper. The idea is demonstrated by integrating it with the now ubiquitous cloud computing concept. With the help of examples, we showcase this and explain how the library XOCOM can be a stepping stone to a flexible cloud computing environment.

  14. A Methodology for Decision Support for Implementation of Cloud Computing IT Services

    Directory of Open Access Journals (Sweden)

    Adela Tušanová

    2014-07-01

    Full Text Available The paper deals with the decision of small and medium-sized software companies to transition to the SaaS model. The goal of the research is to design a comprehensive methodology to support decision making based on actual data of the company itself. Based on a careful analysis, a taxonomy of costs, revenue streams and decision-making criteria is proposed in the paper. On the basis of multi-criteria decision-making methods, each alternative is evaluated and the alternative with the highest score is identified as the most appropriate. The proposed methodology is implemented as a web application and verified through case studies.
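
    The record names multi-criteria decision-making methods without specifying them; the sketch below shows a minimal weighted-sum scoring of alternatives, one of the simplest such methods and not necessarily the one used in the paper. All criteria, weights and scores are invented for illustration.

```python
# Minimal weighted-sum sketch of multi-criteria scoring of deployment alternatives.
# The criteria, weights and scores below are invented for illustration; the paper's
# actual taxonomy of costs, revenues and criteria is richer than this.

weights = {"migration_cost": 0.3, "recurring_revenue": 0.4, "time_to_market": 0.3}

# Scores normalised to [0, 1]; higher is better for every criterion.
alternatives = {
    "stay_on_premise": {"migration_cost": 1.0, "recurring_revenue": 0.2, "time_to_market": 0.5},
    "full_saas":       {"migration_cost": 0.3, "recurring_revenue": 0.9, "time_to_market": 0.6},
    "hybrid":          {"migration_cost": 0.6, "recurring_revenue": 0.6, "time_to_market": 0.8},
}

def weighted_score(scores):
    return sum(weights[c] * scores[c] for c in weights)

best = max(alternatives, key=lambda a: weighted_score(alternatives[a]))
for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores):.2f}")
print("Most appropriate alternative:", best)
```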

  15. Smart learning objects for smart education in computer science theory, methodology and robot-based implementation

    CERN Document Server

    Stuikys, Vytautas

    2015-01-01

    This monograph presents the challenges, vision and context to design smart learning objects (SLOs) through Computer Science (CS) education modelling and feature model transformations. It presents the latest research on the meta-programming-based generative learning objects (the latter with advanced features are treated as SLOs) and the use of educational robots in teaching CS topics. The introduced methodology includes the overall processes to develop SLO and smart educational environment (SEE) and integrates both into the real education setting to provide teaching in CS using constructivist a

  16. DESIGN AND IMPLEMENTATION OF REGIONAL MEDICAL INFORMATICS SYSTEM WITH USE OF CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Alexey A. Ponomarev

    2013-01-01

    Full Text Available The article deals with the situation in the healthcare information systems market in Russia and with the legislative preconditions for development in this sphere. The task of creating a regional information system is highlighted. On the basis of an analysis of approaches and foreign experience, a way of realizing a regional segment of the state system through a regional healthcare portal with the application of cloud computing is proposed. The developed module «Electronic Registry» is discussed as an example of practical realization.

  17. Design and Implementation of 3 Axis CNC Router for Computer Aided Manufacturing Courses

    Directory of Open Access Journals (Sweden)

    Aktan Mehmet Emin

    2016-01-01

    Full Text Available This paper covers the mechanical design of a 3-axis Computer Numerical Control (CNC) router with linear joints, the production of its electronic control interface cards and drivers, and the manufacturing of the complete CNC router system, which combines mechanics and electronics. An interface program has also been prepared to control the router via USB. The router was developed for educational purposes. In some vocational schools and universities, Computer Aided Manufacturing (CAM) courses are taught rather theoretically, which leads to ineffective and short-lived learning. Moreover, students at schools that do have access to such systems can face various dangerous accidents, because they are encountering these machines for the first time. For the first steps of CNC education, smaller and less dangerous systems are easier to use. A new-concept CNC machine and a user interface suitable for education have been completely designed and realized in this study. To test the hypothesis that the machine benefits education, a traditional teaching method enhanced with the designed machine was practiced on CAM course students for a semester. At the end of the semester, the students taught with the new method were 27.36 percent more successful, both in terms of verbal comprehension and exam grades.

  18. The implementation of the graphics of program EAGLE: A numerical grid generation code on NASA Langley SNS computer system

    Science.gov (United States)

    Houston, Johnny L.

    1989-01-01

    Program EAGLE (Eglin Arbitrary Geometry Implicit Euler) Numerical Grid Generation System is a composite (multi-block) algebraic or elliptic grid generation system designed to discretize the domain in and/or around any arbitrarily shaped three-dimensional region. This system combines a boundary-conforming surface generation scheme and includes plotting routines designed to take full advantage of the DISSPLA Graphics Package (Version 9.0). Program EAGLE is written to compile and execute efficiently on any Cray machine with or without solid state disk (SSD) devices. Also, the code uses namelist inputs, which are supported by all Cray machines using the FORTRAN compiler CFT77. The namelist inputs make it easier for the user to understand the inputs and operation of Program EAGLE. EAGLE's numerical grid generator is constructed in the following form: main program, EGG (executive routine); subroutine SURFAC (surface generation routine); subroutine GRID (grid generation routine); and subroutine GRDPLOT (grid plotting routines). The EAGLE code was modified for use on the NASA-LaRC SNS computer (Cray 2S) system. During the modification, a conversion program was developed for the output data of EAGLE's subroutine GRID to permit the data to be graphically displayed by IRIS workstations using Plot3D. The EAGLE code was also modified to make subroutine GRDPLOT operational (using the DI-3000 Graphics Software Packages) on the NASA-LaRC SNS Computer System, and it was determined how to graphically display the output data of subroutine GRID on any NASA-LaRC graphics terminal that has access to the SNS Computer System DI-3000 Graphics Software Packages. A Quick Reference User Guide was developed for the use of program EAGLE on the NASA-LaRC SNS Computer System. One or more application programs were illustrated using program EAGLE on the NASA-LaRC SNS Computer System, with emphasis on graphics illustrations.

  19. Research and implementation of PC data synchronous backup based on cloud computing

    Directory of Open Access Journals (Sweden)

    WU Yu

    2013-08-01

    Full Text Available In order to better ensure data security and data integrity, and to facilitate remote management, this paper designs and implements a system model for PC data synchronous backup from the perspective of the local database and personal data. It focuses on data backup and uses SQL Azure (a cloud database management system) and Visual Studio (a development platform tool). The system is released and deployed on the Windows Azure Platform with a unique web portal. Experimental tests show that, compared to other data backup methods in non-cloud environments, the system has certain advantages and research value in terms of mobility, interoperability and data management.

  20. A Soft Computing Approach to Crack Detection and Impact Source Identification with Field-Programmable Gate Array Implementation

    Directory of Open Access Journals (Sweden)

    Arati M. Dixit

    2013-01-01

    Full Text Available Real-time nondestructive testing (NDT) for crack detection and impact source identification (CDISI) has attracted researchers from diverse areas. This is apparent from the current work in the literature. CDISI has usually been performed by visual assessment of waveforms generated by a standard data acquisition system. In this paper we suggest an automation of CDISI for metal armor plates using a soft computing approach, developing a fuzzy inference system to effectively deal with this problem. It is also advantageous to develop a chip that can contribute towards real-time CDISI. The objective of this paper is to report on efforts to develop an automated CDISI procedure and to formulate a technique such that the proposed method can be easily implemented on a chip. The CDISI fuzzy inference system is developed using MATLAB’s fuzzy logic toolbox. A VLSI circuit for CDISI is developed on the basis of the fuzzy logic model using Verilog, a hardware description language (HDL). The Xilinx ISE WebPACK 9.1i is used for design, synthesis, implementation, and verification. The CDISI field-programmable gate array (FPGA) implementation is done using Xilinx’s Spartan 3 FPGA. SynaptiCAD’s Verilog simulators (VeriLogger PRO and ModelSim) are used as the software simulation and debug environment.
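
    As a toy illustration of the kind of fuzzy inference the record describes (the actual CDISI system was built in MATLAB's fuzzy logic toolbox and then in Verilog), the sketch below implements triangular membership functions, two invented rules with min-max inference, and centroid defuzzification in plain Python/NumPy.

```python
# Toy Mamdani-style fuzzy inference sketch (pure Python / NumPy). The membership
# functions and the two rules below are invented placeholders; the paper's CDISI
# rule base is more elaborate.
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def infer(signal_energy):
    """Map a normalised waveform energy in [0, 1] to a crack-severity score."""
    y = np.linspace(0.0, 1.0, 501)               # output universe: crack severity

    low = trimf(signal_energy, 0.0, 0.0, 0.5)    # input memberships
    high = trimf(signal_energy, 0.5, 1.0, 1.0)

    # Rule 1: IF energy is low  THEN severity is minor  (clip consequent at rule strength)
    # Rule 2: IF energy is high THEN severity is severe
    minor = np.minimum(low, trimf(y, 0.0, 0.0, 0.5))
    severe = np.minimum(high, trimf(y, 0.5, 1.0, 1.0))

    aggregated = np.maximum(minor, severe)        # max aggregation of rule outputs
    if aggregated.sum() == 0:
        return 0.0
    return float((y * aggregated).sum() / aggregated.sum())  # centroid defuzzification

print(infer(0.2), infer(0.8))   # low energy -> low severity, high energy -> high severity
```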

  1. Implementing a strand of a scalable fault-tolerant quantum computing fabric.

    Science.gov (United States)

    Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M

    2014-06-24

    With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.
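
    As a small numerical aside (not the superconducting-hardware protocol itself), the sketch below shows what the even/odd parity readout distinguishes: the expectation value of Z⊗Z is +1 for the even-parity Bell state of the two code qubits and -1 for the odd-parity one.

```python
# Numerical illustration (not the hardware protocol): the ZZ parity read out via
# the syndrome qubit is +1 for an even-parity Bell state and -1 for an odd-parity
# Bell state of the two code qubits.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
ZZ = np.kron(Z, Z)

even_bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
odd_bell = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)    # (|01> + |10>)/sqrt(2)

for name, psi in [("even", even_bell), ("odd", odd_bell)]:
    parity = np.real(np.vdot(psi, ZZ @ psi))                     # expectation value <ZZ>
    print(f"{name}-parity Bell state: <ZZ> = {parity:+.0f}")
```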

  2. Computational procedures for probing interactions in OLS and logistic regression: SPSS and SAS implementations.

    Science.gov (United States)

    Hayes, Andrew F; Matthes, Jörg

    2009-08-01

    Researchers often hypothesize moderated effects, in which the effect of an independent variable on an outcome variable depends on the value of a moderator variable. Such an effect reveals itself statistically as an interaction between the independent and moderator variables in a model of the outcome variable. When an interaction is found, it is important to probe the interaction, for theories and hypotheses often predict not just interaction but a specific pattern of effects of the focal independent variable as a function of the moderator. This article describes the familiar pick-a-point approach and the much less familiar Johnson-Neyman technique for probing interactions in linear models and introduces macros for SPSS and SAS to simplify the computations and facilitate the probing of interactions in ordinary least squares and logistic regression. A script version of the SPSS macro is also available for users who prefer a point-and-click user interface rather than command syntax.
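
    The article's macros target SPSS and SAS; a rough Python analogue of the pick-a-point approach is sketched below, fitting an OLS interaction model and computing the simple slope of the focal predictor (with its standard error from the coefficient covariance matrix) at chosen moderator values. The data are simulated for illustration.

```python
# Rough Python analogue of the pick-a-point approach for an OLS interaction
# (the article's macros are for SPSS/SAS). Data here are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                     # focal independent variable
m = rng.normal(size=n)                     # moderator
y = 0.5 * x + 0.3 * m + 0.4 * x * m + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x, m, x * m]))   # columns: const, x, m, x*m
fit = sm.OLS(y, X).fit()
b = fit.params
cov = fit.cov_params()

# Simple slope of y on x at a chosen moderator value m0: b_x + b_xm * m0,
# with variance Var(b_x) + m0^2 Var(b_xm) + 2 m0 Cov(b_x, b_xm).
for m0 in (m.mean() - m.std(), m.mean(), m.mean() + m.std()):
    slope = b[1] + b[3] * m0
    se = np.sqrt(cov[1, 1] + m0**2 * cov[3, 3] + 2 * m0 * cov[1, 3])
    print(f"m0 = {m0:+.2f}: simple slope = {slope:.3f} (SE {se:.3f})")
```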

  3. Computer implemented classification of vegetation using aircraft acquired multispectral scanner data

    Science.gov (United States)

    Cibula, W. G.

    1975-01-01

    The use of aircraft 24-channel multispectral scanner data in conjunction with computer processing techniques to obtain an automated classification of plant species associations was discussed. The classification of various plant species associations was related to information needed for specific applications. In addition, the necessity of selecting multiple training fields for a single class in situations where the study area consists of highly irregular terrain was detailed. A single class may be illuminated differently in different areas, resulting in the existence of multiple spectral signatures for that class. These different signatures arise because different qualities of radiation upwell to the detector from portions of the terrain that receive differing qualities of incident radiation. Techniques of training field selection were outlined, and a classification obtained for a natural area in Tishomingo State Park in northern Mississippi was presented.

  4. An Analysis of High School Math, Science, Social Studies, English, and Foreign Language Teachers' Implementation of One-to-One Computing and Their Pedagogical Practices

    Science.gov (United States)

    Inserra, Albert; Short, Thomas

    2013-01-01

    The purpose of this study was to compare high school Math, Science, Social Studies, English, and Foreign Language teachers' implementation of teaching practices in terms of their pedagogical dimensions in a one-to-one computing environment. A survey was developed to measure high school teachers' implementation of teaching practices associated with…

  5. The Identification, Implementation, and Evaluation of Critical User Interface Design Features of Computer-Assisted Instruction Programs in Mathematics for Students with Learning Disabilities

    Science.gov (United States)

    Seo, You-Jin; Woo, Honguk

    2010-01-01

    Critical user interface design features of computer-assisted instruction programs in mathematics for students with learning disabilities and corresponding implementation guidelines were identified in this study. Based on the identified features and guidelines, a multimedia computer-assisted instruction program, "Math Explorer", which delivers…

  6. Case-oriented computer-based-training in radiology: concept, implementation and evaluation

    Directory of Open Access Journals (Sweden)

    Helmberger Thomas

    2001-10-01

    Full Text Available Abstract Background Providing high-quality clinical cases is important for teaching radiology. We developed, implemented and evaluated a program for a university hospital to support this task. Methods The system was built with Intranet technology and connected to the Picture Archiving and Communications System (PACS). It contains cases for every user group from students to attendants and is structured according to the ACR code (American College of Radiology). Each department member was given an individual account with which to gather his or her teaching cases and put the completed cases into the common database. Results During 18 months, 583 cases containing 4136 images involving all radiological techniques were compiled and 350 cases were put into the common case repository. Workflow integration as well as individual interest influenced the personal effort to participate, but an increasing number of cases and minor modifications of the program improved user acceptance continuously. 101 students took part in an evaluation, which showed a high level of acceptance and a special interest in elaborate documentation. Conclusion Electronic access to reference cases for all department members anytime, anywhere is feasible. Critical success factors are workflow integration, reliability, efficient retrieval strategies and incentives for case authoring.

  7. Planning and Implementation of tool path computer controlled polishing optical surfaces

    Science.gov (United States)

    Yu, X. B.; Zhang, F. H.; Zhang, Y.; Lin, Y. Y.; Fu, P. Q.

    2010-10-01

    The application of computer-controlled 'small tool' polishing is a breakthrough in modern optical machining. The dwell time distribution calculated by the iterative convolution algorithm is normally expressed at the points of intersection on the grid; in practice, however, the polishing tool path is composed of multi-segment polylines. Therefore, the time the polishing tool spends moving along the polylines on the workpiece surface must be calculated before polishing. An algorithm to calculate the dwell time on each polyline of the tool path has been developed, so that deterministic material removal from the workpiece surface by the polishing tool can be achieved. A tool path algorithm based on fractal geometry has been developed, and the points of intersection on the grid are used directly as the endpoints of the polylines. Each polyline's dwell time is represented by the mean of the dwell times of its two endpoints. A set of surface error data is simulated with actual parameters for a spiral path and a fractal path with the same number of nodes. The comparison shows that most of the error results for the fractal path are better than those for the spiral path. An intersection algorithm has been developed to optimize the fractal path, so that the fractal path can be used efficiently to polish workpiece surfaces with various borders.
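
    Following the rule stated in the record (each polyline's dwell time is the mean of the dwell times at its two endpoints), the sketch below converts grid-node dwell times into per-segment dwell times and the corresponding feed rates. The node coordinates and dwell values are placeholders.

```python
# Tiny sketch of the per-segment dwell-time rule described above: the dwell time
# on a polyline segment is the mean of the dwell times at its two endpoints, and
# the tool feed rate follows from segment length / dwell time. Values are made up.
import numpy as np

path = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]])  # node coordinates (mm)
node_dwell = np.array([0.8, 1.2, 0.6, 1.0])                        # dwell time at each node (s)

seg_dwell = 0.5 * (node_dwell[:-1] + node_dwell[1:])               # mean of the two endpoints
seg_len = np.linalg.norm(np.diff(path, axis=0), axis=1)            # segment lengths (mm)
feed_rate = seg_len / seg_dwell                                    # mm/s along each segment

for i, (t, v) in enumerate(zip(seg_dwell, feed_rate)):
    print(f"segment {i}: dwell {t:.2f} s, feed rate {v:.2f} mm/s")
```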

  8. Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm

    Science.gov (United States)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the Storage Tank to the External Tank is one of the very important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the very important parameters useful for design purposes are the prediction of pre-chill time, loading time, amount of fuel lost, the maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is a phase change as some of the fuel changes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is very tedious and time-consuming too. Overall, this is a complex system, and the objective of the work is the students' involvement and work in the parametric study and optimization of the numerical modeling towards the design of such a system. The students first have to become familiar with and understand the physical process, the related mathematics and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally effective (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.

  9. AC-DCFS: a toolchain implementation to Automatically Compute Coulomb Failure Stress changes after relevant earthquakes.

    Science.gov (United States)

    Alvarez-Gómez, José A.; García-Mayordomo, Julián

    2017-04-01

    We present an automated, free-software-based toolchain to obtain Coulomb Failure Stress change maps on fault planes of interest following the occurrence of a relevant earthquake. The system uses as input the focal mechanism data of the event that occurred and an active fault database for the region. From the focal mechanism, the orientations of the possible rupture planes, the location of the event and the size of the earthquake are obtained. From the size of the earthquake, the dimensions of the rupture plane are obtained by means of an algorithm based on empirical relations. Using the active fault database of the area, the stress-receiving planes are obtained and a verisimilitude index is assigned to the source plane from the two nodal planes of the focal mechanism. The resulting product is a series of layers in a format compatible with any type of GIS (or a fully edited map in PDF format) showing the possible stress change maps on the different families of fault planes present in the epicentral zone. These types of products are generally presented in technical reports developed in the weeks following the event, or in scientific publications; however, they have proven useful for emergency management in the hours and days after a major event, since these stress changes are responsible for aftershocks, as well as for mid-term earthquake forecasting. The automation of the calculation allows its incorporation within the products generated by alert and surveillance agencies shortly after an earthquake occurs. It is now being implemented at the Spanish Geological Survey as one of the products that this agency will provide after the occurrence of relevant seismic series in Spain.
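
    The final quantity mapped by the toolchain is the standard Coulomb failure stress change on a receiver fault, ΔCFS = Δτ + μ′Δσn (with the normal stress change taken positive for unclamping). The sketch below shows only this last bookkeeping step, with an assumed effective friction coefficient; the toolchain's geometry and stress-resolution steps are not reproduced.

```python
# Minimal sketch of the Coulomb failure stress change on a receiver fault plane:
# dCFS = d_tau + mu_eff * d_sigma_n, with the normal stress change positive for
# unclamping. This is only the final bookkeeping step; the AC-DCFS toolchain also
# derives rupture geometry and resolves the stress tensor on the receiver planes.

def coulomb_failure_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Return dCFS in MPa. d_shear_mpa is the shear stress change in the slip
    direction of the receiver fault; d_normal_mpa is positive for unclamping.
    The effective friction coefficient mu_eff = 0.4 is an assumed common value."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# Example: 0.05 MPa of shear loading and 0.02 MPa of clamping on the receiver.
print(coulomb_failure_stress_change(0.05, -0.02))   # -> 0.042 MPa (closer to failure)
```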

  10. Time expenditure in computer aided time studies implemented for highly mechanized forest equipment

    Directory of Open Access Journals (Sweden)

    Elena Camelia Mușat

    2016-06-01

    Full Text Available Time studies represent important tools that are used in forest operations research to produce empirical models or to comparatively assess the performance of two or more operational alternatives, with the general aim of predicting the performance of operational behavior, choosing the most adequate equipment or eliminating useless time. There is a long tradition of collecting the needed data in a traditional fashion, but this approach has its limitations, and it is likely that the use of professional software in such work will be extended in the future, as this kind of tool has already been implemented. However, little to no information is available concerning the performance of data analysis tasks when using purpose-built professional time-study software in such research, while the resources needed to conduct time studies, including time, may be quite substantial. Our study aimed to model the relations between the variation of the time needed to analyze video-recorded time study data and the variation of some measured independent variables for a complex organization of a work cycle. The results of our study indicate that the number of work elements that were separated within a work cycle, as well as the delay-free cycle time and the software functionalities that were used during data analysis, significantly affected the time expenditure needed to analyze the data (α=0.01, p<0.01). Under the conditions of this study, where the average duration of a work cycle was about 48 seconds and the number of separated work elements was about 14, the speed used to replay the video files significantly affected the mean time expenditure, which averaged about 273 seconds at half of the real speed and about 192 seconds at an analyzing speed equal to the real speed. We argue that different study designs as well as the parameters used within the software are likely to produce

  11. Radiation dose reduction in computed tomography (CT) using a new implementation of wavelet denoising in low tube current acquisitions

    Science.gov (United States)

    Tao, Yinghua; Brunner, Stephen; Tang, Jie; Speidel, Michael; Rowley, Howard; VanLysel, Michael; Chen, Guang-Hong

    2011-03-01

    Radiation dose reduction remains at the forefront of research in computed tomography. X-ray tube parameters such as tube current can be lowered to reduce dose; however, images become prohibitively noisy when the tube current is too low. Wavelet denoising is one of many noise reduction techniques. However, traditional wavelet techniques have the tendency to create an artificial noise texture, due to the nonuniform denoising across the image, which is undesirable from a diagnostic perspective. This work presents a new implementation of wavelet denoising that is able to achieve noise reduction, while still preserving spatial resolution. Further, the proposed method has the potential to improve those unnatural noise textures. The technique was tested on both phantom and animal datasets (Catphan phantom and time-resolved swine heart scan) acquired on a GE Discovery VCT scanner. A number of tube currents were used to investigate the potential for dose reduction.
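
    For context, the sketch below shows conventional 2-D wavelet soft-thresholding with PyWavelets, i.e. the kind of baseline whose nonuniform noise texture the record's new implementation aims to improve; it is not the proposed method, and the wavelet, level and threshold are arbitrary choices.

```python
# Generic 2-D wavelet soft-thresholding sketch with PyWavelets. This illustrates
# conventional wavelet denoising, i.e. the baseline the paper improves on; it is
# not the proposed spatially adaptive method. Parameters are arbitrary.
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=3, threshold=20.0):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Keep the coarse approximation, soft-threshold every detail sub-band.
    denoised = [coeffs[0]]
    for detail_level in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, threshold, mode="soft") for d in detail_level))
    return pywt.waverec2(denoised, wavelet)

noisy_slice = np.random.default_rng(0).normal(0.0, 30.0, (256, 256))  # stand-in for a low-mA image
clean = wavelet_denoise(noisy_slice)
print(noisy_slice.std(), clean.std())   # the noise standard deviation drops after thresholding
```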

  12. Methodology of problem-based learning engineering and technology and of its implementation with modern computer resources

    Science.gov (United States)

    Lebedev, A. A.; Ivanova, E. G.; Komleva, V. A.; Klokov, N. M.; Komlev, A. A.

    2017-01-01

    The considered method of learning the basics of microelectronic amplifier circuits and systems enables one to understand electrical processes more deeply, to understand the relationship between static and dynamic characteristics and, finally, to bring the learning process closer to a cognitive process. The scheme of problem-based learning can be represented by the following sequence of procedures: the contradiction is perceived and revealed; cognitive motivation is provided by creating a problem situation (a mental state of the student) that drives the desire to solve the problem and to raise the question "why?"; a hypothesis is made; searches for solutions are carried out; and the answer is sought. Due to the complexity of the circuit architectures, modern methods of computer analysis and synthesis are considered in the work. Examples are given of analog circuits with improved performance engineered by students, in the framework of student scientific research work, using standard software and software developed at the Department of Microelectronics of MEPhI.

  13. Clinical Implementation of Intrafraction Cone Beam Computed Tomography Imaging During Lung Tumor Stereotactic Ablative Radiation Therapy

    Science.gov (United States)

    Li, Ruijiang; Han, Bin; Meng, Bowen; Maxim, Peter G.; Xing, Lei; Koong, Albert C.; Diehn, Maximilian; Loo, Billy W.

    2013-01-01

    Purpose To develop and clinically evaluate a volumetric imaging technique for assessing intrafraction geometric and dosimetric accuracy of stereotactic ablative radiation therapy (SABR). Methods and Materials Twenty patients received SABR for lung tumors using volumetric modulated arc therapy (VMAT). At the beginning of each fraction, pretreatment cone beam computed tomography (CBCT) was used to align the soft-tissue tumor position with that in the planning CT. Concurrent with dose delivery, we acquired fluoroscopic radiograph projections during VMAT using the Varian on-board imaging system. Those kilovolt projections acquired during megavolt beam-on were automatically extracted, and intrafraction CBCT images were reconstructed using the filtered backprojection technique. We determined the time-averaged target shift during VMAT by calculating the center of mass of the tumor target in the intrafraction CBCT relative to the planning CT. To estimate the dosimetric impact of the target shift during treatment, we recalculated the dose to the GTV after shifting the entire patient anatomy according to the time-averaged target shift determined earlier. Results The mean target shift from intrafraction CBCT to planning CT was 1.6, 1.0, and 1.5 mm; the 95th percentile shift was 5.2, 3.1, 3.6 mm; and the maximum shift was 5.7, 3.6, and 4.9 mm along the anterior-posterior, left-right, and superior-inferior directions. Thus, the time-averaged intrafraction gross tumor volume (GTV) position was always within the planning target volume. We observed some degree of target blurring in the intrafraction CBCT, indicating imperfect breath-hold reproducibility or residual motion of the GTV during treatment. By our estimated dose recalculation, the GTV was consistently covered by the prescription dose (PD), that is, V100% above 0.97 for all patients, and minimum dose to GTV >100% PD for 18 patients and >95% PD for all patients. Conclusions Intrafraction CBCT during VMAT can provide
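
    As a minimal illustration of the shift bookkeeping described above (centre of mass of the tumour in the intrafraction CBCT relative to the planning CT, converted to millimetres), the sketch below uses scipy.ndimage on placeholder binary masks; real use assumes registered, segmented volumes and the scanner's actual voxel spacing.

```python
# Minimal sketch of the time-averaged target-shift bookkeeping: the shift is the
# centre-of-mass difference of the tumour between the intrafraction CBCT and the
# planning CT, converted from voxels to millimetres. Masks and spacing below are
# placeholders; real use requires registered, segmented volumes.
import numpy as np
from scipy.ndimage import center_of_mass

voxel_spacing_mm = np.array([2.5, 1.0, 1.0])         # (slice, row, column) spacing, assumed

planning_mask = np.zeros((40, 64, 64), dtype=bool)    # GTV mask on the planning CT
planning_mask[18:22, 30:36, 30:36] = True

intrafraction_mask = np.zeros_like(planning_mask)     # GTV mask on the intrafraction CBCT
intrafraction_mask[19:23, 31:37, 29:35] = True

shift_vox = np.array(center_of_mass(intrafraction_mask)) - np.array(center_of_mass(planning_mask))
shift_mm = shift_vox * voxel_spacing_mm
print("time-averaged target shift (mm):", np.round(shift_mm, 2))
```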

  14. What is needed to implement a computer-assisted health risk assessment tool? An exploratory concept mapping study

    Science.gov (United States)

    2012-01-01

    Background Emerging eHealth tools could facilitate the delivery of comprehensive care in time-constrained clinical settings. One such tool is interactive computer-assisted health-risk assessments (HRA), which may improve provider-patient communication at the point of care, particularly for psychosocial health concerns, which remain under-detected in clinical encounters. The research team explored the perspectives of healthcare providers representing a variety of disciplines (physicians, nurses, social workers, allied staff) regarding the factors required for implementation of an interactive HRA on psychosocial health. Methods The research team employed a semi-qualitative participatory method known as Concept Mapping, which involved three distinct phases. First, in face-to-face and online brainstorming sessions, participants responded to an open-ended central question: “What factors should be in place within your clinical setting to support an effective computer-assisted screening tool for psychosocial risks?” The brainstormed items were consolidated by the research team. Then, in face-to-face and online sorting sessions, participants grouped the items thematically as ‘it made sense to them’. Participants also rated each item on a 5-point scale for its ‘importance’ and ‘action feasibility’ over the ensuing six month period. The sorted and rated data was analyzed using multidimensional scaling and hierarchical cluster analyses which produced visual maps. In the third and final phase, the face-to-face Interpretation sessions, the concept maps were discussed and illuminated by participants collectively. Results Overall, 54 providers participated (emergency care 48%; primary care 52%). Participants brainstormed 196 items thought to be necessary for the implementation of an interactive HRA emphasizing psychosocial health. These were consolidated by the research team into 85 items. After sorting and rating, cluster analysis revealed a concept map with a

  15. What is needed to implement a computer-assisted health risk assessment tool? An exploratory concept mapping study.

    Science.gov (United States)

    Ahmad, Farah; Norman, Cameron; O'Campo, Patricia

    2012-12-19

    Emerging eHealth tools could facilitate the delivery of comprehensive care in time-constrained clinical settings. One such tool is interactive computer-assisted health-risk assessments (HRA), which may improve provider-patient communication at the point of care, particularly for psychosocial health concerns, which remain under-detected in clinical encounters. The research team explored the perspectives of healthcare providers representing a variety of disciplines (physicians, nurses, social workers, allied staff) regarding the factors required for implementation of an interactive HRA on psychosocial health. The research team employed a semi-qualitative participatory method known as Concept Mapping, which involved three distinct phases. First, in face-to-face and online brainstorming sessions, participants responded to an open-ended central question: "What factors should be in place within your clinical setting to support an effective computer-assisted screening tool for psychosocial risks?" The brainstormed items were consolidated by the research team. Then, in face-to-face and online sorting sessions, participants grouped the items thematically as 'it made sense to them'. Participants also rated each item on a 5-point scale for its 'importance' and 'action feasibility' over the ensuing six month period. The sorted and rated data was analyzed using multidimensional scaling and hierarchical cluster analyses which produced visual maps. In the third and final phase, the face-to-face Interpretation sessions, the concept maps were discussed and illuminated by participants collectively. Overall, 54 providers participated (emergency care 48%; primary care 52%). Participants brainstormed 196 items thought to be necessary for the implementation of an interactive HRA emphasizing psychosocial health. These were consolidated by the research team into 85 items. After sorting and rating, cluster analysis revealed a concept map with a seven-cluster solution: 1) the HRA

  16. What is needed to implement a computer-assisted health risk assessment tool? An exploratory concept mapping study

    Directory of Open Access Journals (Sweden)

    Ahmad Farah

    2012-12-01

    Full Text Available Abstract Background Emerging eHealth tools could facilitate the delivery of comprehensive care in time-constrained clinical settings. One such tool is interactive computer-assisted health-risk assessments (HRA, which may improve provider-patient communication at the point of care, particularly for psychosocial health concerns, which remain under-detected in clinical encounters. The research team explored the perspectives of healthcare providers representing a variety of disciplines (physicians, nurses, social workers, allied staff regarding the factors required for implementation of an interactive HRA on psychosocial health. Methods The research team employed a semi-qualitative participatory method known as Concept Mapping, which involved three distinct phases. First, in face-to-face and online brainstorming sessions, participants responded to an open-ended central question: “What factors should be in place within your clinical setting to support an effective computer-assisted screening tool for psychosocial risks?” The brainstormed items were consolidated by the research team. Then, in face-to-face and online sorting sessions, participants grouped the items thematically as ‘it made sense to them’. Participants also rated each item on a 5-point scale for its ‘importance’ and ‘action feasibility’ over the ensuing six month period. The sorted and rated data was analyzed using multidimensional scaling and hierarchical cluster analyses which produced visual maps. In the third and final phase, the face-to-face Interpretation sessions, the concept maps were discussed and illuminated by participants collectively. Results Overall, 54 providers participated (emergency care 48%; primary care 52%. Participants brainstormed 196 items thought to be necessary for the implementation of an interactive HRA emphasizing psychosocial health. These were consolidated by the research team into 85 items. After sorting and rating, cluster analysis

  17. GPCALMA: implementation in Italian hospitals of a computer aided detection system for breast lesions by mammography examination.

    Science.gov (United States)

    Lauria, Adele

    2009-06-01

    We describe the implementation in several Italian hospitals of a computer aided detection (CAD) system, named GPCALMA (grid platform for a computer aided library in mammography), for the automatic search of lesions in X-ray mammographies. GPCALMA has been under development since 1999 by a community of physicists of the Italian National Institute for Nuclear Physics (INFN) in collaboration with radiologists. This CAD system was tested as a support to radiologists in reading mammographies. The main system components are: (i) the algorithms implemented for the analysis of digitized mammograms to recognize suspicious lesions, (ii) the database of digitized mammographic images, and (iii) the PC-based digitization and analysis workstation and its user interface. The distributed nature of data and resources and the prevalence of geographically remote users suggested the development of the system as a grid application: the design of this networked version is also reported. The paper describes the system architecture, the database of digitized mammographies, the clinical workstation and the medical applications carried out to characterize the system. A commercial CAD was evaluated in a comparison with GPCALMA by analysing the medical reports obtained with and without the two different CADs on the same dataset of images: with both CADs a statistically significant increase in sensitivity was obtained. The sensitivity in the detection of lesions obtained for microcalcifications and masses was 96% and 80%, respectively. An analysis in terms of the receiver operating characteristic (ROC) curve was performed for massive lesion searches, achieving an area under the ROC curve of Az = 0.783 ± 0.008. Results show that the GPCALMA CAD is ready to be used in radiological practice, both for screening mammography and clinical studies. GPCALMA is a starting point for the development of other medical imaging applications such as the CAD for the search of pulmonary nodules, currently under

  18. Towards high performance computing for molecular structure prediction using IBM Cell Broadband Engine--an implementation perspective.

    Science.gov (United States)

    Krishnan, S P T; Liang, Sim Sze; Veeravalli, Bharadwaj

    2010-01-18

    RNA structure prediction problem is a computationally complex task, especially with pseudo-knots. The problem is well-studied in existing literature and predominantly uses highly coupled Dynamic Programming (DP) solutions. The problem scale and complexity become embarrassingly humungous to handle as sequence size increases. This makes the case for parallelization. Parallelization can be achieved by way of networked platforms (clusters, grids, etc) as well as using modern day multi-core chips. In this paper, we exploit the parallelism capabilities of the IBM Cell Broadband Engine to parallelize an existing Dynamic Programming (DP) algorithm for RNA secondary structure prediction. We design three different implementation strategies that exploit the inherent data, code and/or hybrid parallelism, referred to as C-Par, D-Par and H-Par, and analyze their performances. Our approach attempts to introduce parallelism in critical sections of the algorithm. We ran our experiments on SONY Play Station 3 (PS3), which is based on the IBM Cell chip. Our results suggest that introducing parallelism in DP algorithm allows it to easily handle longer sequences which otherwise would consume a large amount of time in single core computers. The results further demonstrate the speed-up gain achieved in exploiting the inherent parallelism in the problem and also elicits the advantages of using multi-core platforms towards designing more sophisticated methodologies for handling a fairly long sequence of RNA. The speed-up performance reported here is promising, especially when sequence length is long. To the best of our literature survey, the work reported in this paper is probably the first-of-its-kind to utilize the IBM Cell Broadband Engine (a heterogeneous multi-core chip) to implement a DP. The results also encourage using multi-core platforms towards designing more sophisticated methodologies for handling a fairly long sequence of RNA to predict its secondary structure.
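
    As a compact stand-in for the kind of tightly coupled dynamic program the record parallelizes, the sketch below implements the classic Nussinov base-pair maximization recurrence; it ignores pseudoknots and says nothing about the Cell-specific data/code partitioning (C-Par, D-Par, H-Par) evaluated in the paper.

```python
# Compact stand-in for the kind of coupled dynamic program being parallelised:
# the classic Nussinov base-pair maximisation (no pseudoknots), not the paper's
# actual pseudoknot-capable algorithm or its Cell/SPE partitioning.

def nussinov(seq, min_loop=3):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]               # dp[i][j]: max base pairs in seq[i..j]
    for span in range(min_loop + 1, n):            # fill by increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                    # case 1: j left unpaired
            for k in range(i, j - min_loop):       # case 2: j paired with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # maximum number of base pairs for a small example sequence
```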

  19. Computational methods and implementation of the 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J.M.; Ahnert, C. [Universidad Politecnica de Madrid (Spain)

    1995-12-31

    New computational methods have been developed in our 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction. They improve the accuracy and efficiency of the coupled neutronic-thermalhydraulic solution and extend its scope to provide, mainly, the calculation of: the fission reaction rates at the incore mini-detectors; the responses at the excore detectors (power range); the temperatures at the thermocouple locations; and the in-vessel distribution of the loop cold-leg inlet coolant conditions in the reflector and core channels, and to the hot-leg outlets per loop. The functional capabilities implemented in the extended SIMTRAN code for online utilization include: online surveillance, incore-excore calibration, evaluation of peak power factors and thermal margins, nominal update and cycle follow, prediction of maneuvers and diagnosis of fast transients and oscillations. The new code has been installed at the Vandellos-II PWR unit in Spain, since the startup of its cycle 7 in mid-June, 1994. The computational implementation has been performed on HP-700 workstations under the HP-UX Unix system, including the machine-man interfaces for online acquisition of measured data and interactive graphical utilization, in C and X11. The agreement of the simulated results with the measured data, during the startup tests and first months of actual operation, is well within the accuracy requirements. The performance and usefulness shown during the testing and demo phase, to be extended along this cycle, has proved that SIMTRAN and the man-machine graphic user interface have the qualities for a fast, accurate, user friendly, reliable, detailed and comprehensive online core surveillance and prediction.

  20. GPUDePiCt: A Parallel Implementation of a Clustering Algorithm for Computing Degenerate Primers on Graphics Processing Units.

    Science.gov (United States)

    Cickovski, Trevor; Flor, Tiffany; Irving-Sachs, Galen; Novikov, Philip; Parda, James; Narasimhan, Giri

    2015-01-01

    In order to make multiple copies of a target sequence in the laboratory, the technique of Polymerase Chain Reaction (PCR) requires the design of "primers", which are short fragments of nucleotides complementary to the flanking regions of the target sequence. If the same primer is to amplify multiple closely related target sequences, then it is necessary to make the primers "degenerate", which would allow it to hybridize to target sequences with a limited amount of variability that may have been caused by mutations. However, the PCR technique can only allow a limited amount of degeneracy, and therefore the design of degenerate primers requires the identification of reasonably well-conserved regions in the input sequences. We take an existing algorithm for designing degenerate primers that is based on clustering and parallelize it in a web-accessible software package GPUDePiCt, using a shared memory model and the computing power of Graphics Processing Units (GPUs). We test our implementation on large sets of aligned sequences from the human genome and show a multi-fold speedup for clustering using our hybrid GPU/CPU implementation over a pure CPU approach for these sequences, which consist of more than 7,500 nucleotides. We also demonstrate that this speedup is consistent over larger numbers and longer lengths of aligned sequences.
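
    The PCR constraint the record refers to is on primer degeneracy, which is simply the product over positions of the number of bases each IUPAC symbol represents; the helper below computes that quantity. It is bookkeeping only, not GPUDePiCt's GPU clustering.

```python
# Small helper for the quantity the PCR constraint limits: the degeneracy of a
# primer is the product, over positions, of how many bases each IUPAC symbol
# represents. This is not GPUDePiCt's GPU clustering code, just the bookkeeping.
from math import prod

IUPAC_SIZE = {
    "A": 1, "C": 1, "G": 1, "T": 1,
    "R": 2, "Y": 2, "S": 2, "W": 2, "K": 2, "M": 2,
    "B": 3, "D": 3, "H": 3, "V": 3,
    "N": 4,
}

def degeneracy(primer):
    return prod(IUPAC_SIZE[base] for base in primer.upper())

print(degeneracy("ACGT"))      # 1  (fully specified primer)
print(degeneracy("ACGYNNT"))   # 2 * 4 * 4 = 32 distinct primer sequences
```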

  1. Design and Implementation of Fuzzy Logic Controller for Online Computer Controlled Steering System for Navigation of a Teleoperated Agricultural Vehicle

    Directory of Open Access Journals (Sweden)

    Prema Kannan

    2013-01-01

    Full Text Available This paper describes the design, modeling, simulation, control, and implementation of a teleoperated agricultural vehicle using an intelligent technique. This vehicle can be used for ploughing, sowing, and soil moisture sensing. An online computer-controlled steering system for a vehicle utilizing two independent drive wheels can be used to avoid obstacles and to improve the ability to resist external side forces. To control the steer angles of the nondriven wheels, the mathematical relationships between the drive wheel speeds and the steer angles of the nondriven wheels are used. A fuzzy logic controller is designed to change the drive wheel speeds and to achieve the desired steer angles. Online control of the agricultural vehicle is achieved from a remote place by means of the Web Publishing Tool in LabVIEW. IR sensors on the vehicle are used to detect and avoid nearby obstacles. The developed steering angle control algorithm and fuzzy logic controller have been implemented in an agricultural vehicle, showing that the vehicle performs its operation efficiently and reduces the manpower required.
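
    The record states that the steer angles of the non-driven wheels are derived from the two drive wheel speeds; the sketch below gives one common kinematic reading of that relationship (shared instantaneous centre of rotation plus a bicycle-model steer angle), with invented track width and wheelbase. It is not the paper's exact formulation or its fuzzy controller.

```python
# Minimal kinematic sketch (not the paper's exact relations): if the two drive
# wheels share the vehicle's instantaneous centre of rotation, their speeds give
# a turning radius, from which a steer angle for the non-driven axle follows
# (bicycle-model approximation). Track width and wheelbase values are invented.
import math

TRACK_WIDTH = 0.8   # distance between the drive wheels (m), assumed
WHEELBASE = 1.2     # distance between driven and non-driven axles (m), assumed

def steer_angle_deg(v_left, v_right):
    """Steer angle of the non-driven axle for given drive wheel speeds (m/s)."""
    if math.isclose(v_left, v_right):
        return 0.0                                   # straight ahead, no steering
    radius = (TRACK_WIDTH / 2.0) * (v_right + v_left) / (v_right - v_left)
    return math.degrees(math.atan(WHEELBASE / radius))

print(steer_angle_deg(1.0, 1.0))    # 0.0  -> straight line
print(steer_angle_deg(0.8, 1.2))    # positive angle -> turning towards the slower (left) wheel
```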

  2. Computer-implemented land use classification with pattern recognition software and ERTS digital data. [Mississippi coastal plains

    Science.gov (United States)

    Joyce, A. T.

    1974-01-01

    Significant progress has been made in the classification of surface conditions (land uses) with computer-implemented techniques based on the use of ERTS digital data and pattern recognition software. The supervised technique presently used at the NASA Earth Resources Laboratory is based on maximum likelihood ratioing with a digital table look-up approach to classification. After classification, colors are assigned to the various surface conditions (land uses) classified, and the color-coded classification is film recorded on either positive or negative 9 1/2 in. film at the scale desired. Prints of the film strips are then mosaicked and photographed to produce a land use map in the format desired. Computer extraction of statistical information is performed to show the extent of each surface condition (land use) within any given land unit that can be identified in the image. Evaluations of the product indicate that classification accuracy is well within the limits for use by land resource managers and administrators. Classifications performed with digital data acquired during different seasons indicate that the combination of two or more classifications offers even better accuracy.
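    For illustration only, a per-pixel Gaussian maximum-likelihood classifier, the general technique named above, can be sketched as follows in Python with NumPy; the table look-up variant used at the NASA Earth Resources Laboratory and real ERTS band statistics are not reproduced here, and the class names and synthetic training data are assumptions for the example.

      # Illustrative per-pixel Gaussian maximum-likelihood classification.
      import numpy as np

      def train_class_stats(samples):
          """samples: dict class_name -> (n_pixels, n_bands) array of training pixels."""
          stats = {}
          for name, x in samples.items():
              mean = x.mean(axis=0)
              cov = np.cov(x, rowvar=False)
              stats[name] = (mean, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
          return stats

      def classify_pixel(pixel, stats):
          """Assign the class with the largest Gaussian log-likelihood."""
          best, best_ll = None, -np.inf
          for name, (mean, inv_cov, log_det) in stats.items():
              d = pixel - mean
              ll = -0.5 * (log_det + d @ inv_cov @ d)  # constant terms omitted
              if ll > best_ll:
                  best, best_ll = name, ll
          return best

      # Example with synthetic 4-band training pixels for two cover classes.
      rng = np.random.default_rng(0)
      samples = {
          "water":  rng.normal([20, 15, 10, 5], 2.0, size=(200, 4)),
          "forest": rng.normal([30, 40, 35, 60], 3.0, size=(200, 4)),
      }
      stats = train_class_stats(samples)
      print(classify_pixel(np.array([29.0, 41.0, 34.0, 58.0]), stats))  # -> "forest"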

  3. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 2: Computational implementation and first results

    Science.gov (United States)

    Peruzza, Laura; Azzaro, Raffaele; Gee, Robin; D'Amico, Salvatore; Langer, Horst; Lombardo, Giuseppe; Pace, Bruno; Pagani, Marco; Panzera, Francesco; Ordaz, Mario; Suarez, Miguel Leonardo; Tusa, Giuseppina

    2017-11-01

    This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude-scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10 % probability of exceedance in 5 and 30 years, Poisson and time dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for the densely inhabited Etna's eastern flank, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources which control the hazard at long return periods. However, by focusing on the impact of M < 6 local volcano-tectonic earthquakes, which dominate the hazard at the short- to mid-term exposure times considered in this study, we present a different
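    As a point of orientation (not taken from the paper), the exceedance probabilities quoted above follow, in the Poissonian case, from the standard hazard-curve relation between the annual exceedance rate \(\lambda(a)\) of a ground-motion level \(a\) and the exposure time \(T\):

      \[
      P(A > a \ \text{in } T \ \text{years}) = 1 - e^{-\lambda(a)\,T},
      \qquad
      T_R = \frac{1}{\lambda(a)} = -\frac{T}{\ln(1 - P)} .
      \]

    For example, a 10 % probability of exceedance in 5 years corresponds to a return period of roughly 47 years, and in 30 years to roughly 285 years. The time-dependent results mentioned in the abstract replace the exponential with a renewal model and are not captured by this formula.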

  4. Implementation of a computer-aided detection tool for quantification of intracranial radiologic markers on brain CT images

    Science.gov (United States)

    Aghaei, Faranak; Ross, Stephen R.; Wang, Yunzhi; Wu, Dee H.; Cornwell, Benjamin O.; Ray, Bappaditya; Zheng, Bin

    2017-03-01

    Aneurysmal subarachnoid hemorrhage (aSAH) is a form of hemorrhagic stroke that affects middle-aged individuals and is associated with significant morbidity and/or mortality, especially in those presenting with higher clinical and radiologic grades at the time of admission. Previous studies suggested that the amount of blood extravasated after aneurysmal rupture is a potential prognostic factor, but all such studies used qualitative scales to predict prognosis. The purpose of this study is to develop and test a new interactive computer-aided detection (CAD) tool to detect, segment and quantify brain hemorrhage and ventricular cerebrospinal fluid on non-contrasted brain CT images. First, the CAD tool segments the brain skull using a multilayer region-growing algorithm with adaptively adjusted thresholds. Second, it assigns pixels inside the segmented brain region to one of three classes: normal brain tissue, blood, and fluid. Third, to avoid a "black-box" approach and to increase accuracy in quantifying these two image markers on CT images with large noise variation between cases, a graphical user interface (GUI) was implemented that allows users to visually examine segmentation results. If a user wishes to correct any errors (e.g., deleting clinically irrelevant blood or fluid regions, or filling in holes inside relevant blood or fluid regions), he or she can manually define the region and select a corresponding correction function; the CAD tool then automatically performs the correction and updates the computed data. The new CAD tool is now being used in clinical and research settings to estimate various quantitative radiological parameters/markers, to determine the radiological severity of aSAH at presentation, and to correlate the estimates with various homeostatic/metabolic derangements and predict clinical outcome.
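    As a simplified sketch of the first segmentation step (a generic threshold-based region grower on a 2D slice; the multilayer, adaptive-threshold algorithm of the CAD tool and its intensity windows are not reproduced, and the HU range below is hypothetical), in Python:

      # Illustrative threshold-based region growing on a 2D CT slice.
      from collections import deque
      import numpy as np

      def region_grow(image, seed, low, high):
          """Boolean mask of pixels connected to `seed` with values in [low, high]."""
          mask = np.zeros(image.shape, dtype=bool)
          queue = deque([seed])
          while queue:
              r, c = queue.popleft()
              if mask[r, c] or not (low <= image[r, c] <= high):
                  continue
              mask[r, c] = True
              for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and not mask[rr, cc]:
                      queue.append((rr, cc))
          return mask

      # Example: grow a "blood-like" region (hypothetical window 50-90 HU) from a seed.
      slice_hu = np.full((5, 5), 20.0)
      slice_hu[1:4, 1:4] = 70.0
      print(region_grow(slice_hu, seed=(2, 2), low=50, high=90).sum())  # -> 9

    The interactive correction described in the abstract would then operate on masks like this one, adding or removing connected regions that the user marks on the GUI.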

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  6. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    Science.gov (United States)

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems could make life easier for people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. Alongside ease of use, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input to the BCI system is the P300 potential. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system is evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server successfully enables internet-based wireless control of electrical home appliances through BCIs.
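    A minimal sketch of the serving side of such a system, assuming a hypothetical /control?device=...&state=... request issued by the BCI application (this is an illustration built on Python's standard library, not the authors' embedded firmware), could look like:

      # Illustrative HTTP endpoint mapping a BCI selection to an appliance command.
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.parse import urlparse, parse_qs

      APPLIANCES = {"lamp": "OFF", "tv": "OFF", "fan": "OFF"}  # in-memory state only

      class ControlHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              query = parse_qs(urlparse(self.path).query)
              device = query.get("device", [""])[0]
              state = query.get("state", [""])[0].upper()
              if device in APPLIANCES and state in ("ON", "OFF"):
                  APPLIANCES[device] = state  # a relay/GPIO driver would be called here
                  body = f"{device} -> {state}".encode()
                  self.send_response(200)
              else:
                  body = b"unknown device or state"
                  self.send_response(400)
              self.send_header("Content-Type", "text/plain")
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          # e.g. GET /control?device=lamp&state=ON sent by the BCI application
          HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()

    A deployed system would of course add authentication and encryption before exposing appliance control over the internet.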

  7. The Anyang Esophageal Cancer Cohort Study: study design, implementation of fieldwork, and use of computer-aided survey system.

    Directory of Open Access Journals (Sweden)

    Fangfang Liu

    BACKGROUND: Human papillomavirus (HPV) has been observed repeatedly in esophageal squamous cell carcinoma (ESCC) tissues. However, the causal relationship between HPV infection and the onset of ESCC remains unknown. A large cohort study focusing on this topic is being carried out in rural Anyang, China. METHODOLOGY/PRINCIPAL FINDINGS: The Anyang Esophageal Cancer Cohort Study (AECCS) is a population-based prospective endoscopic cohort study designed to investigate the association of HPV infection and ESCC. This paper provides information regarding the design and implementation of this study. In particular we describe the recruitment strategies and quality control procedures which have been put into place, and the custom-designed computer-aided survey system (CASS) used for this project. This system integrates barcode technology and unique identification numbers, and has been developed to facilitate real-time data management throughout the workflow using a wireless local area network. A total of 8,112 (75.3%) of invited subjects participated in the baseline endoscopic examination; of those invited two years later to take part in the first cycle of follow-up, 91.9% have complied. CONCLUSIONS/SIGNIFICANCE: The AECCS study has high potential for evaluating the causal relationship between HPV infection and the occurrence of ESCC. The experience in setting up the AECCS may be beneficial for others planning to initiate similar epidemiological studies in developing countries.

  8. Towards the blackbox computation of magnetic exchange coupling parameters in polynuclear transition-metal complexes: theory, implementation, and application.

    Science.gov (United States)

    Phillips, Jordan J; Peralta, Juan E

    2013-05-07

    We present a method for calculating magnetic coupling parameters from a single spin-configuration via analytic derivatives of the electronic energy with respect to the local spin direction. This method does not introduce new approximations beyond those found in the Heisenberg-Dirac Hamiltonian and a standard Kohn-Sham Density Functional Theory calculation, and in the limit of an ideal Heisenberg system it reproduces the coupling as determined from spin-projected energy-differences. Our method employs a generalized perturbative approach to constrained density functional theory, where exact expressions for the energy to second order in the constraints are obtained by analytic derivatives from coupled-perturbed theory. When the relative angle between magnetization vectors of metal atoms enters as a constraint, this allows us to calculate all the magnetic exchange couplings of a system from derivatives with respect to local spin directions from the high-spin configuration. Because of the favorable computational scaling of our method with respect to the number of spin-centers, as compared to the broken-symmetry energy-differences approach, this opens the possibility for the blackbox exploration of magnetic properties in large polynuclear transition-metal complexes. In this work we outline the motivation, theory, and implementation of this method, and present results for several model systems and transition-metal complexes with a variety of density functional approximations and Hartree-Fock.
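    For orientation (a schematic, not the paper's working equations, and with sign conventions that vary in the literature), the couplings enter through the Heisenberg-Dirac Hamiltonian, and the single-configuration approach described above amounts to reading a coupling off the curvature of the energy with respect to the relative angle between two local magnetization directions:

      \[
      \hat{H} = -2 \sum_{i<j} J_{ij}\, \hat{S}_i \cdot \hat{S}_j ,
      \qquad
      E(\theta) \simeq E_0 - 2 J_{12} S_1 S_2 \cos\theta
      \;\Rightarrow\;
      J_{12} \simeq \frac{1}{2 S_1 S_2} \left. \frac{\partial^2 E}{\partial \theta^2} \right|_{\theta = 0} ,
      \]

    where θ = 0 is the high-spin configuration. The paper obtains the required second derivatives analytically via coupled-perturbed constrained DFT rather than by finite differences, which is what makes all couplings accessible from a single spin configuration.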

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  11. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  14. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are completed and smaller samples for upgrades and preparations ramp up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   Tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  3. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  4. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  6. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  7. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  8. Can Teachers in Primary Education Implement a Metacognitive Computer Programme for Word Problem Solving in Their Mathematics Classes?

    Science.gov (United States)

    de Kock, Willem D.; Harskamp, Egbert G.

    2014-01-01

    Teachers in primary education experience difficulties in teaching word problem solving in their mathematics classes. However, during controlled experiments with a metacognitive computer programme, students' problem-solving skills improved. Also without the supervision of researchers, metacognitive computer programmes can be beneficial in a natural…

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS installation and its components are now deployed at CERN, adding to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  10. USING OF THE CLIENT-SIDE COMPUTATIONS WHEN IMPLEMENTING WEB SERVICES FOR SPATIAL DATA PROCESSING, THE CASE STUDY OF JAVA WEB START TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    E. A. Panidi

    2015-01-01

    One of the key areas of Web geotechnology development is the implementation of software tools and systems that are capable not only of displaying geospatial data in a Web interface, but also of providing functionality for processing and analysis directly in the browser window. A significant feature of current Web-based geospatial standards is their focus on server-side data processing. Our study investigates the possibilities and general approaches for implementing decentralized data processing on the client side, using the Java Web Start technology as a case study. Test software tools were developed that implement the capability of transmitting executable program code to the client computer through the Web interface and processing spatial data on the client side.

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  12. Design, Implementation and Optimization of Innovative Internet Access Networks, based on Fog Computing and Software Defined Networking

    OpenAIRE

    Iotti, Nicola

    2017-01-01

    1. DESIGN In this dissertation we introduce a new approach to Internet access networks in public spaces, such as Wi-Fi network commonly known as Hotspot, based on Fog Computing (or Edge Computing), Software Defined Networking (SDN) and the deployment of Virtual Machines (VM) and Linux containers, on the edge of the network. In this vision we deploy specialized network elements, called Fog Nodes, on the edge of the network, able to virtualize the physical infrastructure and expose APIs to e...

  13. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). Then, we designed an imaging-point parallel strategy to achieve optimal parallel computing performance. Afterward, we adopted an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
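    The chunking and double-buffering ideas can be sketched schematically (in Python, with stand-in functions in place of the CUDA kernels and host-device transfers; the chunk size and prefetch mechanism are assumptions for illustration, not the authors' code):

      # Schematic: split a large volume into chunks that fit in device memory and
      # overlap loading of the next chunk with processing of the current one.
      from concurrent.futures import ThreadPoolExecutor
      import numpy as np

      def load_chunk(volume, start, size):
          """Stand-in for reading one block of traces from disk / host memory."""
          return volume[start:start + size]

      def migrate_on_gpu(chunk):
          """Stand-in for the GPU migration kernel applied to one chunk."""
          return chunk * 2.0  # placeholder computation

      def process_volume(volume, chunk_size):
          results = []
          with ThreadPoolExecutor(max_workers=1) as loader:
              future = loader.submit(load_chunk, volume, 0, chunk_size)  # prefetch first chunk
              for start in range(0, len(volume), chunk_size):
                  chunk = future.result()                 # wait for the buffered chunk
                  nxt = start + chunk_size
                  if nxt < len(volume):                   # start loading the next chunk
                      future = loader.submit(load_chunk, volume, nxt, chunk_size)
                  results.append(migrate_on_gpu(chunk))   # compute overlaps with the load
          return np.concatenate(results)

      print(process_volume(np.arange(10.0), chunk_size=4))  # -> [0. 2. 4. ... 18.]

    In the real implementation the overlap is achieved with CUDA streams and pinned-memory transfers rather than Python threads, but the producer/consumer structure is the same.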

  14. DREAMS and IMAGE: A Model and Computer Implementation for Concurrent, Life-Cycle Design of Complex Systems

    Science.gov (United States)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.

  15. Implementation of the RS232 communication trainer using computers and the ATMEGA microcontroller for interface engineering Courses

    Science.gov (United States)

    Amelia, Afritha; Julham; Viyata Sundawa, Bakti; Pardede, Morlan; Sutrisno, Wiwinta; Rusdi, Muhammad

    2017-09-01

    RS232 serial communication is a standard communication link between computers and microcontrollers. It is taught in the Department of Electrical Engineering and the Department of Computer Engineering and Informatics at Politeknik Negeri Medan. Until recently, teaching relied on a simulation application installed on the computer; the drawback of this approach is that learners never exercise real communication between a computer and trainer hardware. This study therefore developed a hardware trainer through a ten-stage method organized into three major phases, namely analysis of potential problems and data collection, trainer design, and empirical testing and revision. The trainer and its accompanying module were then tested to obtain feedback from learners. The questionnaire results showed an overall learner feedback score of 70.10%.
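    For orientation, the PC side of such an RS232 exchange can be written with the pySerial library; the port name, baud rate and PING/PONG message protocol below are assumptions for the example, not the trainer's actual firmware protocol.

      # Illustrative PC-side RS232 exchange with an ATmega board using pySerial.
      import serial  # pip install pyserial

      def query_board(port="COM3", baudrate=9600):
          # 8 data bits, no parity, 1 stop bit (8N1) are pySerial's defaults.
          with serial.Serial(port, baudrate, timeout=1.0) as link:
              link.write(b"PING\n")      # command assumed to be understood by the firmware
              reply = link.readline()    # e.g. b"PONG\n" echoed back by the ATmega
              return reply.decode(errors="replace").strip()

      if __name__ == "__main__":
          print(query_board())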

  16. Models and methods for design and implementation of computer based control and monitoring systems for production cells

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk

    through the implementation of two cell control systems for robot welding cells in production at Odense Steel Shipyard. It is concluded that cell control technology provides for increased performance in production systems, and that the Cell Control Engineering concept reduces the effort for providing high...

  17. Implementation of Web-Based Education in Egypt through Cloud Computing Technologies and Its Effect on Higher Education

    Science.gov (United States)

    El-Seoud, M. Samir Abou; El-Sofany, Hosam F.; Taj-Eddin, Islam A. T. F.; Nosseir, Ann; El-Khouly, Mahmoud M.

    2013-01-01

    The information technology educational programs at most universities in Egypt face many obstacles that can be overcome using technology enhanced learning. An open source Moodle eLearning platform has been implemented at many public and private universities in Egypt, as an aid to deliver e-content and to provide the institution with various…

  18. Using Innovative Tools to Teach Computer Application to Business Students--A Hawthorne Effect or Successful Implementation Here to Stay

    Science.gov (United States)

    Khan, Zeenath Reza

    2014-01-01

    A year after the primary study that tested the impact of introducing blended learning and guided discovery to help teach computer application to business students, this paper looks into the continued success of using guided discovery and blended learning with learning management system in and out of classrooms to enhance student learning.…

  19. An In-House Prototype for the Implementation of Computer-Based Extensive Reading in a Limited-Resource School

    Science.gov (United States)

    Mayora, Carlos A.; Nieves, Idami; Ojeda, Victor

    2014-01-01

    A variety of computer-based models of Extensive Reading have emerged in the last decade. Different Information and Communication Technologies online usually support these models. However, such innovations are not feasible in contexts where the digital breach limits the access to Internet. The purpose of this paper is to report a project in which…

  20. A Case Study of a Computer Assisted Learning Unit, "The Growth Curve of Microorganisms": Development, Implementation, and Evaluation.

    Science.gov (United States)

    Huppert, Jehuda; Lazarovitz, Reuven

    This three-part paper describes the development of a software program called "The Growth Curve of Microorganisms" for a tenth-grade biology class. Designed to improve students' cognitive skills, the program enables them to investigate, through computer simulations, the impact upon the growth curve of a population of three variables: temperature,…
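    The program itself is not reproduced here, but the kind of growth-curve simulation it describes is commonly based on a logistic model; the following Python sketch (illustrative parameter values only, not the program's actual model) generates such a curve, and the rate constant could in turn be made a function of temperature or the other variables the students vary.

      # Illustrative logistic growth curve for a microbial population.
      import math

      def logistic_growth(n0, k, r, t):
          """Population at time t for initial size n0, carrying capacity k, rate r."""
          return k / (1.0 + (k - n0) / n0 * math.exp(-r * t))

      # Print a simple growth curve: slow start, exponential phase, plateau at k.
      for hour in range(0, 25, 4):
          print(hour, round(logistic_growth(n0=1e3, k=1e9, r=0.8, t=hour)))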

  1. The evaluation of a national research plan to support the implementation of computers in education in The Netherlands (ED 310737)

    NARCIS (Netherlands)

    Moonen, J.C.M.M.; Collis, Betty; Koster, Klaas

    1990-01-01

    This paper describes the evolution of a national research plan for computers and education, an approach which was initiated in the Netherlands in 1983. Two phases can be recognized in the Dutch experience: one from 1984 until 1988 and one from 1989 until 1992. Building upon the experiences of the

  2. Development, Implementation, and Outcomes of an Equitable Computer Science After-School Program: Findings from Middle-School Students

    Science.gov (United States)

    Mouza, Chrystalla; Marzocchi, Alison; Pan, Yi-Cheng; Pollock, Lori

    2016-01-01

    Current policy efforts that seek to improve learning in science, technology, engineering, and mathematics (STEM) emphasize the importance of helping all students acquire concepts and tools from computer science that help them analyze and develop solutions to everyday problems. These goals have been generally described in the literature under the…

  3. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  4. Implementing the flipped classroom methodology to the subject "Applied computing" of the chemical engineering degree at the University of Barcelona

    Directory of Open Access Journals (Sweden)

    Montserrat Iborra

    2017-06-01

    This work focuses on the implementation, development, documentation, analysis and assessment of the flipped-classroom methodology, delivered through a just-in-time teaching strategy, in a pilot group (1 of 6) of the subject "Applied Computing" of the Chemical Engineering Undergraduate Degree of the University of Barcelona. The results show that this technique promotes self-learning, autonomy and time management, as well as an increase in the effectiveness of classroom hours.

  5. A schema for knowledge representation and its implementation in a computer-aided design and manufacturing system

    Energy Technology Data Exchange (ETDEWEB)

    Tamir, D.E.

    1989-01-01

    Modularity in the design and implementation of expert systems relies upon cooperation among the expert systems and communication of knowledge between them. A prerequisite for an effective modular approach is some standard for knowledge representation to be used by the developers of the different modules. In this work the author presents a schema for knowledge representation and applies this schema in the design of a rule-based expert system. He also implements a cooperative expert system using the proposed knowledge representation method. A knowledge representation schema is a formal specification of the internal, conceptual, and external components of a knowledge base, each specified in a separate schema. The internal schema defines the structure of a knowledge base, the conceptual schema defines the concepts, and the external schema formalizes the pragmatics of a knowledge base. The schema is the basis for standardizing knowledge representation systems, and it is used in the various phases of design and specification of the knowledge base. A new model of knowledge representation based on a pattern-recognition interpretation of implications is developed. This model implements the concept of linguistic variables and can therefore emulate human reasoning with linguistic imprecision. The test case for the proposed schema is a cooperative expert system composed of two expert systems, which applies a pattern-recognition interpretation of a generalized one-variable implication with linguistic variables.

  6. Procedures for gathering ground truth information for a supervised approach to a computer-implemented land cover classification of LANDSAT-acquired multispectral scanner data

    Science.gov (United States)

    Joyce, A. T.

    1978-01-01

    Procedures for gathering ground truth information for a supervised approach to a computer-implemented land cover classification of LANDSAT acquired multispectral scanner data are provided in a step by step manner. Criteria for determining size, number, uniformity, and predominant land cover of training sample sites are established. Suggestions are made for the organization and orientation of field team personnel, the procedures used in the field, and the format of the forms to be used. Estimates are made of the probable expenditures in time and costs. Examples of ground truth forms and definitions and criteria of major land cover categories are provided in appendixes.

  7. Fully Internally Contracted Multireference Configuration Interaction Theory Using Density Matrix Renormalization Group: A Reduced-Scaling Implementation Derived by Computer-Aided Tensor Factorization.

    Science.gov (United States)

    Saitow, Masaaki; Kurashige, Yuki; Yanai, Takeshi

    2015-11-10

    We present an extended implementation of the multireference configuration interaction (MRCI) method combined with the quantum-chemical density matrix renormalization group (DMRG). In a previous study, we introduced the combined theory, referred to as DMRG-MRCI, as a method to calculate high-level dynamic electron correlation on top of the DMRG wave function, which accounts for active-space (or strong) correlation using a large number of active orbitals. The DMRG-MRCI method is built on the full internal-contraction scheme for a compact reference treatment and on the cumulant approximation for the treatment of the four-particle rank reduced density matrix (4-RDM). The previous implementation achieved MRCI calculations with the active space (24e,24o), deemed the largest on record, but the inherent Nact^8 × N computational complexity was found to be a hindrance to using even larger active spaces. In this study, an extended optimization of the tensor contractions is developed by explicitly incorporating the rank reduction of the decomposed form of the cumulant-approximated 4-RDM into the factorization. It reduces the computational scaling (to Nact^7 × N) as well as the cache-miss penalty associated with direct evaluation of the complex cumulant reconstruction. The present scheme, however, faces an increased complexity of factorization patterns for optimally implementing the tensor contraction terms involving the decomposed 4-RDM objects. We address this complexity using an enhanced symbolic manipulation computer program for deriving and coding programmable equations. The new DMRG-MRCI implementation is applied to the determination of the stability of the iron(IV)-oxo porphyrin relative to the iron(V) electronic isomer (electromer) using the active space (29e,29o) (including four second d-shell orbitals of iron) with triple-ζ-quality atomic orbital basis sets. The DMRG-cu(4)-MRCI+Q model is shown to favor the triradicaloid iron(IV)-oxo state as the lowest

  8. Theoretical applied questions and their implementation in development of hierarchical computer control systems (CNC of facsimile copy machines for art engraving on minerals

    Directory of Open Access Journals (Sweden)

    Morozov V. I.

    2002-09-01

    A technological scheme is proposed that implements machine engraving on a mineral with facsimile transfer of a halftone image from a personal computer. The dot (microstroke) image is formed by a pulse system together with an electromechanical converter, such that the integral optical density of separate fragments equals the optical density of the same fragments of the initial image. The structural design and selected parameters of a two-level hierarchical control system are formalized, and a description of the top level of the developed hierarchical control system is given.

  9. Integrated Academic Information Management Systems (IAIMS). Part III. Implementation of integrated information services. Library/computer center partnership.

    Science.gov (United States)

    Feng, C C; Weise, F O

    1988-03-01

    Information technologies are changing the traditional role of the library from that of a repository of information to that of an aggressive provider of information services utilizing electronic methods. In many cases, the library cannot realistically achieve this transformation independently but must work with the computer center to reach its objectives. Various models of the integration of libraries and computer centers are thus emerging. At the University of Maryland at Baltimore the Health Sciences Library and the Information Resources Management Division have developed a partnership based on functional relationships without changing the organizational structure. Strategic planning for an Integrated Academic Information Management System (IAIMS) acted as a catalyst in the process. The authors present the evolution of the partnership and discuss current projects being developed jointly by the two units.

  10. Implementation issues for mobile-wireless infrastructure and mobile health care computing devices for a hospital ward setting.

    Science.gov (United States)

    Heslop, Liza; Weeding, Stephen; Dawson, Linda; Fisher, Julie; Howard, Andrew

    2010-08-01

    mWard is a project whose purpose is to enhance existing clinical and administrative decision support and to consider mobile computers, connected via a wireless network, for bringing clinical information to the point of care. The mWard project allowed a limited number of users to test and evaluate a selected range of mobile-wireless infrastructure and mobile health care computing devices at the neuroscience ward of Southern Health's Monash Medical Centre, Victoria, Australia. Before the project commenced, the ward had two PCs which were used as terminals by all ward-based staff and the numerous multi-disciplinary staff who visited the ward each day. The first stage of the research, outlined in this paper, evaluates a selected range of mobile-wireless infrastructure.

  11. Power Spectrum Computation for an Arbitrary Phase Noise Using Middleton's Convolution Series: Implementation Guideline and Experimental Illustration.

    Science.gov (United States)

    Brochard, Pierre; Sudmeyer, Thomas; Schilt, Stephane

    2017-11-01

    In this paper, we revisit the convolution series initially introduced by Middleton several decades ago to determine the power spectrum (or spectral line shape) of a periodic signal from its phase noise power spectral density. This topic is of wide interest, as it has an important impact on many scientific areas that involve lasers and oscillators. We introduce a simple guideline that enables a fairly straightforward computation of the power spectrum corresponding to an arbitrary phase noise. We show the benefit of this approach from a computational point of view and apply it to various types of experimental signals with different phase noise levels, showing very good agreement with the experimental spectra. This approach also provides a qualitative and intuitive understanding of the power spectrum corresponding to different regimes of phase noise.
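    For context, a commonly quoted form of such a convolution series (not necessarily the exact normalization used in the paper; prefactors and one- versus two-sided PSD conventions differ between treatments) expresses the RF power spectrum of a carrier at f_0 with Gaussian phase noise of two-sided power spectral density S_φ(f) and total phase variance σ_φ² as:

      \[
      S_{\mathrm{RF}}(f) = e^{-\sigma_\varphi^{2}} \left[ \delta(f - f_0)
        + S_\varphi(f - f_0)
        + \frac{(S_\varphi * S_\varphi)(f - f_0)}{2!}
        + \frac{(S_\varphi * S_\varphi * S_\varphi)(f - f_0)}{3!} + \cdots \right],
      \qquad
      \sigma_\varphi^{2} = \int_{-\infty}^{\infty} S_\varphi(f)\, df ,
      \]

    where * denotes convolution. The series converges quickly at low phase noise, where the delta-function carrier and the first convolution term dominate, and builds up a broadened, Gaussian-like line shape when σ_φ² is large.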

  12. Object-oriented design and implementation of CFDLab: a computer-assisted learning tool for fluid dynamics using dual reciprocity boundary element methodology

    Science.gov (United States)

    Friedrich, J.

    1999-08-01

    As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising to help both teachers and learners in their difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their concepts, flexibility, maintainability and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, an educational PC program for university level called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme, allowing it to handle a variety of relevant governing equations in two dimensions (2D Laplace, Poisson, diffusion, and transient convection-diffusion) on personal computers thanks to its simple pre- and postprocessing.

  13. A Mammalian Retinal Ganglion Cell Implements a Neuronal Computation That Maximizes the SNR of Its Postsynaptic Currents.

    Science.gov (United States)

    Homann, Jan; Freed, Michael A

    2017-02-08

    Neurons perform computations by integrating excitatory and inhibitory synaptic inputs. Yet, it is rarely understood what computation is being performed, or how much excitation or inhibition this computation requires. Here we present evidence for a neuronal computation that maximizes the signal-to-noise power ratio (SNR). We recorded from OFF delta retinal ganglion cells in the guinea pig retina and monitored synaptic currents that were evoked by visual stimulation (flashing dark spots). These synaptic currents were mediated by a decrease in an outward current from inhibitory synapses (disinhibition) combined with an increase in an inward current from excitatory synapses. We found that the SNR of the combined excitatory and disinhibitory currents was voltage sensitive, peaking at membrane potentials near resting potential. At the membrane potential for maximal SNR, the amplitude of each current, either excitatory or disinhibitory, was proportional to its SNR. Such proportionate scaling is the theoretically best strategy for combining excitatory and disinhibitory currents to maximize the SNR of their combined current. Moreover, as spot size or contrast changed, the amplitudes of excitatory and disinhibitory currents also changed but remained in proportion to their SNRs, indicating a dynamic rebalancing of excitatory and inhibitory currents to maximize SNR. SIGNIFICANCE STATEMENT: We present evidence that the balance of excitatory and disinhibitory inputs to a type of retinal ganglion cell maximizes the signal-to-noise power ratio (SNR) of its postsynaptic currents. This is significant because chemical synapses on a retinal ganglion cell require the probabilistic release of transmitter. Consequently, when the same visual stimulus is presented repeatedly, postsynaptic currents vary in amplitude. Thus, maximizing SNR may be a strategy for producing the most reliable signal possible given the inherent unreliability of synaptic transmission. Copyright © 2017 the authors
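
    The proportional-scaling argument above is the classical maximal-ratio-combining result for signals with independent noise. The short NumPy check below is an illustration, not the authors' analysis; the amplitudes and noise levels are invented. It confirms numerically that the SNR of a weighted sum peaks exactly where the combined amplitudes are proportional to the individual SNRs.

```python
import numpy as np

# For two noisy current components with independent noise, the power SNR of a
# weighted sum SNR(w) = (w1*a1 + w2*a2)^2 / (w1^2*s1^2 + w2^2*s2^2) is maximal
# when the combined amplitudes w_i*a_i are proportional to the individual SNRs
# a_i^2/s_i^2 (maximal-ratio combining).
a = np.array([1.0, 0.4])      # hypothetical mean amplitudes (excitation, disinhibition)
s = np.array([0.2, 0.3])      # hypothetical noise standard deviations

def snr(w):
    return (w @ a) ** 2 / (w ** 2 @ s ** 2)

ratios = np.linspace(0.01, 50, 20000)            # grid over w1/w2, with w2 fixed to 1
best_ratio = ratios[np.argmax([snr(np.array([r, 1.0])) for r in ratios])]
optimal_ratio = (a[0] / s[0] ** 2) / (a[1] / s[1] ** 2)
print(best_ratio, optimal_ratio)                  # agree to grid precision

# at the optimum, the scaled amplitudes are proportional to the SNRs:
w_opt = np.array([optimal_ratio, 1.0])
print((w_opt * a) / (a ** 2 / s ** 2))            # the two entries are equal
```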

  14. Generating randomised virtualised scenarios for ethical hacking and computer security education: SecGen implementation and deployment

    OpenAIRE

    Schreuders, ZC; Ardern, L

    2015-01-01

    Computer security students benefit from having hands-on experience with hacking tools and with access to vulnerable systems that they can attack and defend. However, vulnerable VMs are static; once they have been exploited by a student there is no repeatable challenge, as the vulnerable boxes never change. A novel solution, SecGen, has been created and deployed. SecGen solves the issue by creating vulnerable machines with randomised vulnerabilities and services, with constraints that ensur...

  15. A Soft Computing Approach to Crack Detection and Impact Source Identification with Field-Programmable Gate Array Implementation

    Science.gov (United States)

    2010-04-14

    Published in Advances in Fuzzy Systems, volume 2012, Article ID 343174, 12 pages, doi:10.1155/2012/343174. A fuzzy inference system is developed to predict the location and depth of the crack of a cracked cantilever beam structure in close proximity to the real results, and a hybrid artificial intelligence technique with a fuzzy-neuro controller is used.

  16. Implementation of a Pseudo-Bending Seismic Travel-Time Calculator in a Distributed Parallel Computing Environment

    Science.gov (United States)

    2008-09-01

    ...of a given phase must interact (Moho, 410, 660, etc.). We specify additional interfaces at levels within the Earth model that could potentially... Moho and other interfaces in the mantle down to, but not including, the 660-km discontinuity, thereby constraining the computed ray to bottom somewhere... (The remaining fragment lists the supported phase interactions, i.e., reflections, diffractions and refractions at the Moho and at the M100 to M660 model interfaces.)

  17. Computer-aided design and fabrication of wire-wrap (trademark) type circuit boards: A new symbolism and its implementation

    Science.gov (United States)

    Montminy, B.; Carbonneau, R.; Laflamme, A.; Lessard, M.; Blanchard, A.

    1982-02-01

    We define here an encoding symbolism that permits, with a minimum of statements, a thorough description of the numerous interconnections of complex electronic circuitry. This symbolism has been integrated at the Defence Research Establishment Valcartier in a computer-aided method that considerably eases the passage between an engineering electronic schematic and the related interconnection matrix required for Wire-Wrap (Trademark) hardware. Electronics prototyping has been dramatically speeded up with this technique because of the time savings in the preparatory reduction of interconnection data.

  18. Implementation and Evaluation of the Streamflow Statistics (StreamStats) Web Application for Computing Basin Characteristics and Flood Peaks in Illinois

    Science.gov (United States)

    Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.

    2010-01-01

    Illinois StreamStats (ILSS) is a Web-based application for computing selected basin characteristics and flood-peak quantiles based on the most recently (2010) published (Soong and others, 2004) regional flood-frequency equations at any rural stream location in Illinois. Limited streamflow statistics including general statistics, flow durations, and base flows also are available for U.S. Geological Survey (USGS) streamflow-gaging stations. ILSS can be accessed on the Web at http://streamstats.usgs.gov/ by selecting the State Applications hyperlink and choosing Illinois from the pull-down menu. ILSS was implemented for Illinois by obtaining and projecting ancillary geographic information system (GIS) coverages; populating the StreamStats database with streamflow-gaging station data; hydroprocessing the 30-meter digital elevation model (DEM) for Illinois to conform to streams represented in the National Hydrographic Dataset 1:100,000 stream coverage; and customizing the Web-based Extensible Markup Language (XML) programs for computing basin characteristics for Illinois. The basin characteristics computed by ILSS then were compared to the basin characteristics used in the published study, and adjustments were applied to the XML algorithms for slope and basin length. Testing of ILSS was accomplished by comparing flood quantiles computed by ILSS at an approximately random sample of 170 streamflow-gaging stations with the published flood quantile estimates. Differences between the log-transformed flood quantiles were not statistically significant at the 95-percent confidence level for the State as a whole, nor by the regions determined by each equation, except for region 1, in the northwest corner of the State. In region 1, the average difference in flood quantile estimates ranged from 3.76 percent for the 2-year flood quantile to 4.27 percent for the 500-year flood quantile. The total number of stations in region 1 was small (21) and the mean

  19. Teacher Conceptions and Approaches Associated with an Immersive Instructional Implementation of Computer-Based Models and Assessment in a Secondary Chemistry Classroom

    Science.gov (United States)

    Waight, Noemi; Liu, Xiufeng; Gregorius, Roberto Ma.; Smith, Erica; Park, Mihwa

    2014-02-01

    This paper reports on a case study of an immersive and integrated multi-instructional approach (namely computer-based model introduction and connection with content; facilitation of individual student exploration guided by exploratory worksheet; use of associated differentiated labs and use of model-based assessments) in the implementation of coupled computer-based models and assessment in a high-school chemistry classroom. Data collection included in-depth teacher interviews, classroom observations, student interviews and researcher notes. Teacher conceptions highlighted the role of models as tools; the benefits of abstract portrayal via visualizations; appropriate enactment of model implementation; concerns with student learning and issues with time. The case study revealed numerous challenges reconciling macro, submicro and symbolic phenomena with the NetLogo model. Nonetheless, the effort exhibited by the teacher provided a platform to support the evolution of practice over time. Students' reactions reflected a continuum of confusion and benefits which were directly related to their background knowledge and experiences with instructional modes. The findings have implications for the role of teacher knowledge of models, the modeling process and pedagogical content knowledge; the continuum of student knowledge as novice users and the role of visual literacy in model decoding, comprehension and translation.

  20. Implementation of a Computational Model for Information Processing and Signaling from a Biological Neural Network of Neostriatum Nucleus

    Directory of Open Access Journals (Sweden)

    C. Sanchez-Vazquez

    2014-06-01

    Full Text Available Recently, several mathematical models have been developed to study and explain the way information is processed in the brain. The models published account for a myriad of perspectives, from single neuron segments to neural networks, and lately, with the use of supercomputing facilities, to the study of whole environments of nuclei interacting under massive stimuli and processing. Some of the most complex, and also most studied, neural structures are the basal ganglia nuclei in the brain, amongst which we can find the Neostriatum. Currently, only a few papers about high-scale biologically based computational modeling of this region have been published. It has been demonstrated that the basal ganglia region contains functions related to learning and decision making based on rules of the action-selection type, which are of particular interest for the machine autonomous-learning field. This knowledge could be clearly transferred between areas of research. The present work proposes a model of information processing by integrating knowledge generated from widely accepted experiments in both morphology and biophysics, through theories such as the compartmental electrical model, Rall's cable equation, and the Hodgkin-Huxley particle potential regulations, among others. Additionally, the leaky integrator framework is incorporated in an adapted function. This was accomplished through a computational environment prepared for high-scale neural simulation which delivers data output equivalent to that from the original model, and that can not only be analyzed as a Bayesian problem, but also successfully compared to the biological specimen.
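
    To make the leaky integrator building block mentioned above concrete, here is a bare leaky integrate-and-fire neuron in plain Python (an illustrative sketch only; the time step, membrane parameters and constant input current are assumptions and are not taken from the cited model).

```python
import math

# Leaky integrate-and-fire neuron driven by a constant current (illustrative
# parameter values, not those of the Neostriatum model described above).
dt, T = 1e-4, 0.3                        # time step and total duration (s)
tau_m, R, v_rest = 20e-3, 1e7, -65e-3    # membrane time constant (s), resistance (ohm), rest (V)
v_thresh, v_reset = -50e-3, -65e-3       # spike threshold and reset potential (V)
I_in = 2e-9                              # constant input current (A)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    dv = (-(v - v_rest) + R * I_in) / tau_m   # leaky integration toward v_rest + R*I
    v += dv * dt
    if v >= v_thresh:                    # threshold crossing: emit a spike and reset
        spikes.append(step * dt)
        v = v_reset

# expected interspike interval for a constant drive: tau_m * ln(RI / (RI - (v_th - v_rest)))
print(len(spikes), "spikes;", round(tau_m * math.log(R * I_in / (R * I_in - (v_thresh - v_rest))), 4), "s per spike")
```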

  1. Implementation of a damage model in a finite element program for computation of structures under dynamic loading

    Directory of Open Access Journals (Sweden)

    Nasserdine Oudni

    2016-01-01

    Full Text Available This work is a numerical simulation of nonlinear problems of the damage process and fracture of quasi-brittle materials, especially concrete. In this study, we model the macroscopic behavior of the concrete material, taking into account the phenomenon of damage. The J. Mazars model, whose principle is based on damage mechanics, has been implemented in a finite element program written in Fortran 90. It takes into account the dissymmetry of concrete behavior in tension and in compression, covering tensile cracking and rupture in compression. It is a model that is commonly used for static and pseudo-static systems, but in this work it was used in the dynamic case.
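
    The following one-dimensional sketch of a Mazars-type damage law shows the basic mechanics behind such an implementation; it is illustrative only (textbook-style parameter values, a simplified equivalent strain taken as |eps|), and it is not the Fortran 90 code of the cited work. Damage D grows once an equivalent-strain threshold is exceeded and degrades the elastic stress as sigma = (1 - D) E eps, with separate tension and compression parameters reproducing the dissymmetry mentioned above.

```python
import math

# Minimal 1-D sketch of a Mazars-type isotropic damage law (illustrative values).
E      = 30e9          # Young's modulus (Pa)
eps_d0 = 1e-4          # damage threshold strain
A_t, B_t = 0.8, 2e4    # tension parameters
A_c, B_c = 1.4, 1.5e3  # compression parameters

def mazars_damage(eps_eq, A, B):
    """Damage variable D in [0, 1] as a function of the equivalent strain."""
    if eps_eq <= eps_d0:
        return 0.0
    D = 1.0 - eps_d0 * (1.0 - A) / eps_eq - A * math.exp(-B * (eps_eq - eps_d0))
    return min(max(D, 0.0), 1.0)

def uniaxial_stress(eps):
    """Damaged elastic stress sigma = (1 - D) * E * eps for uniaxial loading."""
    A, B = (A_t, B_t) if eps >= 0 else (A_c, B_c)
    D = mazars_damage(abs(eps), A, B)       # simplification: equivalent strain = |eps|
    return (1.0 - D) * E * eps

for eps in (0.5e-4, 1e-4, 2e-4, 3e-4, 5e-4):
    print(f"eps = {eps:.1e}  ->  sigma = {uniaxial_stress(eps) / 1e6:6.2f} MPa")
```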

  2. FORMALIZATION OF THE ACCOUNTING VALUABLE MEMES METHOD FOR THE PORTFOLIO OF ORGANIZATION DEVELOPMENT AND INFORMATION COMPUTER TOOLS FOR ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Serhii D. Bushuiev

    2017-12-01

    Full Text Available The current state of project management demonstrates a steady trend toward an increasing role for flexible, "soft" management practices. A method for preparing decisions on the formation of a value-oriented portfolio, based on a comparison of the levels of internal organizational values, is proposed. The method formalizes the methodological foundations of value-oriented portfolio management in the development of organizations in the form of approaches, basic terms and technological methods using ICT, which makes it possible to use them as an integral knowledge system for creating an automated system for managing the portfolios of organizations. The result of the study is a deepening of the theoretical provisions for managing the development of organizations through the implementation of a value-oriented portfolio of projects, which made it possible to formalize the method of accounting for value memes in the development portfolios of organizations and to disclose its logic, essence, objective basis and rules.

  3. Implementing WebGL and HTML5 in Macromolecular Visualization and Modern Computer-Aided Drug Design.

    Science.gov (United States)

    Yuan, Shuguang; Chan, H C Stephen; Hu, Zhenquan

    2017-06-01

    Web browsers have long been recognized as potential platforms for remote macromolecule visualization. However, the difficulty of transferring large-scale data to clients and the lack of native support for hardware-accelerated applications in the local browser undermine the feasibility of such utilities. With the introduction of WebGL and HTML5 technologies in recent years, it is now possible to exploit the power of a graphics-processing unit (GPU) from a browser without any third-party plugin. Many new tools have been developed for biological molecule visualization and modern drug discovery. In contrast to traditional offline tools, WebGL- and HTML5-based tools feature real-time computing, interactive data analysis, and cross-platform operation, facilitating biological research in a more efficient and user-friendly way. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A ray casting method for the computation of the area of feasible solutions for multicomponent systems: Theory, applications and FACPACK-implementation.

    Science.gov (United States)

    Sawall, Mathias; Neymeyr, Klaus

    2017-04-01

    Multivariate curve resolution methods suffer from the non-uniqueness of their solutions. The set of possible nonnegative solutions can be represented by the so-called Area of Feasible Solutions (AFS). The AFS for an s-component system is a bounded (s-1)-dimensional set. The numerical computation and the geometric construction of the AFS are well understood for two- and three-component systems but get much more complicated for systems with four or even more components. This work introduces a new and robust ray casting method for the computation of the AFS for general s-component systems. The algorithm shoots rays from the origin and records the intersections of these rays with the AFS. The ray casting method is computationally fast, stable with respect to noise and able to detect the various possible shapes of AFS sets. The easily implementable algorithm is tested for various three- and four-component data sets. Copyright © 2016 Elsevier B.V. All rights reserved.
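
    The ray-casting idea can be pictured with the heavily hedged sketch below. The feasibility test in_afs is a stub standing in for the actual nonnegativity-based test of the FACPACK implementation; the toy oracle, ray count and search radius are assumptions, and the sketch further assumes the region is star-shaped as seen from the origin, whereas the real method records every entry and exit intersection along each ray.

```python
import numpy as np

def trace_afs_boundary(in_afs, n_rays=360, r_max=10.0, tol=1e-4):
    """Sketch of ray casting for a 2-D AFS (three-component case): shoot rays
    from the origin and bisect along each ray for the outermost point that
    still satisfies the feasibility test in_afs(point) -> bool."""
    boundary = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        direction = np.array([np.cos(theta), np.sin(theta)])
        lo, hi = 0.0, r_max
        if not in_afs(lo * direction):
            continue                      # this ray misses the region near the origin
        while hi - lo > tol:              # bisection along the ray
            mid = 0.5 * (lo + hi)
            if in_afs(mid * direction):
                lo = mid
            else:
                hi = mid
        boundary.append(lo * direction)
    return np.array(boundary)

# toy oracle: an elliptical "AFS" used only to exercise the tracer
ellipse = lambda p: (p[0] / 2.0) ** 2 + (p[1] / 1.0) ** 2 <= 1.0
print(trace_afs_boundary(ellipse, n_rays=8))
```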

  5. Computational Implementation of a Thermodynamically Based Work Potential Model For Progressive Microdamage and Transverse Cracking in Fiber-Reinforced Laminates

    Science.gov (United States)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.

    2012-01-01

    A continuum-level, dual internal state variable, thermodynamically based work potential model, Schapery Theory, is used to capture the effects of two matrix damage mechanisms in a fiber-reinforced laminated composite: microdamage and transverse cracking. Matrix microdamage accrues primarily in the form of shear microcracks between the fibers of the composite, whereas larger transverse matrix cracks typically span the thickness of a lamina and run parallel to the fibers. Schapery Theory uses the energy potential required to advance structural changes associated with the damage mechanisms to govern damage growth through a set of internal state variables. These state variables are used to quantify the stiffness degradation resulting from damage growth. The transverse and shear stiffness of the lamina are related to the internal state variables through a set of measurable damage functions. Additionally, the damage variables for a given strain state can be calculated from a set of evolution equations. These evolution equations and damage functions are implemented into the finite element method and used to govern the constitutive response of the material points in the model. Additionally, an axial failure criterion is included in the model. The response of a center-notched, buffer strip-stiffened panel subjected to uniaxial tension is investigated and the results are compared to experiment.

  6. Implementation of a web-based, interactive polytrauma tutorial in computed tomography for radiology residents: How we do it

    Energy Technology Data Exchange (ETDEWEB)

    Schlorhaufer, C., E-mail: Schlorhaufer.Celia@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Behrends, M., E-mail: behrends.marianne@mh-hannover.de [Peter L. Reichertz Department of Medical Informatics, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Diekhaus, G., E-mail: Diekhaus.Gesche@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Keberle, M., E-mail: m.keberle@bk-paderborn.de [Department of Diagnostic and Interventional Radiology, Brüderkrankenhaus St. Josef Paderborn, Husener Str. 46, 33098 Paderborn (Germany); Weidemann, J., E-mail: Weidemann.Juergen@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany)

    2012-12-15

    Purpose: Due to the time factor in polytraumatized patients all relevant pathologies in a polytrauma computed tomography (CT) scan have to be read and communicated very quickly. During radiology residency acquisition of effective reading schemes based on typical polytrauma pathologies is very important. Thus, an online tutorial for the structured diagnosis of polytrauma CT was developed. Materials and methods: Based on current multimedia theories like the cognitive load theory a didactic concept was developed. As a web-environment the learning management system ILIAS was chosen. CT data sets were converted into online scrollable QuickTime movies. Audiovisual tutorial movies with guided image analyses by a consultant radiologist were recorded. Results: The polytrauma tutorial consists of chapterized text content and embedded interactive scrollable CT data sets. Selected trauma pathologies are demonstrated to the user by guiding tutor movies. Basic reading schemes are communicated with the help of detailed commented movies of normal data sets. Common and important pathologies could be explored in a self-directed manner. Conclusions: Ambitious didactic concepts can be supported by a web based application on the basis of cognitive load theory and currently available software tools.

  7. Implementation of a web-based, interactive polytrauma tutorial in computed tomography for radiology residents: how we do it.

    Science.gov (United States)

    Schlorhaufer, C; Behrends, M; Diekhaus, G; Keberle, M; Weidemann, J

    2012-12-01

    Due to the time factor in polytraumatized patients all relevant pathologies in a polytrauma computed tomography (CT) scan have to be read and communicated very quickly. During radiology residency acquisition of effective reading schemes based on typical polytrauma pathologies is very important. Thus, an online tutorial for the structured diagnosis of polytrauma CT was developed. Based on current multimedia theories like the cognitive load theory a didactic concept was developed. As a web-environment the learning management system ILIAS was chosen. CT data sets were converted into online scrollable QuickTime movies. Audiovisual tutorial movies with guided image analyses by a consultant radiologist were recorded. The polytrauma tutorial consists of chapterized text content and embedded interactive scrollable CT data sets. Selected trauma pathologies are demonstrated to the user by guiding tutor movies. Basic reading schemes are communicated with the help of detailed commented movies of normal data sets. Common and important pathologies could be explored in a self-directed manner. Ambitious didactic concepts can be supported by a web based application on the basis of cognitive load theory and currently available software tools. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. Rapid Reconstitution Packages (RRPs) implemented by integration of computational fluid dynamics (CFD) and 3D printed microfluidics.

    Science.gov (United States)

    Chi, Albert; Curi, Sebastian; Clayton, Kevin; Luciano, David; Klauber, Kameron; Alexander-Katz, Alfredo; D'hers, Sebastian; Elman, Noel M

    2014-08-01

    Rapid Reconstitution Packages (RRPs) are portable platforms that integrate microfluidics for rapid reconstitution of lyophilized drugs. Rapid reconstitution of lyophilized drugs using standard vials and syringes is an error-prone process. RRPs were designed using computational fluid dynamics (CFD) techniques to optimize fluidic structures for rapid mixing and integrating physical properties of targeted drugs and diluents. Devices were manufactured using stereo lithography 3D printing for micrometer structural precision and rapid prototyping. Tissue plasminogen activator (tPA) was selected as the initial model drug to test the RRPs as it is unstable in solution. tPA is a thrombolytic drug, stored in lyophilized form, required in emergency settings for which rapid reconstitution is of critical importance. RRP performance and drug stability were evaluated by high-performance liquid chromatography (HPLC) to characterize release kinetics. In addition, enzyme-linked immunosorbent assays (ELISAs) were performed to test for drug activity after the RRPs were exposed to various controlled temperature conditions. Experimental results showed that RRPs provided effective reconstitution of tPA that strongly correlated with CFD results. Simulation and experimental results show that release kinetics can be adjusted by tuning the device structural dimensions and diluent drug physical parameters. The design of RRPs can be tailored for a number of applications by taking into account physical parameters of the active pharmaceutical ingredients (APIs), excipients, and diluents. RRPs are portable platforms that can be utilized for reconstitution of emergency drugs in time-critical therapies.

  9. Clinical implementation of an emergency department coronary computed tomographic angiography protocol for triage of patients with suspected acute coronary syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Ghoshhajra, Brian B.; Staziaki, Pedro V.; Vadvala, Harshna; Kim, Phillip; Meyersohn, Nandini M.; Janjua, Sumbal A.; Hoffmann, Udo [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Takx, Richard A.P. [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Neilan, Tomas G.; Francis, Sanjeev [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Massachusetts General Hospital and Harvard Medical School, Division of Cardiology, Boston, MA (United States); Bittner, Daniel [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nuernberg (FAU), Department of Medicine 2 - Cardiology, Erlangen (Germany); Mayrhofer, Thomas [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Stralsund University of Applied Sciences, School of Business Studies, Stralsund (Germany); Greenwald, Jeffrey L. [Massachusetts General Hospital and Harvard Medical School, Department of Medicine, Boston, MA (United States); Truong, Quyhn A. [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); Weill Cornell College of Medicine, Department of Radiology, New York, NY (United States); Abbara, Suhny [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology (Cardiovascular Imaging) and Division of Cardiology, Boston, MA (United States); UT Southwestern Medical Center, Department Cardiothoracic Imaging, Dallas, TX (United States); Brown, David F.M.; Nagurney, John T. [Massachusetts General Hospital and Harvard Medical School, Department of Emergency Medicine, Boston, MA (United States); Januzzi, James L. [Massachusetts General Hospital and Harvard Medical School, Division of Cardiology, Boston, MA (United States); Collaboration: MGH Emergency Cardiac CTA Program Contributors

    2017-07-15

    To evaluate the efficiency and safety of emergency department (ED) coronary computed tomography angiography (CTA) during a 3-year clinical experience. Single-center registry of coronary CTA in consecutive ED patients with suspicion of acute coronary syndrome (ACS). The primary outcome was efficiency of coronary CTA defined as the length of hospitalization. Secondary endpoints of safety were defined as the rate of downstream testing, normalcy rates of invasive coronary angiography (ICA), absence of missed ACS, and major adverse cardiac events (MACE) during follow-up, and index radiation exposure. One thousand twenty two consecutive patients were referred for clinical coronary CTA with suspicion of ACS. Overall, median time to discharge home was 10.5 (5.7-24.1) hours. Patient disposition was 42.7 % direct discharge from the ED, 43.2 % discharge from emergency unit, and 14.1 % hospital admission. ACS rate during index hospitalization was 9.1 %. One hundred ninety two patients underwent additional diagnostic imaging and 77 underwent ICA. The positive predictive value of CTA compared to ICA was 78.9 % (95 %-CI 68.1-87.5 %). Median CT radiation exposure was 4.0 (2.5-5.8) mSv. No ACS was missed; MACE at follow-up after negative CTA was 0.2 %. Coronary CTA in an experienced tertiary care setting allows for efficient and safe management of patients with suspicion for ACS. (orig.)

  10. Clinical implementation of an objective computer-aided protocol for intervention in intra-treatment correction using electronic portal imaging.

    Science.gov (United States)

    Van den Heuvel, F; De Neve, W; Verellen, D; Coghe, M; Coen, V; Storme, G

    1995-06-01

    In order to test the feasibility of a protocol for intra-fractional adjustment of the patient position, during radiation therapy treatment in the pelvic region, a two-fold study is carried out. The protocol involves an objective quantitative measurement of the error in positioning starting from the comparison of a portal image with a reference image. The first part of the study applies the protocol to determine the efficacy of adjustment using subjective determination of the positioning errors by a clinician by measuring the residual errors after adjustment. A group of 13 patients was followed extensively throughout their treatment, analyzing 240 fields. In the second part the measurement itself determines the extent of readjustment of the position. Throughout the procedure elapsed time is measured to determine the extra time involved in using this procedure. For this part a group of 21 patients was followed yielding statistics on 218 fields. Using this computer aided protocol it is shown that systematic as well as random errors can be reduced to standard deviations of the order of 1 mm. The price to pay however is additional treatment time up to 58% of the treatment time without the protocol. Time analysis shows that the largest part of the added time is spent on the readjustment of the patients' position adding a mean of 37% of time to the treatment of one field. This is despite the fact that the readjustment was performed using a remote couch controller. Finally a statistical analysis shows that it is possible to select patients benefiting from the use of such a protocol after a limited number of fractions.

  11. Fish and chips: implementation of a neural network model into computer chips to maximize swimming efficiency in autonomous underwater vehicles.

    Science.gov (United States)

    Blake, R W; Ng, H; Chan, K H S; Li, J

    2008-09-01

    Recent developments in the design and propulsion of biomimetic autonomous underwater vehicles (AUVs) have focused on boxfish as models (e.g. Deng and Avadhanula 2005 Biomimetic micro underwater vehicle with oscillating fin propulsion: system design and force measurement Proc. 2005 IEEE Int. Conf. Robot. Auto. (Barcelona, Spain) pp 3312-7). Whilst such vehicles have many potential advantages in operating in complex environments (e.g. high manoeuvrability and stability), limited battery life and payload capacity are likely functional disadvantages. Boxfish employ undulatory median and paired fins during routine swimming which are characterized by high hydromechanical Froude efficiencies (approximately 0.9) at low forward speeds. Current boxfish-inspired vehicles are propelled by a low aspect ratio, 'plate-like' caudal fin (ostraciiform tail) which can be shown to operate at a relatively low maximum Froude efficiency (approximately 0.5) and is mainly employed as a rudder for steering and in rapid swimming bouts (e.g. escape responses). Given this and the fact that bioinspired engineering designs are not obligated to wholly duplicate a biological model, computer chips were developed using a multilayer perception neural network model of undulatory fin propulsion in the knifefish Xenomystus nigri that would potentially allow an AUV to achieve high optimum values of propulsive efficiency at any given forward velocity, giving a minimum energy drain on the battery. We envisage that externally monitored information on flow velocity (sensory system) would be conveyed to the chips residing in the vehicle's control unit, which in turn would signal the locomotor unit to adopt kinematics (e.g. fin frequency, amplitude) associated with optimal propulsion efficiency. Power savings could protract vehicle operational life and/or provide more power to other functions (e.g. communications).

  12. Reflections on the Implementation of Low-Dose Computed Tomography Screening in Individuals at High Risk of Lung Cancer in Spain.

    Science.gov (United States)

    Garrido, Pilar; Sánchez, Marcelo; Belda Sanchis, José; Moreno Mata, Nicolás; Artal, Ángel; Gayete, Ángel; Matilla González, José María; Galbis Caravajal, José Marcelo; Isla, Dolores; Paz-Ares, Luis; Seijo, Luis M

    2017-10-01

    Lung cancer (LC) is a major public health issue. Despite recent advances in treatment, primary prevention and early diagnosis are key to reducing the incidence and mortality of this disease. A recent clinical trial demonstrated the efficacy of selective screening by low-dose computed tomography (LDCT) in reducing the risk of both lung cancer mortality and all-cause mortality in high-risk individuals. This article contains the reflections of an expert group on the use of LDCT for early diagnosis of LC in high-risk individuals, and how to evaluate its implementation in Spain. The expert group was set up by the Spanish Society of Pulmonology and Thoracic Surgery (SEPAR), the Spanish Society of Thoracic Surgery (SECT), the Spanish Society of Radiology (SERAM) and the Spanish Society of Medical Oncology (SEOM). Copyright © 2017 SEPAR. Publicado por Elsevier España, S.L.U. All rights reserved.

  13. Using an adaptive expertise lens to understand the quality of teachers' classroom implementation of computer-supported complex systems curricula in high school science

    Science.gov (United States)

    Yoon, Susan A.; Koehler-Yom, Jessica; Anderson, Emma; Lin, Joyce; Klopfer, Eric

    2015-05-01

    Background: This exploratory study is part of a larger-scale research project aimed at building theoretical and practical knowledge of complex systems in students and teachers with the goal of improving high school biology learning through professional development and a classroom intervention. Purpose: We propose a model of adaptive expertise to better understand teachers' classroom practices as they attempt to navigate myriad variables in the implementation of biology units that include working with computer simulations, and learning about and teaching through complex systems ideas. Sample: Research participants were three high school biology teachers, two females and one male, ranging in teaching experience from six to 16 years. Their teaching contexts also ranged in student achievement from 14-47% advanced science proficiency. Design and methods: We used a holistic multiple case study methodology and collected data during the 2011-2012 school year. Data sources include classroom observations, teacher and student surveys, and interviews. Data analyses and trustworthiness measures were conducted through qualitative mining of data sources and triangulation of findings. Results: We illustrate the characteristics of adaptive expertise of more or less successful teaching and learning when implementing complex systems curricula. We also demonstrate differences between case study teachers in terms of particular variables associated with adaptive expertise. Conclusions: This research contributes to scholarship on practices and professional development needed to better support teachers to teach through a complex systems pedagogical and curricular approach.

  14. GPU-based implementation of an accelerated SR-NLUT based on N-point one-dimensional sub-principal fringe patterns in computer-generated holograms

    Directory of Open Access Journals (Sweden)

    Hee-Min Choi

    2015-06-01

    Full Text Available An accelerated spatial redundancy-based novel-look-up-table (A-SR-NLUT) method based on a new concept of the N-point one-dimensional sub-principal fringe pattern (N-point 1-D sub-PFP) is implemented on a graphics processing unit (GPU) for fast calculation of computer-generated holograms (CGHs) of three-dimensional (3-D) objects. Since the proposed method can generate the N-point two-dimensional (2-D) PFPs for CGH calculation from the pre-stored N-point 1-D PFPs, the loading time of the N-point PFPs on the GPU can be dramatically reduced, which results in a great increase of the computational speed of the proposed method. Experimental results confirm that the average calculation time for one object point has been reduced by 49.6% and 55.4% compared to those of the conventional 2-D SR-NLUT methods for each case of the 2-point and 3-point SR maps, respectively.
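
    The reason 1-D patterns can stand in for 2-D ones is that the Fresnel fringe of an object point is separable in x and y, so a 2-D principal fringe pattern can be rebuilt on the device as an outer product of a pre-stored 1-D pattern with itself. The NumPy sketch below verifies that identity; the wavelength, pixel pitch, depth and hologram size are assumptions, and this is not the paper's GPU kernel.

```python
import numpy as np

# Separability of the Fresnel fringe: exp(i*pi*(x^2 + y^2)/(lambda*z)) equals the
# outer product of the 1-D factors, cutting look-up-table storage per depth plane
# from O(N^2) to O(N). Parameter values below are illustrative assumptions.
wavelength = 532e-9          # m
pitch      = 8e-6            # hologram pixel pitch, m
z          = 0.2             # object-point depth, m
N          = 1024            # hologram width/height in pixels

x = (np.arange(N) - N // 2) * pitch
pfp_1d = np.exp(1j * np.pi * x ** 2 / (wavelength * z))   # 1-D PFP for depth z

# 2-D PFP reconstructed from the 1-D pattern (outer product)
pfp_2d_from_1d = np.outer(pfp_1d, pfp_1d)

# direct 2-D computation for comparison
X, Y = np.meshgrid(x, x, indexing="ij")
pfp_2d_direct = np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wavelength * z))

print(np.allclose(pfp_2d_from_1d, pfp_2d_direct))   # True
```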

  15. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) its basic concept of natural computing has neither been defined theoretically nor implemented practically; (2) it cannot en... cybernetics and Maturana and Varela’s theory of autopoiesis, which are both erroneously taken to support info-computationalism.

  16. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy – Part 2: Computational implementation and first results

    Directory of Open Access Journals (Sweden)

    L. Peruzza

    2017-11-01

    Full Text Available This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude-scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10 % probability of exceedance in 5 and 30 years, Poisson and time dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for Etna's densely inhabited eastern flank, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources which control the hazard at long return periods. However, by focusing on the impact of M < 6 local volcano-tectonic earthquakes, which dominate the hazard at the short- to mid-term exposure times considered
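
    The exposure-time statement above rests on standard Poisson bookkeeping, sketched below as a generic PSHA relation (not a result of the cited study): an exceedance probability P over an exposure time t corresponds to an annual rate lambda through P = 1 - exp(-lambda * t).

```python
import math

# Convert "10% probability of exceedance in t years" into an annual exceedance
# rate and a return period, assuming Poisson occurrence (generic PSHA relation).
P = 0.10
for t_exposure in (5.0, 30.0):                   # years
    lam = -math.log(1.0 - P) / t_exposure        # annual rate of exceedance
    print(f"t = {t_exposure:>4} yr -> lambda = {lam:.5f}/yr, "
          f"return period = {1.0 / lam:.0f} yr")
```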

  17. Implementation on GPU-based acceleration of the m-line reconstruction algorithm for circle-plus-line trajectory computed tomography

    Science.gov (United States)

    Li, Zengguang; Xi, Xiaoqi; Han, Yu; Yan, Bin; Li, Lei

    2016-10-01

    The circle-plus-line trajectory satisfies the exact reconstruction data sufficiency condition, and it can be applied in a C-arm X-ray computed tomography (CT) system to increase reconstruction image quality at large cone angles. The m-line reconstruction algorithm is adopted for this trajectory. The selection of the direction of the m-lines is quite flexible, and the m-line algorithm needs less data for accurate reconstruction compared with FDK-type algorithms. However, the computational complexity of the algorithm is too large for efficient serial processing, so the reconstruction speed has become an important issue which limits its practical applications. Therefore, accelerating the algorithm is of great value. Compared with other hardware accelerations, the graphics processing unit (GPU) has become the mainstream in CT image reconstruction. GPU acceleration has achieved a good acceleration effect in FDK-type algorithms, but the implementation of the m-line algorithm's acceleration for the circle-plus-line trajectory differs from the FDK algorithm, and the parallelism of the circle-plus-line algorithm needs to be analyzed to design the appropriate acceleration strategy. The implementation can be divided into the following steps. First, selecting m-lines to cover the entire object to be rebuilt; second, calculating the differentiated back projection at the points on the m-lines; third, performing Hilbert filtering along the m-line direction; finally, the m-line reconstruction results need to be re-assembled in three dimensions to obtain the Cartesian-coordinate reconstruction results. In this paper, we design reasonable GPU acceleration strategies for each step to improve the reconstruction speed as much as possible. The main contribution is to design an appropriate acceleration strategy for the circle-plus-line trajectory m-line reconstruction algorithm. A Shepp-Logan phantom is used to simulate the experiment on a single K20 GPU.
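
    The third step of the pipeline listed above, Hilbert filtering along each m-line, is shown below in isolation as a plain FFT-based discrete Hilbert transform (a sketch for orientation only, not the paper's GPU kernel; the test signal standing in for the differentiated back-projection samples is arbitrary).

```python
import numpy as np

def hilbert_along_line(samples):
    """Discrete Hilbert transform of 1-D samples via the frequency domain:
    multiply positive frequencies by -i and negative frequencies by +i."""
    spectrum = np.fft.fft(samples)
    freqs = np.fft.fftfreq(samples.size)
    spectrum *= -1j * np.sign(freqs)      # sign(0) = 0 removes the DC term
    return np.real(np.fft.ifft(spectrum))

# sanity check: the Hilbert transform of cos is sin
t = np.linspace(0.0, 1.0, 512, endpoint=False)
line_samples = np.cos(2 * np.pi * 8 * t)           # stand-in for values along one m-line
print(np.allclose(hilbert_along_line(line_samples),
                  np.sin(2 * np.pi * 8 * t), atol=1e-6))
```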

  18. Comparative yield of positive brain Computed Tomography after implementing the NICE or SIGN head injury guidelines in two equivalent urban populations

    Energy Technology Data Exchange (ETDEWEB)

    Summerfield, R., E-mail: ruth.summerfield@uhns.nhs.u [Medical Imaging, University Hospital of North Staffordshire, City General Hospital, Stoke-on-Trent, Staffordshire ST4 6QG (United Kingdom); Macduff, R. [Glasgow Royal Infirmary, 84 Castle Street, Glasgow G4 0SF (United Kingdom); Davis, R. [Medical Imaging, University Hospital of North Staffordshire, City General Hospital, Stoke-on-Trent, Staffordshire ST4 6QG (United Kingdom); Sambrook, M. [Glasgow Royal Infirmary, 84 Castle Street, Glasgow G4 0SF (United Kingdom); Britton, I. [Medical Imaging, University Hospital of North Staffordshire, City General Hospital, Stoke-on-Trent, Staffordshire ST4 6QG (United Kingdom)

    2011-04-15

    Aims: To compare the yield of positive computed tomography (CT) brain examinations after the implementation of the National Institute for Clinical Excellence (NICE) or the Scottish Intercollegiate Guidelines Network (SIGN) guidelines, in comparable urban populations in two teaching hospitals in England and Scotland. Materials and methods: Four hundred consecutive patients presenting at each location following a head injury who underwent a CT examination of the head according to the locally implemented guidelines were compared. Similar matched populations were compared for indication and yield. Yield was measured according to (1) positive CT findings of the sequelae of trauma and (2) intervention required with anaesthetic or intensive care unit (ICU) support, or neurosurgery. Results: The mean ages of patients at the English and Scottish centres were 49.9 and 49.2 years, respectively. Sex distribution was 64.1% male and 66.4% male, respectively. Comparative yield was 23.8 and 26.5% for positive brain scans, 3 and 2.75% for anaesthetic support, and 3.75 and 2.5% for neurosurgical intervention. Glasgow Coma Score (GCS) <13 (NICE) and GCS ≤12 and radiological or clinical evidence of skull fracture (SIGN) demonstrated the greatest statistical association with a positive CT examination. Conclusion: In a teaching hospital setting, there is no significant difference in the yield between the NICE and SIGN guidelines. Both meet the SIGN standard of >10% yield of positive scans. The choice of guideline to follow should be at the discretion of the local institution. The indications GCS <13 and clinical or radiological evidence of a skull fracture are highly predictive of intracranial pathology, and their presence should be an absolute indicator for fast-tracking the management of the patient.

  19. Staff experiences within the implementation of computer-based nursing records in residential aged care facilities: a systematic review and synthesis of qualitative research.

    Science.gov (United States)

    Meißner, Anne; Schnepp, Wilfried

    2014-06-20

    Since the introduction of electronic nursing documentation systems, their implementation has increased rapidly in Germany in recent years. The objectives of such systems are to save time, to improve information handling and to improve quality. In integrating IT into daily working processes, the employee is the pivotal element. Therefore it is important to understand nurses' experience with IT implementation. At present the literature shows a lack of understanding of staff experiences within the implementation process. A systematic review and meta-ethnographic synthesis of primary studies using qualitative methods was conducted in PubMed, CINAHL, and Cochrane. It adheres to the principles of the PRISMA statement. The studies were original, peer-reviewed articles from 2000 to 2013, focusing on computer-based nursing documentation in Residential Aged Care Facilities. The use of IT requires a different form of information processing. Some experience this new form of information processing as a benefit while others do not. The latter find it more difficult to enter data, and this results in poor clinical documentation. Improvement in the quality of residents' records leads to an overall improvement in the quality of care. However, if the quality of those records is poor, some residents do not receive the necessary care. Furthermore, the length of time necessary to complete the documentation is a prominent theme within that process. Those who are more efficient with the electronic documentation demonstrate improved time management. For those who are less efficient with electronic documentation, the information processing is perceived as time consuming. Normally, it is possible to experience benefits when using IT, but this depends on either promoting or hindering factors, e.g. ease of use and ability to use it, equipment availability and technical functionality, as well as attitude. In summary, the findings showed that members of staff experience IT as a benefit when

  20. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers.

    Science.gov (United States)

    Collignon, Barbara; Schulz, Roland; Smith, Jeremy C; Baudry, Jerome

    2011-04-30

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
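
    The work-distribution strategy described above can be pictured with a few lines of mpi4py. This is a conceptual sketch only, not code from Autodock4.lga.MPI; the ligand names, counts and the evaluation-budget rule are invented for illustration. It can be launched with, for example, "mpiexec -n 4 python dock_sketch.py" (it also runs serially under plain python).

```python
from mpi4py import MPI

# Task-parallel sketch: ligands are pre-ordered from most to least flexible and
# distributed cyclically over MPI ranks; each rank budgets its energy evaluations
# as a function of the number of rotatable bonds of the ligand it is docking.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# (ligand id, number of rotatable bonds), sorted by decreasing flexibility
ligands = [("lig%03d" % i, n_rot) for i, n_rot in
           enumerate(sorted([12, 9, 7, 7, 5, 4, 3, 2, 1, 0], reverse=True))]

def energy_evaluations(n_rotatable):
    """Toy budget rule: more rotatable bonds means a larger search effort."""
    return 250_000 + 250_000 * n_rotatable

# cyclic assignment keeps the expensive (flexible) ligands spread across ranks
for name, n_rot in ligands[rank::size]:
    budget = energy_evaluations(n_rot)
    # dock(name, budget) would run the Lamarckian GA against the shared grids here
    print(f"rank {rank}: docking {name} with {budget:,} energy evaluations")
```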

  1. The Effect of Mobile Tablet Computer (iPad) Implementation on Graduate Medical Education at a Multi-specialty Residency Institution.

    Science.gov (United States)

    Dupaix, John; Chen, John J; Chun, Maria Bj; Belcher, Gary F; Cheng, Yongjun; Atkinson, Robert

    2016-07-01

    Use of mobile tablet computers (MTCs) in residency education has grown. The objective of this study was to investigate the impact of MTCs on multiple specialties' residency training and identify MTC adoption impediments. To our knowledge, this current project is one of the first multispecialty studies of MTC implementation. A prospective cohort study was formulated. In June 2012 iPad2s were issued to all residents after completion of privacy/confidentiality agreements and a mandatory hard-copy pre-survey regarding four domains of usage (general, self-directed learning, clinical duties, and patient education). Residents who received iPads previously were excluded. A voluntary post-survey was conducted online in June 2013. One hundred eighty-five subjects completed the pre-survey and 107 completed the post-survey (58% overall response rate). Eighty-six pre- and post-surveys were linked (response rate of 46%). There was a significant increase in residents accessing patient information/records and charting electronically (26.9% to 79.1%; P ...) ... education, clinical practice, and patient education. The survey tool may be useful in collecting data on MTC use by other graduate medical education programs.

  2. Perceptions of clinicians and staff about the use of digital technology in primary care: qualitative interviews prior to implementation of a computer-facilitated 5As intervention.

    Science.gov (United States)

    Nápoles, Anna María; Appelle, Nicole; Kalkhoran, Sara; Vijayaraghavan, Maya; Alvarado, Nicholas; Satterfield, Jason

    2016-04-19

    Digital health interventions using hybrid delivery models may offer efficient alternatives to traditional behavioral counseling by addressing obstacles of time, resources, and knowledge. Using a computer-facilitated 5As (ask, advise, assess, assist, arrange) model as an example (CF5As), we aimed to identify factors from the perspectives of primary care providers and clinical staff that were likely to influence introduction of digital technology and a CF5As smoking cessation counseling intervention. In the CF5As model, patients self-administer a tablet intervention that provides 5As smoking cessation counseling, produces patient and provider handouts recommending next steps, and is followed by a patient-provider encounter to reinforce key cessation messages, provide assistance, and arrange follow-up. Semi-structured in-person interviews of administrative and clinical staff and primary care providers from three primary care clinics. Thirty-five interviews were completed (12 administrative staff, ten clinical staff, and 13 primary care providers). Twelve were from an academic internal medicine practice, 12 from a public hospital academic general medicine clinic, and 11 from a public hospital HIV clinic. Most were women (91 %); mean age (SD) was 42 years (11.1). Perceived usefulness of the CF5As focused on its relevance for various health behavior counseling purposes, potential gains in counseling efficiency, confidentiality of data collection, occupying patients while waiting, and serving as a cue to action. Perceived ease of use was viewed to depend on the ability to accommodate: clinic workflow; heavy patient volumes; and patient characteristics, e.g., low literacy. Social norms potentially affecting implementation included beliefs in the promise/burden of technology, priority of smoking cessation counseling relative to other patient needs, and perception of CF5As as just "one more thing to do" in an overburdened system. The most frequently cited facilitating

  3. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computer and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  4. Computational Toxicology as Implemented by the U.S. EPA: Providing High Throughput Decision Support Tools for Screening and Assessing Chemical Exposure, Hazard and Risk

    Science.gov (United States)

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environ...

  5. Digital Bridge or Digital Divide? A Case Study Review of the Implementation of the "Computers for Pupils Programme" in a Birmingham Secondary School

    Science.gov (United States)

    Morris, Jonathan Padraig

    2011-01-01

    Attempts to bridge the Digital Divide have seen vast investment in Information Communication Technology in schools. In the United Kingdom, the Computers for Pupils initiative has invested 60 million British Pounds of funds to help some of the most disadvantaged secondary school pupils by putting a computer in their home. This paper charts and…

  6. Diabetes Patients' Experiences With the Implementation of Insulin Therapy and Their Perceptions of Computer-Assisted Self-Management Systems for Insulin Therapy

    NARCIS (Netherlands)

    Simon, Airin Cr; Gude, Wouter T.; Holleman, Frits; Hoekstra, Joost Bl; Peek, Niels

    2014-01-01

    Background: Computer-assisted decision support is an emerging modality to assist patients with type 2 diabetes mellitus (T2DM) in insulin self-titration (ie, self-adjusting insulin dose according to daily blood glucose levels). Computer-assisted insulin self-titration systems mainly focus on helping

  7. Comparison of Computer Based Instruction to Behavior Skills Training for Teaching Staff Implementation of Discrete-Trial Instruction with an Adult with Autism

    Science.gov (United States)

    Nosik, Melissa R.; Williams, W. Larry; Garrido, Natalia; Lee, Sarah

    2013-01-01

    In the current study, behavior skills training (BST) is compared to a computer-based training package for teaching staff to implement discrete-trial instruction with an adult with autism. The computer-based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following…

  8. Computer-Based Reading Programs: A Preliminary Investigation of Two Parent Implemented Programs with Students At-Risk for Reading Failure

    Science.gov (United States)

    Pindiprolu, Sekhar S.; Forbush, David

    2009-01-01

    In 2000, National Reading Panelists (NRP) reported that computer delivered reading instruction has potential for promoting the reading skills of students at-risk for reading failure. However, panelists also noted a scarcity of data present in the literature on the effects of computer-based reading instruction. This preliminary investigation…

  9. Implementing Specification Freedoms.

    Science.gov (United States)

    1983-04-01

    Portions of this report appeared in a paper of the same name published in Science of Computer Programming: Feather, M. S., and P. E. London, "Implementing specification freedoms," Science of Computer Programming 2.

  10. Design, implementation, and testing of a software interface between the AN/SPS-65(V)1 radar and the SRC-6E reconfigurable computer

    OpenAIRE

    Guthrie, Thomas G.

    2005-01-01

    Approved for public release; distribution is unlimited. This thesis outlines the development, programming, and testing of a logical interface between a radar system, the AN/SPS-65(V)1, and a general-purpose reconfigurable computing platform, the SRC Computer, Inc. SRC-6E. To confirm the proper operation of the interface and associated subcomponents, software was developed to perform basic radar signal processing. The interface, as proven by the signal processing results, accurately ...

  11. Cognitive Computing for Security.

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rothganger, Fredrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aimone, James Bradley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marinella, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Evans, Brian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Warrender, Christina E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mickel, Patrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

  12. An Implementation of Real-Time Phased Array Radar Fundamental Functions on a DSP-Focused, High-Performance, Embedded Computing Platform

    Directory of Open Access Journals (Sweden)

    Xining Yu

    2016-09-01

    Full Text Available This paper investigates the feasibility of a backend design for a real-time, multiple-channel processing digital phased array system, particularly for high-performance embedded computing platforms constructed of general-purpose digital signal processors. First, we obtained the lab-scale backend performance benchmark from simulating beamforming, pulse compression, and Doppler filtering based on a Micro Telecom Computing Architecture (MTCA) chassis using the Serial RapidIO protocol in backplane communication. Next, a field-scale demonstrator of a multifunctional phased array radar is emulated by using a similar configuration. Interestingly, the performance of a barebones design is compared to that of emerging tools that systematically take advantage of parallelism and multicore capabilities, including the Open Computing Language.
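
    As a hedged illustration of one of the benchmarked kernels, the sketch below performs pulse compression by matched filtering in NumPy; the chirp parameters, array sizes, and noise level are illustrative assumptions, not values from the record above.

      # Minimal sketch of pulse compression (matched filtering) for one channel.
      # All parameters (sample rate, pulse width, bandwidth, delay) are assumptions.
      import numpy as np

      fs = 10e6                        # sample rate [Hz]
      pulse_width = 20e-6              # transmitted pulse length [s]
      bandwidth = 5e6                  # chirp bandwidth [Hz]

      t = np.arange(0, pulse_width, 1 / fs)
      chirp = np.exp(1j * np.pi * (bandwidth / pulse_width) * t**2)   # linear-FM pulse

      # Simulated received signal: one attenuated, delayed echo buried in noise.
      rx = 0.05 * (np.random.randn(4096) + 1j * np.random.randn(4096))
      delay = 1000
      rx[delay:delay + chirp.size] += 0.1 * chirp

      # Matched filter: correlate with the time-reversed, conjugated pulse.
      compressed = np.convolve(rx, np.conj(chirp[::-1]), mode="same")
      print("compressed peak at sample", int(np.argmax(np.abs(compressed))))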

  13. Quantum Computer Emulator

    OpenAIRE

    De Raedt, H. A.; Hams, A. H.; Michielsen, K. F. L.; De Raedt, K.

    2000-01-01

    We describe a quantum computer emulator for a generic, general purpose quantum computer. This emulator consists of a simulator of the physical realization of the quantum computer and a graphical user interface to program and control the simulator. We illustrate the use of the quantum computer emulator through various implementations of the Deutsch-Jozsa and Grover's database search algorithm.
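
    As a hedged illustration of the statevector simulation such an emulator performs, the following minimal NumPy sketch applies a single-qubit gate to one qubit of a small register; the gate, register size, and function names are assumptions for this example, not the emulator's actual interface.

      # Minimal statevector sketch: apply a single-qubit gate to one qubit of an
      # n-qubit register. This is an illustrative toy, not the emulator itself.
      import numpy as np

      def apply_gate(state, gate, target, n_qubits):
          # View the 2**n amplitudes as n binary axes, contract the gate with the
          # target axis, then restore the original axis order.
          psi = state.reshape([2] * n_qubits)
          psi = np.tensordot(gate, psi, axes=([1], [target]))
          psi = np.moveaxis(psi, 0, target)
          return psi.reshape(-1)

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

      n = 3
      state = np.zeros(2**n, dtype=complex)
      state[0] = 1.0                       # start in |000>
      state = apply_gate(state, H, 0, n)   # Hadamard on qubit 0
      print(np.round(state, 3))            # amplitudes 0.707 on |000> and |100>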

  14. Effectiveness of ESL Students' Performance by Computational Assessment and Role of Reading Strategies in Courseware-Implemented Business Translation Tasks

    Science.gov (United States)

    Tsai, Shu-Chiao

    2017-01-01

    This study reports on investigating students' English translation performance and their use of reading strategies in an elective English writing course offered to senior students of English as a Foreign Language for 100 minutes per week for 12 weeks. A courseware-implemented instruction combined with a task-based learning approach was adopted.…

  15. Implementation is crucial but must be neurobiologically grounded. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    Science.gov (United States)

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L.

    2014-09-01

    From the perspective of language, Fitch's [1] claim that theories of cognitive computation should not be separated from those of implementation surely deserves applauding. Recent developments in the Cognitive Neuroscience of Language, leading to the new field of the Neurobiology of Language [2-4], emphasise precisely this point: rather than attempting to simply map cognitive theories of language onto the brain, we should aspire to understand how the brain implements language. This perspective resonates with many of the points raised by Fitch in his review, such as the discussion of unhelpful dichotomies (e.g., Nature versus Nurture). Cognitive dichotomies and debates have repeatedly turned out to be of limited usefulness when it comes to understanding language in the brain. The famous modularity-versus-interactivity and dual route-versus-connectionist debates are cases in point: in spite of hundreds of experiments using neuroimaging (or other techniques), or the construction of myriad computer models, little progress has been made in their resolution. This suggests that dichotomies proposed at a purely cognitive (or computational) level without consideration of biological grounding appear to be "asking the wrong questions" about the neurobiology of language. In accordance with these developments, several recent proposals explicitly consider neurobiological constraints while seeking to explain language processing at a cognitive level (e.g. [5-7]).

  16. Students Perception towards the Implementation of Computer Graphics Technology in Class via Unified Theory of Acceptance and Use of Technology (UTAUT) Model

    Science.gov (United States)

    Binti Shamsuddin, Norsila

    Technology advancement and development in a higher learning institution is a chance for students to be motivated to learn the information technology areas in depth. Students should seize the opportunity to blend their skills with these technologies in preparation for graduation. The curriculum itself can raise students' interest and persuade them to become directly involved in the evolution of the technology. The aim of this study is to see how deep the students' involvement is, as well as their acceptance of the technology used in Computer Graphics and Image Processing subjects. The study targets Bachelor students in the Faculty of Industrial Information Technology (FIIT), Universiti Industri Selangor (UNISEL): Bac. in Multimedia Industry, BSc. Computer Science, and BSc. Computer Science (Software Engineering). This study utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) to further validate the model and enhance our understanding of the adoption of Computer Graphics and Image Processing technologies. Four (4) of the eight (8) independent factors in UTAUT will be studied against the dependent factor.

  17. Building Capacity Through Hands-on Computational Internships to Assure Reproducible Results and Implementation of Digital Documentation in the ICERT REU Program

    Science.gov (United States)

    Gomez, R.; Gentle, J.

    2015-12-01

    Modern data pipelines and computational processes require that meticulous methodologies be applied in order to ensure that the source data, algorithms, and results are properly curated, managed and retained while remaining discoverable, accessible, and reproducible. Given the complexity of understanding the scientific problem domain being researched, combined with the overhead of learning to use advanced computing technologies, it becomes paramount that the next generation of scientists and researchers learn to embrace best practices. The Integrative Computational Education and Research Traineeship (ICERT) is a National Science Foundation (NSF) Research Experience for Undergraduates (REU) Site at the Texas Advanced Computing Center (TACC). During Summer 2015, two ICERT interns joined the 3DDY project. 3DDY converts geospatial datasets into file types that can take advantage of new formats, such as natural user interfaces, interactive visualization, and 3D printing. Mentored by TACC researchers for ten weeks, students with no previous background in computational science learned to use scripts to build the first prototype of the 3DDY application, and leveraged Wrangler, the newest high performance computing (HPC) resource at TACC. Test datasets for quadrangles in central Texas were used to assemble the 3DDY workflow and code. Test files were successfully converted into a stereo lithographic (STL) format, which is amenable for use with 3D printers. Test files and the scripts were documented and shared using the Figshare site while metadata was documented for the 3DDY application using OntoSoft. These efforts validated a straightforward set of workflows to transform geospatial data and established the first prototype version of 3DDY. Adding the data and software management procedures helped students realize a broader set of tangible results (e.g. Figshare entries), better document their progress and the final state of their work for the research group and community
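
    The 3DDY code itself is not reproduced in the record above; as a hedged sketch of the central step it describes, the following Python fragment triangulates a gridded elevation array and writes an ASCII STL file. The grid, cell size, and file name are illustrative assumptions.

      # Sketch of the core 3DDY-style step: triangulate a gridded elevation array
      # and write an ASCII STL file. The grid here is synthetic; a real workflow
      # would read a geospatial raster instead.
      import numpy as np

      z = np.random.rand(20, 20)            # placeholder elevation grid

      def write_stl(z, path, cell=1.0):
          rows, cols = z.shape
          with open(path, "w") as f:
              f.write("solid terrain\n")
              for i in range(rows - 1):
                  for j in range(cols - 1):
                      # Corner vertices of one grid cell.
                      a = (j * cell, i * cell, z[i, j])
                      b = ((j + 1) * cell, i * cell, z[i, j + 1])
                      c = (j * cell, (i + 1) * cell, z[i + 1, j])
                      d = ((j + 1) * cell, (i + 1) * cell, z[i + 1, j + 1])
                      for tri in ((a, b, c), (b, d, c)):   # two triangles per cell
                          f.write("  facet normal 0 0 0\n    outer loop\n")
                          for v in tri:
                              f.write("      vertex %f %f %f\n" % v)
                          f.write("    endloop\n  endfacet\n")
              f.write("endsolid terrain\n")

      write_stl(z, "terrain.stl")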

  18. Analytic derivative couplings and first-principles exciton/phonon coupling constants for an ab initio Frenkel-Davydov exciton model: Theory, implementation, and application to compute triplet exciton mobility parameters for crystalline tetracene

    Science.gov (United States)

    Morrison, Adrian F.; Herbert, John M.

    2017-06-01

    Recently, we introduced an ab initio version of the Frenkel-Davydov exciton model for computing excited-state properties of molecular crystals and aggregates. Within this model, supersystem excited states are approximated as linear combinations of excitations localized on molecular sites, and the electronic Hamiltonian is constructed and diagonalized in a direct-product basis of non-orthogonal configuration state functions computed for isolated fragments. Here, we derive and implement analytic derivative couplings for this model, including nuclear derivatives of the natural transition orbital and symmetric orthogonalization transformations that are part of the approximation. Nuclear derivatives of the exciton Hamiltonian's matrix elements, required in order to compute the nonadiabatic couplings, are equivalent to the "Holstein" and "Peierls" exciton/phonon couplings that are widely discussed in the context of model Hamiltonians for energy and charge transport in organic photovoltaics. As an example, we compute the couplings that modulate triplet exciton transport in crystalline tetracene, which is relevant in the context of carrier diffusion following singlet exciton fission.

  19. Analytic derivative couplings and first-principles exciton/phonon coupling constants for an ab initio Frenkel-Davydov exciton model: Theory, implementation, and application to compute triplet exciton mobility parameters for crystalline tetracene.

    Science.gov (United States)

    Morrison, Adrian F; Herbert, John M

    2017-06-14

    Recently, we introduced an ab initio version of the Frenkel-Davydov exciton model for computing excited-state properties of molecular crystals and aggregates. Within this model, supersystem excited states are approximated as linear combinations of excitations localized on molecular sites, and the electronic Hamiltonian is constructed and diagonalized in a direct-product basis of non-orthogonal configuration state functions computed for isolated fragments. Here, we derive and implement analytic derivative couplings for this model, including nuclear derivatives of the natural transition orbital and symmetric orthogonalization transformations that are part of the approximation. Nuclear derivatives of the exciton Hamiltonian's matrix elements, required in order to compute the nonadiabatic couplings, are equivalent to the "Holstein" and "Peierls" exciton/phonon couplings that are widely discussed in the context of model Hamiltonians for energy and charge transport in organic photovoltaics. As an example, we compute the couplings that modulate triplet exciton transport in crystalline tetracene, which is relevant in the context of carrier diffusion following singlet exciton fission.

  20. CIS6/413: Design of a Computer-Based Patient Record System (ARISTOPHANES) for Medical Departments: Implementation for Surgery Wards

    OpenAIRE

    Lazakidou, A; Braun, J.; Tolxdorff, T

    1999-01-01

    Introduction: Today, the demand for computer-based patient records to improve the quality of patient care and to reduce costs in health services is generally recognized. The new electronic patient record system (ARISTOPHANES) is based on a self-developed relational data model with a star-shaped topology. A hierarchical structure has been chosen for the user interface. ARISTOPHANES has been developed for use in clinical workstations taking into account varied requirements, user acceptance, and...

  1. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P. T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, T. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yin, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2007-12-01

    The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  2. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  3. Sepsis reconsidered: Identifying novel metrics for behavioral landscape characterization with a high-performance computing implementation of an agent-based model.

    Science.gov (United States)

    Cockrell, Chase; An, Gary

    2017-10-07

    Sepsis affects nearly 1 million people in the United States per year, has a mortality rate of 28-50% and requires more than $20 billion a year in hospital costs. Over a quarter century of research has not yielded a single reliable diagnostic test or a directed therapeutic agent for sepsis. Central to this insufficiency is the fact that sepsis remains a clinical/physiological diagnosis representing a multitude of molecularly heterogeneous pathological trajectories. Advances in computational capabilities offered by High Performance Computing (HPC) platforms call for an evolution in the investigation of sepsis to attempt to define the boundaries of traditional research (bench, clinical and computational) through the use of computational proxy models. We present a novel investigatory and analytical approach, derived from how HPC resources and simulation are used in the physical sciences, to identify the epistemic boundary conditions of the study of clinical sepsis via the use of a proxy agent-based model of systemic inflammation. Current predictive models for sepsis use correlative methods that are limited by patient heterogeneity and data sparseness. We address this issue by using an HPC version of a system-level validated agent-based model of sepsis, the Innate Immune Response Agent-Based Model (IIRABM), as a proxy system in order to identify boundary conditions for the possible behavioral space for sepsis. We then apply advanced analysis derived from the study of Random Dynamical Systems (RDS) to identify novel means for characterizing system behavior and providing insight into the tractability of traditional investigatory methods. The behavior space of the IIRABM was examined by simulating over 70 million sepsis patients for up to 90 days in a sweep across the following parameters: cardio-respiratory-metabolic resilience; microbial invasiveness; microbial toxigenesis; and degree of nosocomial exposure. In addition to using established methods for describing parameter space, we
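
    The IIRABM is not reproduced here; as a hedged sketch of the sweep pattern the record describes, the following Python fragment maps a placeholder model over a grid of the four named parameters with a multiprocessing pool. The parameter values, the toy outcome function, and the threshold are assumptions for illustration only.

      # Sketch of the parameter-sweep pattern described above, applied to a toy
      # stand-in rather than the IIRABM; parameter names mirror the abstract, and
      # the outcome function is a placeholder.
      import itertools
      from multiprocessing import Pool

      def run_model(params):
          resilience, invasiveness, toxigenesis, exposure = params
          # A real run would simulate one in-silico patient for up to 90 days.
          outcome = resilience - 0.5 * invasiveness - 0.3 * toxigenesis - 0.2 * exposure
          return params, outcome

      if __name__ == "__main__":
          grid = list(itertools.product(
              [0.2, 0.5, 0.8],   # cardio-respiratory-metabolic resilience
              [0.1, 0.5, 0.9],   # microbial invasiveness
              [0.1, 0.5, 0.9],   # microbial toxigenesis
              [0.0, 0.5, 1.0],   # degree of nosocomial exposure
          ))
          with Pool() as pool:
              results = pool.map(run_model, grid)
          survivors = sum(1 for _, outcome in results if outcome > 0)
          print(f"{survivors} of {len(results)} parameter points end above threshold")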

  4. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  5. EDMS implementation challenge.

    Science.gov (United States)

    De La Torre, Marta

    2002-08-01

    The challenges faced by facilities wishing to implement an electronic medical record system are complex and overwhelming. Issues such as customer acceptance, basic computer skills, and a thorough understanding of how the new system will impact work processes must be considered and acted upon. Acceptance and active support are necessary from Senior Administration and key departments to enable this project to achieve measurable success. This article details one hospital's "journey" through design and successful implementation of an electronic medical record system.

  6. A gauge invariant multiscale approach to magnetic spectroscopies in condensed phase: general three-layer model, computational implementation and pilot applications.

    Science.gov (United States)

    Lipparini, Filippo; Cappelli, Chiara; Barone, Vincenzo

    2013-06-21

    Analytical equations to calculate second order electric and magnetic properties of a molecular system embedded into a polarizable environment are presented. The treatment is limited to molecules described at the self-consistent field level of theory, including Hartree-Fock theory as well as Kohn-Sham density functional theory, and is extended to the Gauge-Including Atomic Orbital method. The polarizable embedding is described by means of our already implemented polarizable quantum mechanical/molecular mechanical (QM/MM) methodology, where the polarization in the MM layer is handled by means of the fluctuating charge (FQ) model. A further layer of description, i.e., the polarizable continuum model, can also be included. The FQ(/polarizable continuum model) contributions to the properties are derived, with reference to the calculation of the magnetic susceptibility, the nuclear magnetic resonance shielding tensor, electron spin resonance g-tensors, and hyperfine couplings.

  7. A gauge invariant multiscale approach to magnetic spectroscopies in condensed phase: General three-layer model, computational implementation and pilot applications

    Science.gov (United States)

    Lipparini, Filippo; Cappelli, Chiara; Barone, Vincenzo

    2013-06-01

    Analytical equations to calculate second order electric and magnetic properties of a molecular system embedded into a polarizable environment are presented. The treatment is limited to molecules described at the self-consistent field level of theory, including Hartree-Fock theory as well as Kohn-Sham density functional theory, and is extended to the Gauge-Including Atomic Orbital method. The polarizable embedding is described by means of our already implemented polarizable quantum mechanical/molecular mechanical (QM/MM) methodology, where the polarization in the MM layer is handled by means of the fluctuating charge (FQ) model. A further layer of description, i.e., the polarizable continuum model, can also be included. The FQ(/polarizable continuum model) contributions to the properties are derived, with reference to the calculation of the magnetic susceptibility, the nuclear magnetic resonance shielding tensor, electron spin resonance g-tensors, and hyperfine couplings.

  8. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    Science.gov (United States)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
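
    The renderer described above is not published as code in this record; as a hedged, minimal stand-in, the sketch below computes a maximum-intensity projection of a synthetic volume with NumPy, one of the simplest volume-rendering operations.

      # Minimal sketch of a maximum-intensity projection (MIP), one of the simplest
      # volume-rendering operations; the volume here is synthetic, not RT3DE data.
      import numpy as np

      volume = np.random.rand(64, 64, 64).astype(np.float32)   # placeholder volume

      def mip(vol, axis=0):
          # Project along one axis by keeping the brightest voxel on each ray.
          return vol.max(axis=axis)

      image = mip(volume, axis=2)
      print(image.shape)   # (64, 64) projection, ready for display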

  9. Sandia National Laboratories Advanced Simulation and Computing (ASC) : appraisal method for the implementation of the ASC software quality engineering practices: Version 1.0.

    Energy Technology Data Exchange (ETDEWEB)

    Turgeon, Jennifer; Minana, Molly A.

    2008-02-01

    This document provides a guide to the process of conducting software appraisals under the Sandia National Laboratories (SNL) ASC Program. The goal of this document is to describe a common methodology for planning, conducting, and reporting results of software appraisals, thereby enabling: development of an objective baseline on implementation of the software quality engineering (SQE) practices identified in the ASC Software Quality Plan across the ASC Program; feedback from project teams on SQE opportunities for improvement; identification of strengths and opportunities for improvement for individual project teams; and guidance to the ASC Program on the focus of future SQE activities. Document contents include process descriptions, templates to promote consistent conduct of appraisals, and an explanation of the relationship of this procedure to the SNL ASC software program.

  10. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  11. Computational artifacts

    DEFF Research Database (Denmark)

    Schmidt, Kjeld; Bansler, Jørgen P.

    2016-01-01

    The key concern of CSCW research is that of understanding computing technologies in the social context of their use, that is, as integral features of our practices and our lives, and to think of their design and implementation under that perspective. However, the question of the nature of that which is actually integrated in our practices is often discussed in confusing ways, if at all. The article aims to try to clarify the issue and in doing so revisits and reconsiders the notion of ‘computational artifact’.

  12. Improving Radiation Awareness and Feeling of Personal Security of Non-Radiological Medical Staff by Implementing a Traffic Light System in Computed Tomography.

    Science.gov (United States)

    Heilmaier, C; Mayor, A; Zuber, N; Fodor, P; Weishaupt, D

    2016-03-01

    Non-radiological medical professionals often need to remain in the scanning room during computed tomography (CT) examinations to supervise patients in critical condition. Independent of protective devices, their position significantly influences the radiation dose they receive. The purpose of this study was to assess if a traffic light system indicating areas of different radiation exposure improves non-radiological medical staff's radiation awareness and feeling of personal security. Phantom measurements were performed to define areas of different dose rates and colored stickers were applied on the floor according to a traffic light system: green = lowest, orange = intermediate, and red = highest possible radiation exposure. Non-radiological medical professionals with different years of working experience evaluated the system using a structured questionnaire. Kruskal-Wallis and Spearman's correlation test were applied for statistical analysis. Fifty-six subjects (30 physicians, 26 nursing staff) took part in this prospective study. Overall rating of the system was very good, and almost all professionals tried to stand in the green stickers during the scan. The system significantly increased radiation awareness and feeling of personal protection, particularly in staff with ≤ 5 years of working experience (p < 0.05). Knowledge of radiation protection was poor in all groups, especially among entry-level employees (p < 0.05). A traffic light system indicating areas of different radiation exposure is much appreciated. It increases radiation awareness, improves the sense of personal radiation protection, and may support endeavors to lower occupational radiation exposure, although the best radiation protection always is to remain outside the CT room during the scan. • A traffic light system indicating areas with different radiation exposure within the computed tomography scanner room is much appreciated by non-radiological medical staff. • The traffic light system increases non-radiological medical staff's radiation awareness and feeling of

  13. Low-Power and Optimized VLSI Implementation of Compact Recursive Discrete Fourier Transform (RDFT) Processor for the Computations of DFT and Inverse Modified Discrete Cosine Transform (IMDCT) in a Digital Radio Mondiale (DRM) and DRM+ Receiver

    Directory of Open Access Journals (Sweden)

    Sheau-Fang Lei

    2013-05-01

    Full Text Available This paper presents a compact structure of recursive discrete Fourier transform (RDFT) with prime factor (PF) and common factor (CF) algorithms to calculate variable-length DFT coefficients. Low-power optimizations in VLSI implementation are applied to the proposed RDFT design. In the algorithm, for 256-point DFT computation, the results show that the proposed method greatly reduces the number of multiplications/additions/computational cycles by 97.40/94.31/46.50% compared to a recent approach. In chip realization, the core size and chip size are, respectively, 0.84 × 0.84 and 1.38 × 1.38 mm². The power consumption for the 288- and 256-point DFT computations is, respectively, 10.2 (or 0.1051) and 11.5 (or 0.1176) mW at 25 (or 0.273) MHz, as simulated by NanoSim. It would be more efficient and more suitable than previous works for DRM and DRM+ applications.
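
    The paper's PF/CF hardware structure is not reproduced in this record; as a hedged software illustration of evaluating a single DFT coefficient with a recursive filter, the sketch below uses the closely related Goertzel recursion and checks it against NumPy's FFT. The signal length and bin index are assumptions.

      # Software illustration of evaluating one DFT bin with a recursive filter
      # (Goertzel recursion); this is not the paper's PF/CF hardware structure.
      import cmath
      import numpy as np

      def goertzel_bin(x, k):
          n = len(x)
          w = 2 * cmath.pi * k / n
          coeff = 2 * cmath.cos(w)
          s_prev, s_prev2 = 0.0, 0.0
          for sample in x:
              s = sample + coeff * s_prev - s_prev2
              s_prev2, s_prev = s_prev, s
          # Final step converts the filter state into the DFT coefficient X[k].
          return cmath.exp(1j * w) * s_prev - s_prev2

      x = np.random.rand(256)
      k = 7
      print(abs(goertzel_bin(x, k) - np.fft.fft(x)[k]))   # ~0: matches the FFT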

  14. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  15. Pilot implementation

    DEFF Research Database (Denmark)

    Hertzum, Morten; Bansler, Jørgen P.; Havn, Erling C.

    2012-01-01

    implementation, and provide three empirical illustrations of our model. We conclude that pilot implementation has much merit as an ISD technique when system performance is contingent on context. But we also warn developers that, despite their seductive conceptual simplicity, pilot implementations can...

  16. Pilot implementation

    DEFF Research Database (Denmark)

    Hertzum, Morten; Bansler, Jørgen P.; Havn, Erling C.

    2012-01-01

    implementation and provide three empirical illustrations of our model. We conclude that pilot implementation has much merit as an ISD technique when system performance is contingent on context. But we also warn developers that, despite their seductive conceptual simplicity, pilot implementations can be difficult...

  17. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    Science.gov (United States)

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards, users can make a manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level by the Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
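
    AFIS1.0's exact Gabor surface features are not specified in the record above; as a hedged, generic illustration of Gabor feature extraction, the sketch below builds one Gabor kernel in NumPy, filters a placeholder image with SciPy, and summarizes the responses. Kernel parameters and the feature summary are assumptions.

      # Generic sketch of Gabor-filter feature extraction (not AFIS1.0's actual
      # pipeline): build Gabor kernels and summarize the filter responses.
      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0):
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          # Rotate coordinates, then modulate a Gaussian envelope with a cosine carrier.
          x_t = x * np.cos(theta) + y * np.sin(theta)
          y_t = -x * np.sin(theta) + y * np.cos(theta)
          gauss = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
          return gauss * np.cos(2 * np.pi * x_t / wavelength)

      image = np.random.rand(128, 128)      # placeholder for a specimen photograph
      features = []
      for theta in np.linspace(0, np.pi, 4, endpoint=False):
          response = convolve2d(image, gabor_kernel(theta=theta), mode="same")
          features.extend([response.mean(), response.std()])
      print(features)                        # 8-value feature vector for this image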

  18. Pilot Implementations

    DEFF Research Database (Denmark)

    Manikas, Maria Ie

    This PhD dissertation engages in the study of pilot (system) implementation. In the field of information systems, pilot implementations are commissioned as a way to learn from real use of a pilot system with real data, by real users during an information systems development (ISD) project and before...... objective. The prevalent understanding is that pilot implementations are an ISD technique that extends prototyping from the lab and into test during real use. Another perception is that pilot implementations are a project multiple of co-existing enactments of the pilot implementation. From this perspective...

  19. Improving radiation awareness and feeling of personal security of non-radiological medical staff by implementing a traffic light system in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Heilmaier, C.; Mayor, A.; Zuber, N.; Weishaupt, D. [Stadtspital Triemli, Zurich (Switzerland). Dept. of Radiology; Fodor, P. [Stadtspital Triemli, Zurich (Switzerland). Dept. of Anesthesiology and Intensive Care Medicine

    2016-03-15

    Non-radiological medical professionals often need to remain in the scanning room during computed tomography (CT) examinations to supervise patients in critical condition. Independent of protective devices, their position significantly influences the radiation dose they receive. The purpose of this study was to assess if a traffic light system indicating areas of different radiation exposure improves non-radiological medical staff's radiation awareness and feeling of personal security. Phantom measurements were performed to define areas of different dose rates and colored stickers were applied on the floor according to a traffic light system: green = lowest, orange = intermediate, and red = highest possible radiation exposure. Non-radiological medical professionals with different years of working experience evaluated the system using a structured questionnaire. Kruskal-Wallis and Spearman's correlation test were applied for statistical analysis. Fifty-six subjects (30 physicians, 26 nursing staff) took part in this prospective study. Overall rating of the system was very good, and almost all professionals tried to stand in the green stickers during the scan. The system significantly increased radiation awareness and feeling of personal protection particularly in staff with ≤ 5 years of working experience (p < 0.05). The majority of non-radiological medical professionals stated that staying in the green stickers and patient care would be compatible. Knowledge of radiation protection was poor in all groups, especially among entry-level employees (p < 0.05). A traffic light system in the CT scanning room indicating areas with lowest, intermediate, and highest possible radiation exposure is much appreciated. It increases radiation awareness, improves the sense of personal radiation protection, and may support endeavors to lower occupational radiation exposure, although the best radiation protection always is to remain outside the CT room during the scan.

  20. Outcomes of the implementation of the computer-assisted HBView system for the prevention of hepatitis B virus reactivation in chemotherapy patients: a retrospective analysis.

    Science.gov (United States)

    Sanagawa, Akimasa; Kuroda, Junko; Shiota, Arufumi; Kito, Noriko; Takemoto, Masashi; Kawade, Yoshihiro; Esaki, Tetsuo; Kimura, Kazunori

    2015-01-01

    Screening for hepatitis B virus (HBV) infection is recommended worldwide for patients receiving systemic chemotherapy in accordance with clinical guidelines, but compliance varies by country and facility. Alert systems may be useful for promoting screening, but it is unclear how effective such systems are. In this study, we investigated HBV screening procedures and their incorporation into treatment regimens following the implementation of an alert system. An alert system was introduced at our hospital in April 2012. The rates of HBV screening in the periods before and after the introduction of the alert system (September 2010 to March 2012 and April 2012 to October 2013, respectively) were investigated. We collected data on hepatitis B surface antigen (HBsAg), hepatitis B surface antibody (HBsAb), hepatitis B core antibody (HBcAb), and HBV-DNA testing in patients. As a result of this analysis, we developed a system in which pharmacists would intervene to check and confirm whether HBV screening had occurred in patients scheduled to begin treatment with chemotherapy. We named our project the "HBView" project, and the rate of HBV screening and the number of times pharmacists intervened were studied during specific time periods before and after the HBView project commenced (July 2013 to December 2013 and January 2014 to June 2014, respectively). After introducing the alert system, the percentage of patients tested for HBsAb/HBcAb and HBV-DNA increased significantly, from 71.6 % to 84.9 % and from 44.5 % to 69.7 %, respectively. However, the rate of compliance with HBV testing guidelines was not 100 % after interventions. The numbers of patients who were not screened but should have been before and after the introduction of HBView were 6 and 17, respectively. Two patients at risk of HBV reactivation were identified after intervention by pharmacists; their intervention thus prevented HBV reactivation. Compliance with clinical HBV screening guidelines was not

  1. Vectorization, parallelization and implementation of nuclear codes =MVP/GMVP, QMDRELP, EQMD, HSABC, CURBAL, STREAM V3.1, TOSCA, EDDYCAL, RELAP5/MOD2/C36-05, RELAP5/MOD3= on the VPP500 computer system. Progress report 1995 fiscal year

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Watanabe, Hideo; Fujita, Toyozo [Fujitsu Ltd., Tokyo (Japan); Kawai, Wataru; Harada, Hiroo; Gorai, Kazuo; Yamasaki, Kazuhiko; Shoji, Makoto; Fujii, Minoru

    1996-06-01

    At the Center for Promotion of Computational Science and Engineering, eight time-consuming nuclear codes suggested by users have been vectorized and parallelized on the VPP500 computer system. In addition, two nuclear codes used on the VP2600 computer system were implemented on the VPP500 computer system. The neutron and photon transport calculation code MVP/GMVP and the relativistic quantum molecular dynamics code QMDRELP have been parallelized. The extended quantum molecular dynamics code EQMD and the adiabatic base calculation code HSABC have been parallelized and vectorized. The ballooning turbulence simulation code CURBAL, the 3-D non-stationary compressible fluid dynamics code STREAM V3.1, the operating plasma analysis code TOSCA, and the eddy current analysis code EDDYCAL have been vectorized. The reactor safety analysis codes RELAP5/MOD2/C36-05 and RELAP5/MOD3 were implemented on the VPP500 computer system. (author)

  2. Computer architecture fundamentals and principles of computer design

    CERN Document Server

    Dumas II, Joseph D

    2005-01-01

    Contents: Introduction to Computer Architecture; What is Computer Architecture?; Architecture vs. Implementation; Brief History of Computer Systems; The First Generation; The Second Generation; The Third Generation; The Fourth Generation; Modern Computers - The Fifth Generation; Types of Computer Systems; Single Processor Systems; Parallel Processing Systems; Special Architectures; Quality of Computer Systems; Generality and Applicability; Ease of Use; Expandability; Compatibility; Reliability; Success and Failure of Computer Architectures and Implementations; Quality and the Perception of Quality; Cost Issues; Architectural Openness, Market Timi

  3. Quantum computer science

    CERN Document Server

    Lanzagorta, Marco

    2009-01-01

    In this text we present a technical overview of the emerging field of quantum computation along with new research results by the authors. What distinguishes our presentation from that of others is our focus on the relationship between quantum computation and computer science. Specifically, our emphasis is on the computational model of quantum computing rather than on the engineering issues associated with its physical implementation. We adopt this approach for the same reason that a book on computer programming doesn't cover the theory and physical realization of semiconductors. Another distin

  4. Center for computer security: Computer Security Group conference. Summary

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-06-01

    Topics covered include: computer security management; detection and prevention of computer misuse; certification and accreditation; protection of computer security, perspective from a program office; risk analysis; secure accreditation systems; data base security; implementing R and D; key notarization system; DOD computer security center; the Sandia experience; inspector general's report; and backup and contingency planning. (GHT)

  5. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  6. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  7. Numerical Implementation and Computer Simulation of Tracer ...

    African Journals Online (AJOL)

    , was most dependent on the source definition and the hydraulic conductivity K of the porous medium. The 12000mg/l chloride tracer source was almost completely dispersed within 34 hours. Keywords: Replication, Numerical simulation, ...

  8. Can patients with osteoporosis, who should benefit from implementation of the national service framework for older people, be identified from general practice computer records? A pilot study that illustrates the variability of computerized medical records and problems with searching them.

    Science.gov (United States)

    de Lusignan, S; Chan, T; Wells, S; Cooper, A; Harvey, M; Brew, S; Wright, M

    2003-11-01

    Although UK general practice is highly computerized, comprehensive use of these computers is often limited to registration data and the issue of repeat prescriptions. The recording of diagnostic data is patchy. This study examines whether patients with, or at risk of, osteoporosis can be readily identified from general practice computer records. It reports the findings of a pilot study designed to show the variability of recording the diagnosis of osteoporosis and osteopenia, as well as how useful surrogate markers might be to identify these patients. The study also illustrates the difficulties that even skilled practitioners in a primary care research network experience in extracting clinical data from practice information systems. Computer searches were carried out across six practices in a general practice research network in the south-east of England. Two of these practices had previously undertaken research projects in osteoporosis and were consequently expected to have excellent data quality in osteoporosis. These two practices had a combined list size of 27,500 and the remaining practices had a combined practice population of 43,000 patients. The data were found to be variable with over 10-fold differences between practices in the recorded prevalence of osteoporosis diagnosis as well as its surrogate markers, such as fragility fractures, long-term steroid prescription, etc. There was no difference in data quality between the two practices that had conducted osteoporosis research and the rest of the group, other than in the areas of diagnostic recording and prescribing for osteoporosis and recording of fractures. Issues were raised by the practices that struggled to identify patients at risk of osteoporosis about the limitations of Read classification in this disease area. Practices need further assistance if the patients at risk are to be identified. Without urgent action, it will be difficult for practices to identify the patients who are likely to benefit

  9. Computers in writing instruction

    NARCIS (Netherlands)

    Schwartz, Helen J.; van der Geest, Thea; Smit-Kreuzen, Marlies

    1992-01-01

    For computers to be useful in writing instruction, innovations should be valuable for students and feasible for teachers to implement. Research findings yield contradictory results in measuring the effects of different uses of computers in writing, in part because of the methodological complexity of

  10. Fama Architecture: Implementation Details

    Science.gov (United States)

    Alcolea, A.; Martinez, A.; Laguna, P.; Navarro, J.; Pollan, T.; Vicente, S. J.; Roy, A.

    1987-10-01

    The FAMA (Fine granularity Advanced Multiprocessor Architecture), currently being developed in the Department of Electrical Engineering and Computer Science of the University of Zaragoza, is an SIMD array architecture optimized for computer-vision applications. Because of its high cost-effectiveness, it is a very interesting alternative for industrial systems. Papers describing the processor element of FAMA have been submitted to several conferences; this paper focuses on the rest of the components that complete the architecture: controller, I/O interface, and software. The controller generates instructions at a 10 MHz rate, allowing efficient access to bidimensional data structures. The I/O interface is capable of reordering information for efficient I/O operations. Development tools and modules for classical computer-vision tasks are being worked on in a first stage; the implementation of models based on existing theories of human vision will follow.

  11. A Comparison of Ellipse-Fitting Techniques for Two and Three-Dimensional Strain Analysis, and Their Implementation in an Integrated Computer Program Designed for Field-Based Studies

    Science.gov (United States)

    Vollmer, F. W.

    2010-12-01

    A new computer program, EllipseFit 2, was developed to implement computational and graphical techniques for two and three-dimensional geological finite strain analysis. The program includes an integrated set of routines to derive three-dimensional strain from oriented digital photographs, with a graphical interface suitable for field-based structural studies. The intuitive interface and multi-platform deployment make it useful for structural geology teaching laboratories as well (the program is free). Images of oriented sections are digitized using center-point, five-point ellipse, or n-point polygon moment-equivalent ellipse fitting. The latter allows strain calculation from irregular polygons with sub-pixel accuracy (Steger, 1996; Mulchrone and Choudhury, 2004). Graphical strain ellipse techniques include center-to-center methods (Fry, 1979; Erslev, 1988; Erslev and Ge, 1990), with manual and automatic n-point ellipse-fitting. Graphical displays include axial length graphs, Rf/Φ graphs (Dunnet, 1969), logarithmic and hyperbolic polar graphs (Elliott, 1970; Wheeler, 1984) with automatic contouring, and strain maps. Best-fit ellipse calculations include harmonic and circular means, and eigenvalue (Shimamoto and Ikeda, 1976) and mean radial length (Mulchrone et al., 2003) shape-matrix calculations. Shape-matrix error analysis is done analytically (Mulchrone, 2005) and using bootstrap techniques (Efron, 1979). The initial data set can be unstrained to check variation in the calculated pre-strain fabric. Fitting of ellipse-section data to a best-fit ellipsoid (b*) is done using the shape-matrix technique of Shan (2008). Error analysis is done by calculating the section ellipses of b*, and comparing the misfits between calculated and observed section ellipses. Graphical displays of ellipsoid data include axial-ratio (Flinn, 1962) and octahedral strain magnitude (Hossack, 1968) graphs. Calculations were done to test and compare computational techniques. For two
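
    EllipseFit 2 itself is not reproduced here; as a hedged, simplified stand-in for the moment-based fitting it describes, the sketch below estimates an axial ratio Rf and orientation phi from the second moments (covariance) of sampled boundary points rather than the exact polygon-moment formulation of Steger (1996). The synthetic deformation matrix is an assumption.

      # Simplified second-moment (covariance) ellipse fit: axial ratio Rf and
      # orientation phi from sampled boundary points of a synthetically strained
      # circle. A didactic stand-in, not the exact polygon-moment formulation.
      import numpy as np

      t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      pts = np.column_stack([np.cos(t), np.sin(t)])       # unit circle
      strain = np.array([[1.8, 0.4], [0.0, 0.7]])         # assumed deformation
      pts = pts @ strain.T                                 # deformed "object"

      cov = np.cov(pts, rowvar=False)
      evals, evecs = np.linalg.eigh(cov)                   # ascending eigenvalues
      rf = np.sqrt(evals[1] / evals[0])                    # axial ratio Rf
      phi = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))   # long-axis orientation
      print(f"Rf = {rf:.2f}, phi = {phi:.1f} deg")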

  12. Computer Security Assistance Program

    Science.gov (United States)

    1997-09-01

    OPR: HQ AFCA/SYS (CMSgt Hogan). Certified by: HQ USAF/SCXX (Lt Col Francis X. McGovern). Pages: 5. Distribution: F. This instruction implements Air Force Policy Directive (AFPD) 33-2, Information Protection, establishes the Air Force Computer Security Assistance... Force single point of contact for reporting and handling computer security incidents and vulnerabilities, including AFCERT advisories and Defense

  13. Research in computer science

    Science.gov (United States)

    Ortega, J. M.

    1986-01-01

    Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete Cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.

  14. Undergraduate computational physics projects on quantum computing

    Science.gov (United States)

    Candela, D.

    2015-08-01

    Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
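
    As a hedged example of the kind of program such a project might produce (not the projects' actual code), the sketch below simulates Grover's search on a small statevector with NumPy; the qubit count and marked index are assumptions.

      # Compact statevector simulation of Grover's search for one marked index;
      # illustrative of the student projects described above, not their code.
      import numpy as np

      n_qubits = 5
      N = 2**n_qubits
      marked = 19                                  # assumed target index

      state = np.full(N, 1 / np.sqrt(N))           # uniform superposition
      iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

      for _ in range(iterations):
          state[marked] *= -1                      # oracle flips the marked amplitude
          state = 2 * state.mean() - state         # diffusion: inversion about the mean

      probabilities = state**2
      print("most probable index:", int(np.argmax(probabilities)))
      print("success probability: %.3f" % probabilities[marked])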

  15. Practical scientific computing

    CERN Document Server

    Muhammad, A

    2011-01-01

    Scientific computing is about developing mathematical models, numerical methods and computer implementations to study and solve real problems in science, engineering, business and even social sciences. Mathematical modelling requires deep understanding of classical numerical methods. This essential guide provides the reader with sufficient foundations in these areas to venture into more advanced texts. The first section of the book presents numEclipse, an open source tool for numerical computing based on the notion of MATLAB®. numEclipse is implemented as a plug-in for Eclipse, a leading integ

  16. Implementing TQM.

    Science.gov (United States)

    Bull, G; Maffetone, M A; Miller, S K

    1992-01-01

    Total quality management (TQM) is an organized, systematic approach to problem solving and continuous improvement. American corporations have found that TQM is an excellent way to improve competitiveness, lower operating costs, and improve productivity. Increasing numbers of laboratories are investigating the benefits of TQM. For this month's column, we asked our respondents: What steps has your laboratory taken to implement TQM?

  17. Implementation Politics

    DEFF Research Database (Denmark)

    Hegland, Troels Jacob; Raakjær, Jesper

    2008-01-01

    level are supplemented or even replaced by national priorities. The chapter concludes that in order to capture the domestic politics associated with CFP implementation in Denmark, it is important to understand the policy process as a synergistic interaction between dominant interests, policy alliances...

  18. Requirements for supercomputing in energy research: The transition to massively parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This report discusses: the emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation of a supercomputer production computing environment on massively parallel computers; and implementation of user transition to massively parallel computing.

  19. Enzyme Computation - Computing the Way Proteins Do

    Directory of Open Access Journals (Sweden)

    Jaime-Alberto Parra-Plaza

    2013-08-01

    Full Text Available This paper presents enzyme computation, a computational paradigm based on the molecular activity inside biological cells, particularly on the capacity of proteins to represent information, of enzymes to transform that information, and of genes to produce both elements according to the dynamic requirements of a given system. The paradigm exploits the rich computational possibilities offered by metabolic pathways and genetic regulatory networks and translates those possibilities into a distributed computational space made up of active agents which communicate through the mechanism of message passing. Enzyme computation has been tested on diverse problems, such as image processing, species classification, symbolic regression, and constraint satisfaction. Also, given its distributed nature, an implementation in dynamically reconfigurable hardware has been possible.

  20. GPU implementation of JPEG XR

    Science.gov (United States)

    Che, Ming-Chao; Liang, Jie

    2010-01-01

    JPEG XR (formerly Microsoft Windows Media Photo and HD Photo) is the latest image coding standard. By integrating various advanced technologies such as integer hierarchical lapped transform, context adaptive Huffman coding, and high dynamic range coding, it achieves competitive performance to JPEG-2000, but with lower computational complexity and memory requirement. In this paper, the GPU implementation of the JPEG XR codec using NVIDIA CUDA (Compute Unified Device Architecture) technology is investigated. Design considerations to speed up the algorithm are discussed, by taking full advantage of the properties of the CUDA framework and JPEG XR. Experimental results are presented to demonstrate the performance of the GPU implementation.

  1. Introduction to quantum computers

    CERN Document Server

    Berman, Gennady P; Mainieri, Ronnie; Tsifrinovich, Vladimir I

    1998-01-01

    Quantum computing promises to solve problems which are intractable on digital computers. Highly parallel quantum algorithms can decrease the computational time for some problems by many orders of magnitude. This important book explains how quantum computers can do these amazing things. Several algorithms are illustrated: the discrete Fourier transform, Shor’s algorithm for prime factorization; algorithms for quantum logic gates; physical implementations of quantum logic gates in ion traps and in spin chains; the simplest schemes for quantum error correction; correction of errors caused by im

  2. Initial findings from a mixed-methods evaluation of computer-assisted therapy for substance misuse in prisoners: Development, implementation and clinical outcomes from the ‘Breaking Free Health & Justice’ treatment and recovery programme

    Directory of Open Access Journals (Sweden)

    Sarah Elison

    2015-08-01

Full Text Available Background: Within the United Kingdom’s ‘Transforming Rehabilitation’ agenda, reshaping drug and alcohol interventions in prisons is central to the Government’s approach to addressing substance dependence in the prison population and reducing reoffending. To achieve this, a through-care project to support offenders following release, ‘Gateways’, is taking place, providing ‘through the gate’ support to released offenders, including help with organising accommodation, education and employment, and access to a peer supporter. In addition, Gateways is providing access to an evidence-based computer-assisted therapy (CAT) programme for substance misuse, Breaking Free Health & Justice (BFHJ). Developed in partnership with the Ministry of Justice (MoJ) National Offender Management Services (NOMS), and based on a community version of the programme, Breaking Free Online (BFO), BFHJ provides access to clinically robust techniques based on cognitive behavioural therapy (CBT) and promotes the role of technology-enhanced approaches in recovery from substance misuse. The BFHJ programme is provided via ‘Virtual Campus’ (VC), a secure, web-based learning environment delivered by NOMS and the Department for Business, Innovation and Skills, which has no links to websites not approved by the MoJ, and provides prisoners with access to online training courses around work and skills. Providing BFHJ on VC makes the programme the world’s first online healthcare programme to be provided in prisons. Aims: Although there is an emerging evidence base for the effectiveness of the community version of the BFO programme and its implementation within community treatment settings (Davies, Elison, Ward, & Laudet, 2015; Elison, Davies, & Ward, 2015a, 2015b; Elison, Humphreys, Ward, & Davies, 2013; Elison, Ward, Davies, Lidbetter, et al., 2014; Elison, Ward, Davies, & Moody, 2014), its potential within prison settings requires exploration. This study therefore sought to

  3. Spatial grammar implementation

    DEFF Research Database (Denmark)

    McKay, Alison; Chase, Scott Curland; Shea, Kristina

    2012-01-01

fluid, demands conceptual design tools that support designers’ ways of thinking and working, and enhance creativity, for example, by offering design alternatives difficult, or not possible, without the use of such tools. The potential of spatial grammars as a technology to support such design tools has...... been demonstrated through experimental research prototypes since the 1970s. In this paper, we provide a review of recent spatial grammar implementations, which were presented in the Design Computing and Cognition 2010 workshop on which this paper is based, in the light of requirements for conceptual...

  4. Algorithms on ensemble quantum computers.

    Science.gov (United States)

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σ_z^(1/4), as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
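
    The ensemble-measurement restriction itself is easy to state in code: only averages such as the per-qubit expectation value of sigma_z are available, never an individual projective outcome. The sketch below merely illustrates that restriction on a simulated state vector; it is not one of the modified algorithms from the paper, and the qubit-ordering convention is an assumption.

        import numpy as np

        def z_expectations(psi, n_qubits):
            """Expectation value of sigma_z for each qubit of an n-qubit state vector.

            In the ensemble model this is the only kind of readout available:
            an average over all ensemble members, never a single projective outcome.
            """
            probs = np.abs(psi) ** 2
            expectations = []
            for q in range(n_qubits):
                # Bit q of the basis-state index (least-significant-bit convention assumed)
                # determines the sigma_z eigenvalue: +1 for bit 0, -1 for bit 1.
                signs = np.array([1.0 if (i >> q) & 1 == 0 else -1.0 for i in range(len(psi))])
                expectations.append(float(np.sum(probs * signs)))
            return expectations

        # Example: the Bell state (|00> + |11>)/sqrt(2) gives <sigma_z> = 0 on both qubits,
        # so expectation values alone cannot tell which branch an individual member is in.
        bell = np.zeros(4)
        bell[0] = bell[3] = 1 / np.sqrt(2)
        print(z_expectations(bell, 2))   # [0.0, 0.0]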

  5. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2010-01-01

    Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We introduce a model of computation that is evidently programmable......, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only...... in a strong sense: a universal algorithm exists, that is able to execute any program, and is not asymptotically inefficient. A prototype model has been implemented (for now in silico on a conventional computer). This work opens new perspectives on just how computation may be specified at the biological level....

  6. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.   §  Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models §  Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  7. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

Optoelectronic Implementation of Neural Networks – Use of Optics in Computing. R Ramachandran. General Article, Resonance – Journal of Science Education, Volume 3, Issue 9, September 1998, pp. 45-55.

  8. Implementation of computational model for the evaluation of electromagnetic susceptibility of the cables for communication and control of high voltage substations; Implementacao de modelo computacional para a avaliacao da suscetibilidade eletromagnetica dos cabos de comunicacao e controle de subestacoes de alta tensao

    Energy Technology Data Exchange (ETDEWEB)

    Sartin, Antonio C.P. [Companhia de Transmissao de Energia Eletrica Paulista (CTEEP), Bauru, SP (Brazil); Dotto, Fabio R.L.; Sant' Anna, Cezar J.; Thomazella, Rogerio [Fundacao para o Desenvolvimento de Bauru, SP (Brazil); Ulson, Jose A.C.; Aguiar, Paulo R. de [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Bauru, SP (Brazil)

    2009-07-01

This work presents the implementation of an electromagnetic model, investigated in the literature and adapted, for the supervision, protection, communication, and control cables of high-voltage substations. The model was implemented using a computational tool in order to obtain the electromagnetic behavior of the various cables used in a CTEEP substation when subjected to the several sources of electromagnetic interference present in this inhospitable environment, such as lightning strikes, switching surges, and the corona effect. The results obtained in computer simulations were compared with results of laboratory tests carried out on a set of cables representative of the systems present in 440 kV substations. This study characterized the electromagnetic interference, classified it, and identified possibly susceptible points in the substation, which contributed to the development of a technical procedure that minimizes unwanted effects in the communication and control systems of the substation. The procedure also helps assure maximum reliability and availability in the operation of the company's electrical power system.

  9. Computers and Computer Cultures.

    Science.gov (United States)

    Papert, Seymour

    1981-01-01

    Instruction using computers is viewed as different from most other approaches to education, by allowing more than right or wrong answers, by providing models for systematic procedures, by shifting the boundary between formal and concrete processes, and by influencing the development of thinking in many new ways. (MP)

  10. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
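
    CORDIC is one classic member of the shift-and-add family treated in Part II. The following floating-point sketch is purely didactic, not code from the book, and it omits the fixed-point and error-analysis details that a real implementation needs.

        import math

        def cordic_sin_cos(theta, n_iter=32):
            """Rotation-mode CORDIC: returns (cos(theta), sin(theta)) for |theta| <~ 1.74 rad."""
            # Elementary rotation angles atan(2^-i) and the overall scaling constant K.
            angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
            K = 1.0
            for i in range(n_iter):
                K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

            x, y, z = 1.0, 0.0, theta
            for i in range(n_iter):
                d = 1.0 if z >= 0.0 else -1.0          # rotate toward zero residual angle
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return K * x, K * y

        print(cordic_sin_cos(0.5))            # approximately (0.87758, 0.47943)
        print(math.cos(0.5), math.sin(0.5))   # reference values

    Each iteration uses only an add, a subtract, and a multiplication by a power of two, which is why the scheme maps so naturally to shift-and-add hardware.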

  11. Cloud Computing

    Indian Academy of Sciences (India)

    IAS Admin

    2014-03-01

Mar 1, 2014 ... group of computers connected to the Internet in a cloud-like boundary (Box 1). In essence, computing is transitioning from an era of users owning computers to one in which users do not own computers but have access to computing hardware and software maintained by providers. Users access the ...

  12. A Hardware Filesystem Implementation with Multidisk Support

    National Research Council Canada - National Science Library

    Mendon, Ashwin A; Schmidt, Andrew G; Sass, Ron

    2009-01-01

    .... This article describes one such innovation: a filesystem implemented in hardware. This has the potential of improving the performance of data-intensive applications by connecting secondary storage directly to FPGA compute accelerators...

  13. Convergence: Computing and communications

    Energy Technology Data Exchange (ETDEWEB)

    Catlett, C. [National Center for Supercomputing Applications, Champaign, IL (United States)

    1996-12-31

This paper highlights the operations of the National Center for Supercomputing Applications (NCSA). NCSA is developing and implementing a national strategy to create, use, and transfer advanced computing and communication tools and information technologies for science, engineering, education, and business. The primary focus of the presentation is historical and expected growth in computing capacity, personal computer performance, and Internet and World Wide Web sites. Data are presented to show changes over the past 10 to 20 years in these areas. 5 figs., 4 tabs.

  14. Quantum Computation

    Indian Academy of Sciences (India)

Quantum Computation – Particle and Wave Aspects of Algorithms. Apoorva Patel. General Article, Resonance – Journal of Science Education, Volume 16, Issue 9. Keywords: Boolean logic; computation; computational complexity; digital language; Hilbert space; qubit; superposition; Feynman.

  15. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  16. Cooperative Caching Framework for Mobile Cloud Computing

    OpenAIRE

    Joy, Preetha Theresa; Jacob, K. Poulose

    2013-01-01

Due to the advancement in mobile devices and wireless networks, mobile cloud computing, which combines mobile computing and cloud computing, has gained momentum since 2009. The characteristics of mobile devices and wireless networks make the implementation of mobile cloud computing more complicated than for fixed clouds. This section lists some of the major issues in mobile cloud computing. One of the key issues in mobile cloud computing is the end-to-end delay in servicing a request. Data cach...
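
    The end-to-end delay mentioned here is exactly what local caching tries to hide. As a purely illustrative sketch (a plain least-recently-used cache, not the cooperative framework proposed in the paper; the fetch function below stands in for a cloud request):

        from collections import OrderedDict

        class LRUCache:
            """Tiny least-recently-used cache; capacity and fetch function are illustrative."""
            def __init__(self, capacity, fetch):
                self.capacity = capacity
                self.fetch = fetch            # slow path, e.g. a request to the cloud
                self.store = OrderedDict()

            def get(self, key):
                if key in self.store:
                    self.store.move_to_end(key)       # mark as most recently used
                    return self.store[key]
                value = self.fetch(key)               # cache miss: pay the end-to-end delay once
                self.store[key] = value
                if len(self.store) > self.capacity:
                    self.store.popitem(last=False)    # evict the least recently used entry
                return value

        cache = LRUCache(capacity=2, fetch=lambda k: f"payload-for-{k}")
        print(cache.get("a"), cache.get("b"), cache.get("a"))  # the second "a" is served locally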

  17. Layered Architecture for Quantum Computing

    National Research Council Canada - National Science Library

    Jones, N. Cody; Van Meter, Rodney; Fowler, Austin G; McMahon, Peter L; Kim, Jungsang; Ladd, Thaddeus D; Yamamoto, Yoshihisa

    2012-01-01

    .... We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction...

  18. Fostering Computational Thinking

    CERN Document Server

    Caballero, Marcos D; Schatz, Michael F

    2011-01-01

    Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term 1357 students in this course solved a suite of fourteen computational modeling homework questions delivered using an online commercial course management system. Their proficiency with computational modeling was evaluated in a proctored environment using a novel central force problem. The majority of students (60.4%) successfully completed the evaluation. Analysis of erroneous student-submitted programs indicated that a small set of student errors explained why most programs failed. We discuss the design and implementation of the computational modeling homework and evaluation, the results from the evaluation and the implications for instruction in computational modeling in introductory STEM courses.
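
    The kind of iterative prediction such students write can be sketched without VPython; the snippet below integrates motion under an attractive inverse-square central force with the Euler-Cromer update, using made-up initial conditions and units.

        import numpy as np

        # Iterative "momentum update" for motion under an attractive inverse-square central force.
        GM = 1.0                                    # assumed force constant, arbitrary units
        r = np.array([1.0, 0.0, 0.0])               # assumed initial position
        v = np.array([0.0, 0.8, 0.0])               # assumed initial velocity
        dt = 0.001

        for step in range(20000):
            F = -GM * r / np.linalg.norm(r) ** 3    # central force directed toward the origin
            v = v + F * dt                          # update velocity first (unit mass assumed)
            r = r + v * dt                          # then update position (Euler-Cromer)

        print("final position:", r)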

  19. Computational Ocean Acoustics

    CERN Document Server

    Jensen, Finn B; Porter, Michael B; Schmidt, Henrik

    2011-01-01

    Since the mid-1970s, the computer has played an increasingly pivotal role in the field of ocean acoustics. Faster and less expensive than actual ocean experiments, and capable of accommodating the full complexity of the acoustic problem, numerical models are now standard research tools in ocean laboratories. The progress made in computational ocean acoustics over the last thirty years is summed up in this authoritative and innovatively illustrated new text. Written by some of the field's pioneers, all Fellows of the Acoustical Society of America, Computational Ocean Acoustics presents the latest numerical techniques for solving the wave equation in heterogeneous fluid–solid media. The authors discuss various computational schemes in detail, emphasizing the importance of theoretical foundations that lead directly to numerical implementations for real ocean environments. To further clarify the presentation, the fundamental propagation features of the techniques are illustrated in color. Computational Ocean A...

  20. Computing Prosodic Morphology

    CERN Document Server

    Kiraz, G A

    1996-01-01

    This paper establishes a framework under which various aspects of prosodic morphology, such as templatic morphology and infixation, can be handled under two-level theory using an implemented multi-tape two-level model. The paper provides a new computational analysis of root-and-pattern morphology based on prosody.

  1. Parallel Computational Protein Design.

    Science.gov (United States)

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy solution (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computation bottleneck of the large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedups in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing the optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle the problems in which the conformation space is too large and the global optimal solution could not be computed previously. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with the state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
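
    gOSPREY's GPU-parallel, bounded-memory A* is far more involved than anything that fits here; the sketch below only illustrates the sequential A* pattern it builds on (a priority queue ordered by cost so far plus an admissible heuristic), on a toy grid problem with all names invented for the example.

        import heapq

        def a_star(start, goal, neighbors, heuristic):
            """Generic A* search: returns a lowest-cost path, assuming an admissible heuristic."""
            frontier = [(heuristic(start), 0.0, start, [start])]
            best_g = {start: 0.0}
            while frontier:
                f, g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path, g
                for nxt, step_cost in neighbors(node):
                    g_new = g + step_cost
                    if g_new < best_g.get(nxt, float("inf")):
                        best_g[nxt] = g_new
                        heapq.heappush(frontier, (g_new + heuristic(nxt), g_new, nxt, [*path, nxt]))
            return None, float("inf")

        # Toy usage: shortest path on a 5x5 grid with a Manhattan-distance heuristic.
        def grid_neighbors(p):
            x, y = p
            return [((x + dx, y + dy), 1.0) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < 5 and 0 <= y + dy < 5]

        path, cost = a_star((0, 0), (4, 4), grid_neighbors,
                            lambda p: abs(p[0] - 4) + abs(p[1] - 4))
        print(cost, path)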

  2. COMPUTER SUPPORT MANAGEMENT PRODUCTION

    Directory of Open Access Journals (Sweden)

    Svetlana Trajković

    2014-10-01

Full Text Available In the modern age in which we live, surrounded by modern and highly advanced technology, computer support has become very important in the management of production. Computer applications in production, in the organization of production systems, and in the organization of management and business are gaining in importance. We live in a time of ever-increasing use of computer technology, which opens a broad and important area of application for computer systems in production, together with methods that enable their successful implementation, for example in the management of production. Computer technology speeds up the processing and transfer of the information needed for decision-making at various levels of management. Computer applications in production management and in the organizational management of the business production system are becoming ever more widespread. A new generation of computers caused the first technological revolution in industry; building on these solutions, industry has been able to use modern computer technology in manufacturing, automation, and production management.

  3. Comparison of Orthogonal Matching Pursuit Implementations

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Christensen, Mads Græsbøll

    2012-01-01

    We study the numerical and computational performance of three implementations of orthogonal matching pursuit: one using the QR matrix decomposition, one using the Cholesky matrix decomposition, and one using the matrix inversion lemma. We find that none of these implementations suffer from...
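
    For orientation, a naive orthogonal matching pursuit that re-solves the restricted least-squares problem with a generic solver (rather than any of the three factorization-based update schemes compared in the paper) can be sketched as follows; the dictionary and sparsity level are arbitrary.

        import numpy as np

        def omp(A, y, k):
            """Naive orthogonal matching pursuit: greedily pick k atoms of A to explain y = A x."""
            residual = y.copy()
            support = []
            x = np.zeros(A.shape[1])
            for _ in range(k):
                # Select the atom most correlated with the current residual.
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                # Re-solve the least-squares problem restricted to the chosen atoms.
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coeffs
            x[support] = coeffs
            return x

        # Usage: recover a 3-sparse vector from a random 20x50 normalized dictionary.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 50))
        A /= np.linalg.norm(A, axis=0)
        x_true = np.zeros(50)
        x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
        x_hat = omp(A, A @ x_true, 3)
        print(np.round(x_hat[[3, 17, 41]], 3))    # ideally close to [1.0, -2.0, 0.5]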

  4. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kinds of spatiotemporal computation. The same applies when utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be the better solution for energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  5. Subspace Detectors: Efficient Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Paik, T

    2006-07-26

The optimum detector for a known signal in white Gaussian background noise is the matched filter, also known as a correlation detector [Van Trees, 1968]. Correlation detectors offer exquisite sensitivity (high probability of detection at a fixed false alarm rate), but require perfect knowledge of the signal. The sensitivity of correlation detectors is increased by the availability of multichannel data, something common in seismic applications due to the prevalence of three-component stations and arrays. When the signal is imperfectly known, an extension of the correlation detector, the subspace detector, may be able to capture much of the performance of a matched filter [Harris, 2006]. In order to apply a subspace detector, the signal to be detected must be known to lie in a signal subspace of dimension d ≥ 1, which is defined by a set of d linearly-independent basis waveforms. The basis is constructed to span the range of signals anticipated to be emitted by a source of interest. Correlation detectors operate by computing a running correlation coefficient between a template waveform (the signal to be detected) and the data from a window sliding continuously along a data stream. The template waveform and the continuous data stream may be multichannel, as would be true for a three-component seismic station or an array. In such cases, the appropriate correlation operation computes the individual correlations channel-for-channel and sums the result (Figure 1). Both the waveform matching that occurs when a target signal is present and the cross-channel stacking provide processing gain. For a three-component station processing gain occurs from matching the time-history of the signals and their polarization structure. The projection operation that is at the heart of the subspace detector can be expensive to compute if implemented in a straightforward manner, i.e. with direct-form convolutions. The purpose of this report is to indicate how the projection can be
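
    As a sketch of the running correlation the report starts from (only the multichannel correlation detector; the subspace projection and its efficient implementation are the report's actual subject), with synthetic data standing in for seismic channels:

        import numpy as np

        def running_correlation(stream, template):
            """Channel-summed normalized correlation of a multichannel template with a stream.

            stream:   array of shape (n_channels, n_samples)
            template: array of shape (n_channels, template_len)
            Returns one correlation value per candidate window start.
            """
            n_ch, n_samp = stream.shape
            _, t_len = template.shape
            t_energy = np.sqrt(np.sum(template ** 2))
            out = np.empty(n_samp - t_len + 1)
            for start in range(n_samp - t_len + 1):
                window = stream[:, start:start + t_len]
                w_energy = np.sqrt(np.sum(window ** 2))
                # Correlate channel-for-channel and stack (sum) across channels.
                out[start] = np.sum(window * template) / (w_energy * t_energy + 1e-12)
            return out

        # Usage: the correlation peak should appear where the template was embedded in noise.
        rng = np.random.default_rng(1)
        template = rng.standard_normal((3, 50))
        stream = 0.1 * rng.standard_normal((3, 1000))
        stream[:, 400:450] += template
        print(int(np.argmax(running_correlation(stream, template))))   # ~400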

  6. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  7. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....

  8. Quantum computing

    OpenAIRE

    Traub, Joseph F.

    2014-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  9. RELAP4/MOD5: a computer program for transient thermal-hydraulic analysis of nuclear reactors and related systems. User's manual. Volume II. Program implementation. [PWR and BWR

    Energy Technology Data Exchange (ETDEWEB)

    None

    1976-09-01

This portion of the RELAP4/MOD5 User's Manual presents the details of setting up and entering the reactor model to be evaluated. The input card format and arrangement is presented in depth, including not only cards for data but also those for editing and restarting. Problem initialization, including pressure distribution and energy balance, is discussed. A section entitled "User Guidelines" is included to provide modeling recommendations, analysis and verification techniques, and computational difficulty resolution. The section is concluded with a discussion of the computer output form and format.

  10. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  11. Computer Timetabling and Curriculum Planning.

    Science.gov (United States)

    Zarraga, M. N.; Bates, S.

    1980-01-01

    A Manchester, England, high school designed lower school curriculum structures via computer and investigated their feasibility using the Nor Data School Scheduling System. The positive results suggest that the computer system could provide all schools with an invaluable aid to the planning and implementation of their curriculum. (CT)

  12. Neuromorphic Computing for Cognitive Cybersecurity

    Science.gov (United States)

    2017-03-20

neuromorphic computing using threshold gate networks. Keywords: neuromorphic computing; neural networks; cybersecurity. Introduction: traditional CMOS scaling ... for neural nets is a combined multiply-accumulate operation, evaluated by using a threshold. Neurons of this type implement threshold gates ... there is much work to be done in the following areas: one-shot learning, unsupervised learning, and concept drift; data reduction techniques.

  13. Experimental Aspects of Quantum Computing

    CERN Document Server

    Everitt, Henry O

    2005-01-01

Practical quantum computing still seems more than a decade away, and researchers have not even identified what the best physical implementation of a quantum bit will be. There is a real need in the scientific literature for a dialog on the topic of lessons learned and looming roadblocks. These papers, which appeared in the journal Quantum Information Processing, are dedicated to the experimental aspects of quantum computing. They highlight the lessons learned over the last ten years, outline the challenges over the next ten years, and discuss the most promising physical implementations of quantum computing.

  14. Interfacing the Paramesh Computational Libraries to the Cactus Computational Framework Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We will design and implement an interface between the Paramesh computational libraries, developed and used by groups at NASA GSFC, and the Cactus computational...

  15. Computer Assisted Instruction Authoring System. Final Report. Appendix A: Photographs of Some Screen Displays. Experimental Authoring System (XAS). Appendix B: Author Manual for the DERS Interactive Experimental Authoring System (XAS). Appendix C: Syntax of Implementation Two. Computer Assisted Instruction Authoring System.

    Science.gov (United States)

    Hunka, S.

    This project report details the design of an interactive authoring system for the development of computer assisted instructional (CAI) software. This system is possible because the development of more powerful computing and software systems has facilitated authoring systems which allow the development of courseware in an interactive mode and…

  16. Computational dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Siebert, B.R.L.; Thomas, R.H.

    1996-01-01

The paper presents a definition of the term "Computational Dosimetry" that is interpreted as the sub-discipline of computational physics which is devoted to radiation metrology. It is shown that computational dosimetry is more than a mere collection of computational methods. Computational simulations directed at basic understanding and modelling are important tools provided by computational dosimetry, while another very important application is the support that it can give to the design, optimization and analysis of experiments. However, the primary task of computational dosimetry is to reduce the variance in the determination of absorbed dose (and its related quantities), for example in the disciplines of radiological protection and radiation therapy. In this paper emphasis is given to the discussion of potential pitfalls in the applications of computational dosimetry and recommendations are given for their avoidance. The need for comparison of calculated and experimental data whenever possible is strongly stressed.

  17. Quantum computing

    OpenAIRE

    Li, Shu-shen; Long, Gui-Lu; Bai, Feng-Shan; Feng, Song-Lin; Zheng, Hou-Zhi

    2001-01-01

    Quantum computing is a quickly growing research field. This article introduces the basic concepts of quantum computing, recent developments in quantum searching, and decoherence in a possible quantum dot realization.

  18. Computer implemented land cover classification using LANDSAT MSS digital data: A cooperative research project between the National Park Service and NASA. 3: Vegetation and other land cover analysis of Shenandoah National Park

    Science.gov (United States)

    Cibula, W. G.

    1981-01-01

Four LANDSAT frames, each corresponding to one of the four seasons, were spectrally classified and processed using NASA-developed computer programs. One data set was selected, or two or more data sets were merged, to improve surface cover classifications. Selected areas representing each spectral class were chosen and transferred to USGS 1:62,500 topographic maps for field use. Ground truth data were gathered to verify the accuracy of the classifications. Acreages were computed for each of the land cover types. The application of elevational data to seasonal LANDSAT frames resulted in the separation of high elevation meadows (both with and without recently emergent perennial vegetation) as well as areas in oak forests which have an evergreen understory as opposed to other areas which do not.
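
    The acreage step amounts to counting classified pixels and scaling by the per-pixel ground area; the toy sketch below assumes a nominal MSS pixel footprint and is not the project's actual processing code.

        import numpy as np

        SQ_METERS_PER_ACRE = 4046.8564224

        def class_acreages(classified, pixel_area_m2):
            """Acreage per land-cover class from a classified raster of integer class codes."""
            classes, counts = np.unique(classified, return_counts=True)
            acres = counts * pixel_area_m2 / SQ_METERS_PER_ACRE
            return dict(zip(classes.tolist(), acres.tolist()))

        # Usage with a toy 4x5 "classified frame"; 79 m x 57 m is an assumed MSS pixel footprint.
        raster = np.array([[1, 1, 2, 2, 3],
                           [1, 2, 2, 3, 3],
                           [1, 1, 2, 3, 3],
                           [4, 4, 2, 3, 3]])
        print(class_acreages(raster, pixel_area_m2=79 * 57))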

  19. Roadmap for Peridynamic Software Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

The application of peridynamics for engineering analysis requires an efficient and robust software implementation. Key elements include processing of the discretization, the proximity search for identification of pairwise interactions, evaluation of the constitutive model, application of a bond-damage law, and contact modeling. Additional requirements may arise from the choice of time integration scheme, for example estimation of the maximum stable time step for explicit schemes, and construction of the tangent stiffness matrix for many implicit approaches. This report summarizes progress to date on the software implementation of the peridynamic theory of solid mechanics. Discussion is focused on parallel implementation of the meshfree discretization scheme of Silling and Askari [33] in three dimensions, although much of the discussion applies to computational peridynamics in general.

  20. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact cloud computing offers even more than this. With usage of virtual computing clusters a runtime environment for high performance computing can be efficiently implemented also in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  1. Cognitive Computing

    OpenAIRE

    2015-01-01

    "Cognitive Computing" has initiated a new era in computer science. Cognitive computers are not rigidly programmed computers anymore, but they learn from their interactions with humans, from the environment and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assist medical doctors in diagnosis and th...

  2. Computable models

    CERN Document Server

    Turner, Raymond

    2009-01-01

    Computational models can be found everywhere in present day science and engineering. In providing a logical framework and foundation for the specification and design of specification languages, Raymond Turner uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al

  3. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  4. Computing fundamentals introduction to computers

    CERN Document Server

    Wempen, Faithe

    2014-01-01

    The absolute beginner's guide to learning basic computer skills Computing Fundamentals, Introduction to Computers gets you up to speed on basic computing skills, showing you everything you need to know to conquer entry-level computing courses. Written by a Microsoft Office Master Instructor, this useful guide walks you step-by-step through the most important concepts and skills you need to be proficient on the computer, using nontechnical, easy-to-understand language. You'll start at the very beginning, getting acquainted with the actual, physical machine, then progress through the most common

  5. Computational Complexity

    Directory of Open Access Journals (Sweden)

    J. A. Tenreiro Machado

    2017-02-01

    Full Text Available Complex systems (CS involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...

  6. Optical Computing

    Indian Academy of Sciences (India)

    tal computers are still some years away, however a number of devices that can ultimately lead to real optical computers have already been manufactured, including optical logic gates, optical switches, optical interconnections, and opti- cal memory. The most likely near-term optical computer will really be a hybrid composed ...

  7. Quantum Computing

    Indian Academy of Sciences (India)

In the early 1980s Richard Feynman noted that quantum systems cannot be efficiently simulated on a classical computer. Till then the accepted view was that any reasonable model of computation can be efficiently simulated on a classical computer. Hence, this observation led to a lot of rethinking about the basic ...

  8. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  9. Cloud Computing

    Indian Academy of Sciences (India)

Keywords: cloud computing; services on a cloud; cloud types; computing utility; risks in using cloud computing. V Rajaraman, Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore 560 012, India. Resonance – Journal of Science Education.

  10. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  11. Diamond turning machine controller implementation

    Energy Technology Data Exchange (ETDEWEB)

    Garrard, K.P.; Taylor, L.W.; Knight, B.F.; Fornaro, R.J.

    1988-12-01

The standard controller for a Pneumo ASG 2500 Diamond Turning Machine, an Allen Bradley 8200, has been replaced with a custom high-performance design. This controller consists of four major components. Axis position feedback information is provided by a Zygo Axiom 2/20 laser interferometer with 0.1 micro-inch resolution. Hardware interface logic couples the computer's digital and analog I/O channels to the diamond turning machine's analog motor controllers, the laser interferometer, and other machine status and control information. It also provides front panel switches for operator override of the computer controller and implements the emergency stop sequence. The remaining two components, the control computer hardware and software, are discussed in detail below.

  12. Computer science. Informatica

    Energy Technology Data Exchange (ETDEWEB)

    1985-01-01

In any company, an information system is of fundamental importance in decision-making, monitoring, and implementation. The planned, coherent use of computers can play a key role. Technological developments have resulted in reductions in data processing costs; automatic data processing is now being introduced into more areas of work. The expansion of technologies such as communication services and networks, VDUs, personal micro-computers and graphic equipment, together with the development of very high-level languages, has led to the proliferation of user-friendly information systems involving interactive interfaces based on graphic data. HUNOSA has interests in these areas.

  13. Computer Mediated Communication

    Science.gov (United States)

    Fano, Robert M.

    1984-08-01

    The use of computers in organizations is discussed in terms of its present and potential role in facilitating and mediating communication between people. This approach clarifies the impact that computers may have on the operation of organizations and on the individuals comprising them. Communication, which is essential to collaborative activities, must be properly controlled to protect individual and group privacy, which is equally essential. Our understanding of the human and organizational aspects of controlling communication and access to information presently lags behind our technical ability to implement the controls that may be needed.

  14. The computer graphics metafile

    CERN Document Server

    Henderson, LR; Shepherd, B; Arnold, D B

    1990-01-01

    The Computer Graphics Metafile deals with the Computer Graphics Metafile (CGM) standard and covers topics ranging from the structure and contents of a metafile to CGM functionality, metafile elements, and real-world applications of CGM. Binary Encoding, Character Encoding, application profiles, and implementations are also discussed. This book is comprised of 18 chapters divided into five sections and begins with an overview of the CGM standard and how it can meet some of the requirements for storage of graphical data within a graphics system or application environment. The reader is then intr

  15. Actor Model of Computation for Scalable Robust Information Systems : One computer is no computer in IoT

    OpenAIRE

    Hewitt, Carl

    2015-01-01

The Actor Model is a mathematical theory that treats “Actors” as the universal conceptual primitives of digital computation. Hypothesis: All physically possible computation can be directly implemented using Actors. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. The advent of massive concurrency through client-cloud computing and many-cor...
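
    A toy rendering of the message-passing primitive (ordinary Python threads and queues, not any particular Actor framework, and far from the full semantics of the model) can make the idea concrete:

        import queue
        import threading

        class Actor:
            """Minimal actor: a mailbox plus a behaviour applied to each received message."""
            def __init__(self, behaviour):
                self.mailbox = queue.Queue()
                self.behaviour = behaviour
                threading.Thread(target=self._run, daemon=True).start()

            def send(self, message):
                self.mailbox.put(message)        # sending a message is the only way in

            def _run(self):
                while True:
                    message = self.mailbox.get()
                    self.behaviour(message)

        done = threading.Event()

        def print_and_finish(msg):
            print("printer received:", msg)
            done.set()

        printer = Actor(print_and_finish)
        doubler = Actor(lambda msg: printer.send(2 * msg))

        doubler.send(21)
        done.wait(timeout=1.0)                   # wait for the message to flow doubler -> printer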

  16. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  17. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  18. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  19. Advances in photonic reservoir computing

    Science.gov (United States)

    Van der Sande, Guy; Brunner, Daniel; Soriano, Miguel C.

    2017-05-01

    We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir's complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.

  20. Advances in photonic reservoir computing

    Directory of Open Access Journals (Sweden)

    Van der Sande Guy

    2017-05-01

    Full Text Available We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
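
    The training principle shared by both approaches (a fixed random reservoir whose transient states are read out by a single trained linear layer) can be illustrated in software with an echo-state-style sketch; every size, scaling, and the toy prediction task below are assumptions, and a photonic reservoir is of course physical hardware rather than a simulated tanh network.

        import numpy as np

        rng = np.random.default_rng(42)

        # Toy task: predict the next sample of a noisy sine-like signal from its history.
        T = 2000
        u = np.sin(0.2 * np.arange(T + 1)) + 0.1 * rng.standard_normal(T + 1)

        # Fixed random reservoir: only the linear readout below is ever trained.
        n_res = 200
        W_in = 0.5 * rng.standard_normal((n_res, 1))
        W = rng.standard_normal((n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

        states = np.zeros((T, n_res))
        x = np.zeros(n_res)
        for t in range(T):
            x = np.tanh(W @ x + W_in[:, 0] * u[t])         # high-dimensional transient response
            states[t] = x

        # Linear readout fitted by ridge regression on the collected reservoir states.
        washout, ridge = 100, 1e-6
        X, y = states[washout:], u[washout + 1: T + 1]
        W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

        pred = X @ W_out
        print("train NMSE: %.4f" % (np.mean((pred - y) ** 2) / np.var(y)))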

  1. 心算教學活動實踐於小一數學課室之研究 Mental Computation Activity Implementation into First-Grade Mathematics Classes

    Directory of Open Access Journals (Sweden)

    楊德清 Der-Ching Yang

    2012-06-01

Full Text Available This study employs a qualitative approach to investigate the effect of mental computation activities integrated into first-grade mathematics classes. The mental computation activities, covering 2-digit addition and subtraction problems, were used in an experimental group comprising 21 students. The control group comprised 16 students who followed the textbook activities for the same 2-digit addition and subtraction problems. The teaching intervention lasted for 12 periods in both groups. The results show that students in the experimental group performed significantly better in mental computation than students in the control group following the intervention. Additionally, the data indicate that students in the experimental group could develop and use multiple mental strategies, such as separation, aggregation, and holistic strategies, following the intervention. Conversely, students in the control group preferred using counting and pictorial

  2. Chaotic Neuro-Computer

    Science.gov (United States)

    Horio, Yoshihiko; Aihara, Kazuyuki

This chapter describes mixed analog/digital circuit implementations of a chaotic neuro-computer system. The chaotic neuron model is implemented with a switched-capacitor (SC) integrated circuit technique. The analog SC circuit can handle real numbers electrically in the sense that the state variables of the analog circuits are continuous. Therefore, chaotic dynamics can be faithfully replicated with the SC chaotic neuron circuit. The synaptic connections, on the other hand, are realized with digital circuits to accommodate a vast number of synapses. We propose a memory-based digital synapse circuit architecture that draws upon the table look-up method to achieve rapid calculation of a large number of weighted summations. The first generation chaotic neuro-computer with 16 SC neurons and 256 synapses is reviewed. Finally, a large-scale system with 10000 neurons and 10000² synapses is described.

  3. Numerical computations with GPUs

    CERN Document Server

    Kindratenko, Volodymyr

    2014-01-01

    This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including solution of linear equations and FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to

  4. A Call for Computational Thinking in Undergraduate Psychology

    Science.gov (United States)

    Anderson, Nicole D.

    2016-01-01

    Computational thinking is an approach to problem solving that is typically employed by computer programmers. The advantage of this approach is that solutions can be generated through algorithms that can be implemented as computer code. Although computational thinking has historically been a skill that is exclusively taught within computer science,…

  5. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  6. Quantum Computation and Quantum Spin Dynamics

    NARCIS (Netherlands)

    Raedt, Hans De; Michielsen, Kristel; Hams, Anthony; Miyashita, Seiji; Saito, Keiji

    2001-01-01

    We analyze the stability of quantum computations on physically realizable quantum computers by simulating quantum spin models representing quantum computer hardware. Examples of logically identical implementations of the controlled-NOT operation are used to demonstrate that the results of a quantum

  7. CATTS: Computer-Aided Training in Troubleshooting.

    Science.gov (United States)

    Landa, Suzanne

    The Rand Corporation's Programmer-Oriented Graphics Operation (POGO) was used in the design, implementation and testing of a computer-assisted instruction course to train airmen in malfunction diagnosis--CATTS (Computer Aided Training in Troubleshooting). The design of the course attempted to reduce the problems of computer graphics for both…

  8. Computer methods in general relativity: algebraic computing

    CERN Document Server

    Araujo, M E; Skea, J E F; Koutras, A; Krasinski, A; Hobill, D; McLenaghan, R G; Christensen, S M

    1993-01-01

    Karlhede & MacCallum [1] gave a procedure for determining the Lie algebra of the isometry group of an arbitrary pseudo-Riemannian manifold, which they intended to im- plement using the symbolic manipulation package SHEEP but never did. We have recently finished making this procedure explicit by giving an algorithm suitable for implemen- tation on a computer [2]. Specifically, we have written an algorithm for determining the isometry group of a spacetime (in four dimensions), and partially implemented this algorithm using the symbolic manipulation package CLASSI, which is an extension of SHEEP.

  9. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  10. Implementation of computer-aided design (CAD) for the location of power transmission line structures; Implementacion del diseno asistido por computadora para la localizacion de estructuras de lineas de transmision

    Energy Technology Data Exchange (ETDEWEB)

    Vega Ortiz, Miguel; Gutierrez Arriola, Gustavo [Instituto de Investigaciones Electricas, Temixco, Morelos (Mexico)]

    2000-07-01

    For the CAD (computer-aided design) tools offered on the market to be really useful, they must combine the criteria and experience of expert designers with the specifications and practices established in the electric utility. This ranges from entering the available input data and design criteria into the information system to obtaining the required output information. The present work summarizes the methodology developed by the Instituto de Investigaciones Electricas (IIE) for the design of transmission lines, which integrates the requirements of the Comision Federal de Electricidad (CFE) for the design of its transmission lines into an advanced computer tool and results in better designs. Some of the most important aspects are the reduction of the working time required, the cost of the designed line, its reliability, the flexibility in information handling and the quality of presentation. [Spanish] Para que las herramientas de diseno asistido por computadora que se ofrecen en el mercado sean realmente utiles deben conjuntar los criterios y experiencias de los disenadores expertos con las especificaciones y practicas establecidas en la empresa electrica. Esto incluye desde la introduccion al sistema de la informacion de datos de entrada de la que se dispone y de sus criterios de diseno hasta la obtencion de la informacion de salida que se requiere. En el presente trabajo se resume la metodologia desarrollada por el Instituto de Investigaciones Electricas (IIE) en el diseno de lineas de transmision, que integra los requerimientos de la Comision Federal de Electricidad (CFE) en el diseno de sus lineas de transmision en una herramienta de computo avanzada y que redunda en la obtencion de mejores disenos. Algunos de los aspectos mas importantes son la reduccion del tiempo de trabajo empleado, el costo de la linea disenada, su confiabilidad, la flexibilidad en el manejo de informacion y la calidad de presentacion.

  11. Quantum Computing

    Science.gov (United States)

    Steffen, Matthias

    Solving computational problems requires resources such as time, memory, and space. In the classical model of computation, computational complexity theory has categorized problems according to how difficult it is to solve them as the problem size increases. Remarkably, a quantum computer could solve certain problems using fundamentally fewer resources compared to a conventional computer, and therefore has garnered significant attention. Yet because of the delicate nature of entangled quantum states, the construction of a quantum computer poses an enormous challenge for experimental and theoretical scientists across multi-disciplinary areas including physics, engineering, materials science, and mathematics. While the field of quantum computing still has a long way to grow before reaching full maturity, state-of-the-art experiments on the order of 10 qubits are beginning to reach a fascinating stage at which they can no longer be emulated using even the fastest supercomputer. This raises the hope that small quantum computer demonstrations could be capable of approximately simulating or solving problems that also have practical applications. In this talk I will review the concepts behind quantum computing, and focus on the status of superconducting qubits which includes steps towards quantum error correction and quantum simulations.

  12. Biological computation

    CERN Document Server

    Lamm, Ehud

    2011-01-01

    Introduction and Biological Background; Biological Computation; The Influence of Biology on Mathematics-Historical Examples; Biological Introduction; Models and Simulations; Cellular Automata; Biological Background; The Game of Life; General Definition of Cellular Automata; One-Dimensional Automata; Examples of Cellular Automata; Comparison with a Continuous Mathematical Model; Computational Universality; Self-Replication; Pseudo Code; Evolutionary Computation; Evolutionary Biology and Evolutionary Computation; Genetic Algorithms; Example Applications; Analysis of the Behavior of Genetic Algorithms; Lamarckian Evolution; Genet

  13. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    to understand the computer as a material like any other material we would use for design, like wood, aluminum, or plastic. That as soon as the computer forms a composition with other materials it becomes just as approachable and inspiring as other smart materials. I present a series of investigations of what...... Computational Composite, and Telltale). Through the investigations, I show how the computer can be understood as a material and how it partakes in a new strand of materials whose expressions come to be in context. I uncover some of their essential material properties and potential expressions. I develop a way...

  14. Computing multidimensional persistence

    Directory of Open Access Journals (Sweden)

    Gunnar Carlsson

    2010-11-01

    Full Text Available The theory of multidimensional persistence captures the topology of a multifiltration - a multiparameter family of increasing spaces.  Multifiltrations arise naturally in the topological analysis of scientific data.  In this paper, we give a polynomial time algorithm for computing multidimensional persistence.  We recast this computation as a problem within computational commutative algebra and utilize algorithms from this area to solve it.  While the resulting problem is EXPSPACE-complete and the standard algorithms take doubly-exponential time, we exploit the structure inherent within multifiltrations to yield practical algorithms.  We implement all algorithms in the paper and provide statistical experiments to demonstrate their feasibility.
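
    For orientation, the sketch below computes ordinary one-parameter persistence for a small point cloud using the GUDHI library; GUDHI and the example data are assumptions made for illustration, and the multiparameter algorithms described in the paper are not part of standard libraries.

```python
# One-parameter persistent homology of a small point cloud with GUDHI.
# This only illustrates the ordinary (single-filtration) case as a point of
# reference; it is not an implementation of the multidimensional algorithms
# in the paper, and the GUDHI dependency is an assumption of this sketch.
import math
import gudhi

# Points sampled on a circle: expect one long-lived 1-dimensional class.
points = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
          for k in range(12)]

rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
diagram = st.persistence()                 # list of (dimension, (birth, death))

for dim, (birth, death) in diagram:
    print(f"H{dim}: born {birth:.2f}, dies {death:.2f}")
```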

  15. Security in Computer Applications

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    Computer security has been an increasing concern for IT professionals for a number of years, yet despite all the efforts, computer systems and networks remain highly vulnerable to attacks of different kinds. Design flaws and security bugs in the underlying software are among the main reasons for this. This lecture addresses the following question: how to create secure software? The lecture starts with a definition of computer security and an explanation of why it is so difficult to achieve. It then introduces the main security principles (like least-privilege, or defense-in-depth) and discusses security in different phases of the software development cycle. The emphasis is put on the implementation part: most common pitfalls and security bugs are listed, followed by advice on best practice for security development. The last part of the lecture covers some miscellaneous issues like the use of cryptography, rules for networking applications, and social engineering threats. This lecture was first given on Thursd...

  16. Project plan for computing

    CERN Document Server

    Harvey, J

    1998-01-01

    The LHCB Computing Project covers both on- and off-line activities. Nine sub-projects are identified, six of which correspond to specific applications, such as Reconstruction, DAQ etc., one takes charge of developing components that can be classed as of common interest to the various applications, and two which take responsibility for supporting the software development environment and computing infrastructure respectively. A Steering Group, comprising the convenors of the nine subprojects and the overall Computing Co-ordinator, is responsible for project management and planning. The planning assumes four life-cycle phases; preparation, implementation, commissioning and operation. A global planning chart showing the timescales of each phase is included. A more detailed chart for the planning of the introduction of Object Technologies is also described. Manpower requirements are given for each sub-project in terms of task description and FTEs needed. The evolution of these requirements with time is also given....

  17. The implementation of bit-parallelism for DNA sequence alignment

    Science.gov (United States)

    Setyorini; Kuspriyanto; Widyantoro, D. H.; Pancoro, A.

    2017-05-01

    Dynamic programming (DP) remains the central algorithm of biological sequence alignment, and matching-score computation is its most time-consuming step. Bit-parallelism is an approximate string matching technique that transforms the processing of DP matrix cells from single-cell units into word units (groups of cells) and computes the scores column-wise. By adopting the word-level processing that computer hardware already performs, this technique promises to reduce the time spent computing scores in the DP matrix. In this paper, we implement a bit-parallelism technique for DNA sequence alignment. Our bit-parallel implementation spends less time in the score-computation step but still needs improvement in the reconstruction step.
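
    A representative instance of bit-parallel score computation is Myers' (1999) bit-vector algorithm; the sketch below computes the unit-cost edit distance between two DNA strings and is a generic illustration of the technique, not the authors' implementation.

```python
# Myers' bit-parallel edit-distance computation: each DP column is encoded as
# positive/negative delta bit vectors and updated with a handful of word-level
# operations. Generic illustration of bit-parallelism, not the paper's code.

def bitparallel_edit_distance(pattern: str, text: str) -> int:
    m = len(pattern)
    if m == 0:
        return len(text)
    mask = (1 << m) - 1
    # Per-symbol bitmask: bit i is set if pattern[i] == symbol.
    peq = {}
    for i, c in enumerate(pattern):
        peq[c] = peq.get(c, 0) | (1 << i)
    pv, mv, score = mask, 0, m          # vertical +1 / -1 deltas, last-row score
    for c in text:
        eq = peq.get(c, 0)
        xv = eq | mv
        xh = (((eq & pv) + pv) ^ pv) | eq
        ph = mv | (~(xh | pv) & mask)   # horizontal +1 deltas
        mh = pv & xh                    # horizontal -1 deltas
        if ph & (1 << (m - 1)):
            score += 1
        elif mh & (1 << (m - 1)):
            score -= 1
        ph = ((ph << 1) | 1) & mask     # top DP row grows by 1 per text column
        mh = (mh << 1) & mask
        pv = mh | (~(xv | ph) & mask)
        mv = ph & xv
    return score

# Example: edit distance between two short DNA-style fragments.
print(bitparallel_edit_distance("GATTACA", "GCATGCU"))   # -> 4
```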

  18. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single- and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and to introduce the successful application of soft computing techniques to solving many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  19. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  20. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...

  1. Computational Deception

    NARCIS (Netherlands)

    Nijholt, Antinus; Acosta, P.S.; Cravo, P.

    2010-01-01

    In the future our daily life interactions with other people, with computers, robots and smart environments will be recorded and interpreted by computers or embedded intelligence in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behaviour, and our

  2. Computational astrophysics

    Science.gov (United States)

    Miller, Richard H.

    1987-01-01

    Astronomy is an area of applied physics in which unusually beautiful objects challenge the imagination to explain observed phenomena in terms of known laws of physics. It is a field that has stimulated the development of physical laws and of mathematical and computational methods. Current computational applications are discussed in terms of stellar and galactic evolution, galactic dynamics, and particle motions.

  3. Garbageless reversible implementation of integer linear transformations

    DEFF Research Database (Denmark)

    Burignat, Stéphane; Vermeirsch, Kenneth; De Vos, Alexis

    2013-01-01

    Discrete linear transformations are important tools in information processing. Many such transforms are injective and therefore prime candidates for a physically reversible implementation into hardware. We present here reversible digital implementations of different integer transformations on four...... inputs. The resulting reversible circuit is able to perform both the forward transform and the inverse transform. Which of the two computations that actually is performed, simply depends on the orientation of the circuit when it is inserted in a computer board (if one takes care to provide...

  4. Computational Streetscapes

    Directory of Open Access Journals (Sweden)

    Paul M. Torrens

    2016-09-01

    Full Text Available Streetscapes have presented a long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber

  5. Una implementación computacional de un modelo de atención visual Bottom-up aplicado a escenas naturales/A Computational Implementation of a Bottom-up Visual Attention Model Applied to Natural Scenes

    Directory of Open Access Journals (Sweden)

    Juan F. Ramírez Villegas

    2011-12-01

    Full Text Available El modelo de atención visual bottom-up propuesto por Itti et al., 2000 [1], ha sido un modelo popular en tanto exhibe cierta evidencia neurobiológica de la visión en primates. Este trabajo complementa el modelo computacional de este fenómeno desde la dinámica realista de una red neuronal. Asimismo, esta aproximación se basa en la existencia de mapas topográficos que representan la prominencia de los objetos del campo visual para la formación de una representación general (mapa de prominencia, esta representación es la entrada de una red neuronal dinámica con interacciones locales y globales de colaboración y competencia que convergen sobre las principales particularidades (objetos de la escena. The bottom-up visual attention model proposed by Itti et al. 2000 [1] has been a popular model since it exhibits certain neurobiological evidence of primate vision. This work complements the computational model of this phenomenon with the realistic dynamics of a neural network. The approach is based on the existence of topographical maps representing the saliency of the objects in the visual field, which form a general representation (the saliency map); this representation is the input for a dynamic neural network whose local and global collaborative and competitive interactions converge on the main particularities (objects) of the scene.

  6. Implementation of Hardware Accelerators on Zynq

    OpenAIRE

    Toft, Jakob Kenn; Nannarelli, Alberto

    2016-01-01

    In recent years it has become obvious that the performance of general purpose processors has trouble meeting the requirements of today's high performance computing applications. This is partly due to the relatively high power consumption, compared to the performance, of general purpose processors, which has made hardware accelerators an essential part of several datacentres and the world's fastest supercomputers. In this work, two different hardware accelerators were implemented o...

  7. Quantum Computation Beyond the Circuit Model

    OpenAIRE

    Jordan, Stephen P.

    2008-01-01

    The quantum circuit model is the most widely used model of quantum computation. It provides both a framework for formulating quantum algorithms and an architecture for the physical construction of quantum computers. However, several other models of quantum computation exist which provide useful alternative frameworks for both discovering new quantum algorithms and devising new physical implementations of quantum computers. In this thesis, I first present necessary background material for a ge...

  8. Implementation of a high-sensitivity micro-angiographic fluoroscope (HS-MAF) for in-vivo endovascular image guided interventions (EIGI) and region-of-interest computed tomography (ROI-CT)

    Science.gov (United States)

    Ionita, C. N.; Keleshis, C.; Patel, V.; Yadava, G.; Hoffmann, K. R.; Bednarek, D. R.; Jain, A.; Rudin, S.

    2008-03-01

    New advances in catheter technology and remote actuation for minimally invasive procedures are continuously increasing the demand for better x-ray imaging technology. The new x-ray high-sensitivity Micro-Angiographic Fluoroscope (HS-MAF) detector offers high resolution and real-time image-guided capabilities which are unique when compared with commercially available detectors. This detector consists of a 300 μm CsI input phosphor coupled to a dual stage GEN2 micro-channel plate light image intensifier (LII), followed by a minifying fiber-optic taper coupled to a CCD chip. The HS-MAF detector image array is 1024X1024 pixels, with a 12 bit depth capable of imaging at 30 frames per second. The detector has a round field of view with 4 cm diameter and 35 micron pixels. The LII has a large variable gain which allows usage of the detector at very low exposures characteristic of fluoroscopic ranges while maintaining very good image quality. The custom acquisition program allows real-time image display and data storage. We designed a set of in-vivo experimental interventions in which placement of specially designed endovascular stents was evaluated with the new detector and with a standard x-ray image intensifier (XII). Capabilities such as fluoroscopy, angiography and ROI-CT reconstruction using rotational angiography data were implemented and verified. The images obtained during interventions under radiographic control with the HS-MAF detector were superior to those with the XII. In general, the device feature markers, the device structures, and the vessel geometry were better identified with the new detector. High-resolution detectors such as the HS-MAF can vastly improve the accuracy of localization and tracking of devices such as stents or catheters.

  9. Mobile computing initiatives within pharmacy education.

    Science.gov (United States)

    Cain, Jeff; Bird, Eleanora R; Jones, Mikael

    2008-08-15

    To identify mobile computing initiatives within pharmacy education, including how devices are obtained, supported, and utilized within the curriculum. An 18-item questionnaire was developed and delivered to academic affairs deans (or closest equivalent) of 98 colleges and schools of pharmacy. Fifty-four colleges and schools completed the questionnaire for a 55% completion rate. Thirteen of those schools have implemented mobile computing requirements for students. Twenty schools reported they were likely to formally consider implementing a mobile computing initiative within 5 years. Numerous models of mobile computing initiatives exist in terms of device obtainment, technical support, infrastructure, and utilization within the curriculum. Responders identified flexibility in teaching and learning as the most positive aspect of the initiatives and computer-aided distraction as the most negative. Numerous factors should be taken into consideration when deciding if and how a mobile computing requirement should be implemented.

  10. Chromatin computation.

    Directory of Open Access Journals (Sweden)

    Barbara Bryant

    Full Text Available In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this "chromatin computer" to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal--and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines.
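
    To make the model concrete, the toy sketch below treats nucleosome marks as symbols on a one-dimensional string and a chromatin-modifying complex as a read-write rule over adjacent positions; the specific marks and rules are invented for illustration and are not the Hamiltonian-path construction from the paper.

```python
# Toy simulation of the chromatin-computer idea: marks on a 1-D string of
# nucleosomes are rewritten by rules that read a window of adjacent positions.
# Purely illustrative; the marks and rules below are invented for this sketch.

# A rule maps a pattern of adjacent marks to a replacement of the same length,
# mimicking a complex that recognizes one modification state and writes another.
RULES = {
    ("H3K4me3", "unmarked"): ("H3K4me3", "H3K4me3"),   # spreading of an active mark
    ("H3K9me3", "H3K4me3"): ("H3K9me3", "unmarked"),   # a repressive mark erases it
}

def step(nucleosomes):
    """Apply the first matching rule left-to-right once; return the new state."""
    state = list(nucleosomes)
    for i in range(len(state) - 1):
        window = (state[i], state[i + 1])
        if window in RULES:
            state[i], state[i + 1] = RULES[window]
            break
    return state

state = ["H3K4me3", "unmarked", "unmarked", "H3K9me3"]
for _ in range(10):
    print(state)
    new_state = step(state)
    if new_state == state:      # no rule applies: the computation has halted
        break
    state = new_state
```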

  11. Compute Canada: Advancing Computational Research

    Science.gov (United States)

    Baldwin, Susan

    2012-02-01

    High Performance Computing (HPC) is redefining the way that research is done. Compute Canada's HPC infrastructure provides a national platform that enables Canadian researchers to compete on an international scale, attracts top talent to Canadian universities and broadens the scope of research.

  12. Computational physics

    CERN Document Server

    Newman, Mark

    2013-01-01

    A complete introduction to the field of computational physics, with examples and exercises in the Python programming language. Computers play a central role in virtually every major physics discovery today, from astrophysics and particle physics to biophysics and condensed matter. This book explains the fundamentals of computational physics and describes in simple terms the techniques that every physicist should know, such as finite difference methods, numerical quadrature, and the fast Fourier transform. The book offers a complete introduction to the topic at the undergraduate level, and is also suitable for the advanced student or researcher who wants to learn the foundational elements of this important field.

  13. Computer interfacing

    CERN Document Server

    Dixey, Graham

    1994-01-01

    This book explains how computers interact with the world around them and therefore how to make them a useful tool. Topics covered include descriptions of all the components that make up a computer, principles of data exchange, interaction with peripherals, serial communication, input devices, recording methods, computer-controlled motors, and printers.In an informative and straightforward manner, Graham Dixey describes how to turn what might seem an incomprehensible 'black box' PC into a powerful and enjoyable tool that can help you in all areas of your work and leisure. With plenty of handy

  14. Computing methods

    CERN Document Server

    Berezin, I S

    1965-01-01

    Computing Methods, Volume 2 is a five-chapter text that presents numerical methods for solving sets of several mathematical equations. This volume covers the computation of sets of linear algebraic equations, high-degree equations and transcendental equations, numerical methods of finding eigenvalues, and approximate methods of solving ordinary differential equations, partial differential equations and integral equations. The book is intended as a text-book for students in mechanical-mathematical and physics-mathematical faculties specializing in computer mathematics and persons interested in the

  15. Computational Viscoelasticity

    CERN Document Server

    Marques, Severino P C

    2012-01-01

    This text is a guide on how to solve problems in which viscoelasticity is present using existing commercial computational codes. The book gives information on the codes’ structure and use, data preparation and output interpretation and verification. The first part of the book introduces the reader to the subject and provides the models, equations and notation to be used in the computational applications. The second part shows the most important computational techniques, finite element formulation and boundary element formulation, and presents the solutions of viscoelastic problems with Abaqus.

  16. Computer Registration Becoming Mandatory

    CERN Multimedia

    2003-01-01

    Following the decision by the CERN Management Board (see Weekly Bulletin 38/2003), registration of all computers connected to CERN's network will be enforced and only registered computers will be allowed network access. The implementation has started with the IT buildings, continues with building 40 and the Prevessin site (as of Tuesday 4th November 2003), and will cover the whole of CERN before the end of this year. We therefore recommend strongly that you register all your computers in CERN's network database including all network access cards (Ethernet AND wireless) as soon as possible without waiting for the access restriction to take force. This will allow you accessing the network without interruption and help IT service providers to contact you in case of problems (e.g. security problems, viruses, etc.) Users WITH a CERN computing account register at: http://cern.ch/register/ (CERN Intranet page) Visitors WITHOUT a CERN computing account (e.g. short term visitors) register at: http://cern.ch/regis...

  17. Computer Registration Becoming Mandatory

    CERN Multimedia

    2003-01-01

    Following the decision by the CERN Management Board (see Weekly Bulletin 38/2003), registration of all computers connected to CERN's network will be enforced and only registered computers will be allowed network access. The implementation has started with the IT buildings, continues with building 40 and the Prevessin site (as of Tuesday 4th November 2003), and will cover the whole of CERN before the end of this year. We therefore recommend strongly that you register all your computers in CERN's network database (Ethernet and wire-less cards) as soon as possible without waiting for the access restriction to take force. This will allow you accessing the network without interruption and help IT service providers to contact you in case of problems (security problems, viruses, etc.) • Users WITH a CERN computing account register at: http://cern.ch/register/ (CERN Intranet page) • Visitors WITHOUT a CERN computing account (e.g. short term visitors) register at: http://cern.ch/registerVisitorComp...

  18. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword; Preface; Computing Paradigms; Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading; Cloud Computing Fundamentals; Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact

  19. Technology Implementation Plan

    DEFF Research Database (Denmark)

    Jensen, Karsten Ingerslev; Schultz, Jørgen Munthe

    The Technology Implementation Plan (TIP) describes the main project results and the intended future use. The TIP is confidential.

  20. Implementing Student Information Systems

    Science.gov (United States)

    Sullivan, Laurie; Porter, Rebecca

    2006-01-01

    Implementing an enterprise resource planning system is a complex undertaking. Careful planning, management, communication, and staffing can make the difference between a successful and unsuccessful implementation. (Contains 3 tables.)

  1. Computational Literacy

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Robering, Klaus

    2016-01-01

    In recent years, there has been a growing interest in and recognition of the importance of Computational Literacy, a skill generally considered to be necessary for success in the 21st century. While much research has concentrated on requirements, tools, and teaching methodologies for the acquisition of Computational Literacy at basic educational levels, focus on higher levels of education has been much less prominent. The present paper considers the case of courses for higher education programs within the Humanities. A model is proposed which conceives of Computational Literacy as a layered...

  2. Computing Religion

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Braxton, Donald M.; Upal, Afzal

    2012-01-01

    The computational approach has become an invaluable tool in many fields that are directly relevant to research in religious phenomena. Yet the use of computational tools is almost absent in the study of religion. Given that religion is a cluster of interrelated phenomena and that research...... concerning these phenomena should strive for multilevel analysis, this article argues that the computational approach offers new methodological and theoretical opportunities to the study of religion. We argue that the computational approach offers 1.) an intermediary step between any theoretical construct...... and its targeted empirical space and 2.) a new kind of data which allows the researcher to observe abstract constructs, estimate likely outcomes, and optimize empirical designs. Because sophisticated multilevel research is a collaborative project we also seek to introduce to scholars of religion some...

  3. Ecodesign Implementation and LCA

    DEFF Research Database (Denmark)

    McAloone, Tim C.; Pigosso, Daniela Cristina Antelmi

    2018-01-01

    implementation into manufacturing companies. Existing methods and tools for ecodesign implementation will be described, focusing on a multifaceted approach to environmental improvement through product development. Additionally, the use of LCA in an ecodesign implementation context will be further described...... in terms of the challenges and opportunities, together with the discussion of a selection of simplified LCA tools. Finally, a seven-step approach for ecodesign implementation which has been applied by several companies will be described....

  4. COMPUTERS HAZARDS

    Directory of Open Access Journals (Sweden)

    Andrzej Augustynek

    2007-01-01

    Full Text Available In June 2006, over 12.6 million Polish users of the Web were registered. On the average, each of them spent 21 hours and 37 minutes monthly browsing the Web. That is why the psychological aspects of computer utilization have become an urgent research subject. The results of research into the development of the Polish information society, carried out at AGH University of Science and Technology under the leadership of Leslaw H. Haber from 2000 until the present time, indicate the emergence of dynamic changes in the ways computers are used and in their circumstances. One of the interesting regularities has been the inversely proportional relation between the level of computer skills and the frequency of Web utilization. It has been found that in 2005, compared to 2000, the following changes occurred: a significant drop in the number of students who never used computers and the Web; a remarkable increase in computer knowledge and skills (particularly pronounced in the case of first-year students); a decreasing gap in computer skills between students of the first and the third year and between male and female students; and a declining popularity of computer games. It has also been demonstrated that the hazard of computer screen addiction was highest in the case of unemployed youth outside the school system. As much as 12% of this group of young people were addicted to computers. The large amount of leisure time that these youths enjoyed induced them to excessive utilization of the Web. Polish housewives are another population group at risk of addiction to the Web. The duration of long Web chats carried out by younger and younger youths has been another matter of concern. Since the phenomenon of computer addiction is relatively new, no specific therapy methods have been developed. In general, the therapy applied to computer addiction syndrome is similar to the techniques applied in the cases of alcohol or gambling addiction. Individual and group

  5. Computational sustainability

    CERN Document Server

    Kersting, Kristian; Morik, Katharina

    2016-01-01

    The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.

  6. Implementation of Axiomatic Language

    OpenAIRE

    Wilson, Walter W.

    2011-01-01

    This report summarizes a PhD research effort to implement a type of logic programming language called "axiomatic language". Axiomatic language is intended as a specification language, so its implementation involves the transformation of specifications to efficient algorithms. The language is described and the implementation task is discussed.

  7. Environmental Protection Implementation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Brekke, D.D.

    1994-01-01

    This Environmental Protection Implementation Plan is intended to ensure that the environmental program objectives of Department of Energy Order 5400.1 are achieved at SNL/California. The Environmental Protection Implementation Plan serves as an aid to management and staff to implement new environmental programs in a timely manner.

  8. Computational oncology.

    Science.gov (United States)

    Lefor, Alan T

    2011-08-01

    Oncology research has traditionally been conducted using techniques from the biological sciences. The new field of computational oncology has forged a new relationship between the physical sciences and oncology to further advance research. By applying physics and mathematics to oncologic problems, new insights will emerge into the pathogenesis and treatment of malignancies. One major area of investigation in computational oncology centers around the acquisition and analysis of data, using improved computing hardware and software. Large databases of cellular pathways are being analyzed to understand the interrelationship among complex biological processes. Computer-aided detection is being applied to the analysis of routine imaging data including mammography and chest imaging to improve the accuracy and detection rate for population screening. The second major area of investigation uses computers to construct sophisticated mathematical models of individual cancer cells as well as larger systems using partial differential equations. These models are further refined with clinically available information to more accurately reflect living systems. One of the major obstacles in the partnership between physical scientists and the oncology community is communications. Standard ways to convey information must be developed. Future progress in computational oncology will depend on close collaboration between clinicians and investigators to further the understanding of cancer using these new approaches.

  9. Computer viruses

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, F.B.

    1986-01-01

    This thesis investigates a recently discovered vulnerability in computer systems which opens the possibility that a single individual with an average user's knowledge could cause widespread damage to information residing in computer networks. This vulnerability is due to a transitive integrity corrupting mechanism called a computer virus which causes corrupted information to spread from program to program. Experiments have shown that a virus can spread at an alarmingly rapid rate from user to user, from system to system, and from network to network, even when the best available security techniques are properly used. Formal definitions of self-replication, evolution, viruses, and protection mechanisms are used to prove that any system that allows sharing, general functionality, and transitivity of information flow cannot completely prevent viral attack. Computational aspects of viruses are examined, and several undecidable problems are shown. It is demonstrated that a virus may evolve so as to generate any computable sequence. Protection mechanisms are explored, and the design of computer networks that prevent both illicit modification and dissemination of information is given. Administration and protection of information networks based on partial orderings are examined, and provably correct automated administrative assistance is introduced.

  10. Chromatin Computation

    Science.gov (United States)

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109

  11. International Conference on Computer, Communication and Computational Sciences

    CERN Document Server

    Mishra, Krishn; Tiwari, Shailesh; Singh, Vivek

    2017-01-01

    The exchange of information and innovative ideas is necessary to accelerate the development of technology. With the advent of technology, intelligent and soft computing techniques came into existence with a wide scope of implementation in engineering sciences. Keeping this ideology in preference, this book includes the insights that reflect the ‘Advances in Computer and Computational Sciences’ from upcoming researchers and leading academicians across the globe. It contains high-quality peer-reviewed papers of the ‘International Conference on Computer, Communication and Computational Sciences (ICCCCS 2016), held during 12-13 August, 2016 in Ajmer, India. These papers are arranged in the form of chapters. The content of the book is divided into two volumes that cover a variety of topics such as intelligent hardware and software design, advanced communications, power and energy optimization, intelligent techniques used in internet of things, intelligent image processing, advanced software engineering, evolutionary and ...

  12. From Greeks to Today: Cipher Trees and Computer Cryptography.

    Science.gov (United States)

    Grady, M. Tim; Brumbaugh, Doug

    1988-01-01

    Explores the use of computers for teaching mathematical models of transposition ciphers. Illustrates the ideas, includes activities and extensions, provides a mathematical model and includes computer programs to implement these topics. (MVL)
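
    As an illustration of the kind of model the article discusses, a columnar transposition cipher can be written in a few lines; the sketch below is illustrative and not taken from the article's programs.

```python
# Columnar transposition cipher: write the plaintext row by row into a grid
# with `key` columns, then read it out column by column.
# Illustrative sketch; not the programs published with the article.

def encrypt(plaintext, key, pad="X"):
    text = plaintext.replace(" ", "").upper()
    while len(text) % key:                 # pad so the grid is rectangular
        text += pad
    rows = [text[i:i + key] for i in range(0, len(text), key)]
    return "".join(row[c] for c in range(key) for row in rows)

def decrypt(ciphertext, key):
    nrows = len(ciphertext) // key
    cols = [ciphertext[i:i + nrows] for i in range(0, len(ciphertext), nrows)]
    return "".join(cols[c][r] for r in range(nrows) for c in range(key))

secret = encrypt("TRANSPOSITION CIPHERS ARE OLD", key=5)
print(secret)
print(decrypt(secret, key=5))   # recovers the (padded) plaintext
```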

  13. Dimensions Of Security Threats In Cloud Computing: A Case Study

    National Research Council Canada - National Science Library

    Mathew Nicho; Mahmoud Hendy

    2013-01-01

      Even though cloud computing, as a model, is not new, organizations are increasingly implementing it because of its large-scale computation and data storage, flexible scalability, relative reliability...

  14. Optics in computing: introduction to the feature issue.

    Science.gov (United States)

    Drabik, T; Thienpont, H; Ishikawa, M

    2000-02-10

    This issue of Applied Optics features 21 papers that describe the implementation of optics in computer systems and applications. This feature is the eighth in a series on the application of optics in the field of computing.

  15. Implementing XML Schema Naming and Design Rules

    Energy Technology Data Exchange (ETDEWEB)

    Lubell, Joshua [National Institute of Standards and Technology (NIST)]; Kulvatunyou, Boonserm [ORNL]; Morris, Katherine [National Institute of Standards and Technology (NIST)]; Harvey, Betty [Electronic Commerce Connection, Inc.]

    2006-08-01

    We are building a methodology and tool kit for encoding XML schema Naming and Design Rules (NDRs) in a computer-interpretable fashion, enabling automated rule enforcement and improving schema quality. Through our experience implementing rules from various NDR specifications, we discuss some issues and offer practical guidance to organizations grappling with NDR development.
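
    As a minimal illustration of a computer-interpretable naming rule, the sketch below checks one hypothetical NDR (global element names must be UpperCamelCase) against an XML schema with Python's standard library; the rule, file name, and code are assumptions for illustration rather than part of the NIST methodology.

```python
# Check one naming-and-design rule against an XML schema: every globally
# declared element name must be UpperCamelCase. Hypothetical illustration;
# the NDR encodings discussed in the record are more general than this.
import re
import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"
UPPER_CAMEL = re.compile(r"^(?:[A-Z][a-z0-9]*)+$")

def check_upper_camel_elements(xsd_path):
    root = ET.parse(xsd_path).getroot()
    violations = []
    for element in root.findall(f"{XSD_NS}element"):   # global declarations only
        name = element.get("name")
        if name and not UPPER_CAMEL.match(name):
            violations.append(name)
    return violations

# Hypothetical usage:
# for bad in check_upper_camel_elements("purchase_order.xsd"):
#     print("NDR violation: element name is not UpperCamelCase:", bad)
```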

  16. Implementation of a Computerized Maintenance Management System

    Science.gov (United States)

    Shen, Yong-Hong; Askari, Bruce

    1994-01-01

    A primer Computerized Maintenance Management System (CMMS) has been established for NASA Ames pressure component certification program. The CMMS takes full advantage of the latest computer technology and SQL relational database to perform periodic services for vital pressure components. The Ames certification program is briefly described and the aspects of the CMMS implementation are discussed as they are related to the certification objectives.

  17. Designing, Implementing, and Evaluating Secure Web Browsers

    Science.gov (United States)

    Grier, Christopher L.

    2009-01-01

    Web browsers are plagued with vulnerabilities, providing hackers with easy access to computer systems using browser-based attacks. Efforts that retrofit existing browsers have had limited success since modern browsers are not designed to withstand attack. To enable more secure web browsing, we design and implement new web browsers from the ground…

  18. Standard of materials specifications, their implementation and ...

    African Journals Online (AJOL)

    Standard of materials specifications, their implementation and enforcement on building construction projects in Nigeria.

  19. Computational creativity

    Directory of Open Access Journals (Sweden)

    López de Mántaras Badia, Ramon

    2013-12-01

    Full Text Available New technologies, and in particular artificial intelligence, are drastically changing the nature of creative processes. Computers are playing very significant roles in creative activities such as music, architecture, fine arts, and science. Indeed, the computer is already a canvas, a brush, a musical instrument, and so on. However, we believe that we must aim at more ambitious relations between computers and creativity. Rather than just seeing the computer as a tool to help human creators, we could see it as a creative entity in its own right. This view has triggered a new subfield of Artificial Intelligence called Computational Creativity. This article addresses the question of the possibility of achieving computational creativity through some examples of computer programs capable of replicating some aspects of creative behavior in the fields of music and science.Las nuevas tecnologías y en particular la Inteligencia Artificial están cambiando de forma importante la naturaleza del proceso creativo. Los ordenadores están jugando un papel muy significativo en actividades artísticas tales como la música, la arquitectura, las bellas artes y la ciencia. Efectivamente, el ordenador ya es el lienzo, el pincel, el instrumento musical, etc. Sin embargo creemos que debemos aspirar a relaciones más ambiciosas entre los ordenadores y la creatividad. En lugar de verlos solamente como herramientas de ayuda a la creación, los ordenadores podrían ser considerados agentes creativos. Este punto de vista ha dado lugar a un nuevo subcampo de la Inteligencia Artificial denominado Creatividad Computacional. En este artículo abordamos la cuestión de la posibilidad de alcanzar dicha creatividad computacional mediante algunos ejemplos de programas de ordenador capaces de replicar algunos aspectos relacionados con el comportamiento creativo en los ámbitos de la música y la ciencia.

  20. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

    The main goal of this project is to investigate, develop, and implement algorithms for numerical linear algebra on parallel computers in order to acquire expertise in methods for parallel computations. An important motivation for analyzing and investigating the potential for parallelism in these...... Several areas in the numerical linear algebra field are investigated and they illustrate the problems that arise as well as the techniques that are related to the use of massively parallel computers: 1. Study of Strassen's matrix-matrix multiplication on the Connection Machine model CM-200. What...... performance can we expect to achieve? Why? 2. Solving systems of linear equations using a Strassen-type matrix-inversion algorithm. A good way to solve systems of linear equations on massively parallel computers? 3. Aspects of computing the singular value decomposition on the Connection Machine CM-5/CM-5E...
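
    The first study item, Strassen's matrix-matrix multiplication, is easy to state in serial form; the recursive sketch below (for square matrices whose size is a power of two) shows the seven-product scheme that the thesis maps onto parallel hardware, and is an illustration rather than the thesis code.

```python
# Serial sketch of Strassen's matrix multiplication for n x n matrices with
# n a power of two. Illustrates the seven-product recursion that the thesis
# parallelizes on the Connection Machine; not the thesis implementation.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                      # fall back to ordinary multiplication
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```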