WorldWideScience

Sample records for performance embedded computing

  1. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company, and DARPA that ran for three years, from Apr 1995 to Apr 1998...

  2. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  3. Implementation of an embedded computer

    OpenAIRE

    Pikl, Bojan

    2011-01-01

    The goal of this thesis is to describe the production of an embedded computer. The thesis describes the development and production of an embedded computer for the medical diode laser DL30, which is being developed at Robomed d.o.o. The first part of the thesis describes the choice of hardware devices. I mostly describe the technologies that one can buy on the market. Moreover, for every part of the computer installed and developed, there is an argument for why we selected that exact part. The second part ...

  4. Proceedings of the High Performance Embedded Computing Workshop (HPEC 2006) (10th). Held in Lexington, Massachusetts on September 19-21, 2006 (CD-ROM)

    National Research Council Canada - National Science Library

    Kepner, Jeremy

    2007-01-01

    ...: 1 CD-ROM; 4 3/4 in.; 78.3 MB. ABSTRACT: The High-Performance Embedded Computing (HPEC) technical committee announced the tenth annual HPEC Workshop held in September 2006 at MIT Lincoln Laboratory in Lexington, MA...

  5. Air Force Science & Technology Issues & Opportunities Regarding High Performance Embedded Computing

    Science.gov (United States)

    2009-09-23

    Excerpts from the briefing slides: examples of the price-performance advantage include large-scale simulations of neuromorphic computing models and GOTCHA radar video SAR for wide-area persistent surveillance; a neuromorphic example shows robust recognition of occluded text; Gotcha SAR PCID image formation; and a notional architecture with 16 cores per chip, 50 chips per stack, and 10 x 10 stacks per board built from embedded DRAM (EDRAM) and FPGA devices.

  6. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. · Enables readers to address a variety of security threats to embedded hardware and software; · Describes design of secure wireless sensor networks, to address secure authen...

  7. Very High-Performance Embedded Computing Will Allow Ambitious Space Science Investigation

    National Research Council Canada - National Science Library

    Pignol, Michel

    2005-01-01

    .... developed on radiation tolerant technologies. Unfortunately, the microprocessors available today on such technologies offer only the computing throughput that was available on the commercial market about 10 years ago...

  8. Circuit-Switched Memory Access in Photonic Interconnection Networks for High-Performance Embedded Computing

    Science.gov (United States)

    2010-07-22

    Only reference fragments and report documentation page boilerplate were extracted for this record, including the citations Memory Systems: Cache, DRAM, Disk (Morgan Kaufmann, 2007) and A. Joshi, C. Batten, Y.-J. Kwon, S. Beamer, I. Shamim, K. Asanovic, and V...

  9. Modern Embedded Computing Designing Connected, Pervasive, Media-Rich Systems

    CERN Document Server

    Barry, Peter

    2012-01-01

    Modern embedded systems are used for connected, media-rich, and highly integrated handheld devices such as mobile phones, digital cameras, and MP3 players. All of these embedded systems require networking, graphic user interfaces, and integration with PCs, as opposed to traditional embedded processors that can perform only limited functions for industrial applications. While most books focus on these controllers, Modern Embedded Computing provides a thorough understanding of the platform architecture of modern embedded computing systems that drive mobile devices. The book offers a comprehen

  10. Embedding Moodle into Ubiquitous Computing Environments

    NARCIS (Netherlands)

    Glahn, Christian; Specht, Marcus

    2010-01-01

    Glahn, C., & Specht, M. (2010). Embedding Moodle into Ubiquitous Computing Environments. In M. Montebello, et al. (Eds.), 9th World Conference on Mobile and Contextual Learning (MLearn2010) (pp. 100-107). October, 19-22, 2010, Valletta, Malta.

  11. Embedded, everywhere: a research agenda for networked systems of embedded computers

    National Research Council Canada - National Science Library

    Committee on Networked Systems of Embedded Computers; National Research Council Staff; Division on Engineering and Physical Sciences; Computer Science and Telecommunications Board; National Academy of Sciences

    2001-01-01

    .... Embedded, Everywhere explores the potential of networked systems of embedded computers and the research challenges arising from embedding computation and communications technology into a wide variety of applications...

  12. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  13. Tools for Embedded Computing Systems Software

    Science.gov (United States)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of each talk and its key figures, together with the chairmen's summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  14. Embedded computer systems for control applications in EBR-II

    International Nuclear Information System (INIS)

    Carlson, R.B.; Start, S.E.

    1993-01-01

    The purpose of this paper is to describe the embedded computer systems approach taken at Experimental Breeder Reactor II (EBR-II) for non-safety related systems. The hardware and software structures for typical embedded systems are presented. The embedded systems development process is described. Three examples are given which illustrate typical embedded computer applications in EBR-II.

  15. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
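
    As a rough software illustration of the multi-scale Laplacian of Gaussian edge detection mentioned in this abstract (the scales and threshold below are assumed for the example, not taken from the camera's FPGA implementation), a minimal sketch:

```python
# Minimal software sketch of multi-scale Laplacian-of-Gaussian (LoG) edge
# detection; sigmas and threshold are illustrative, not the paper's values.
import numpy as np
from scipy import ndimage

def log_edges(image, sigmas=(1.0, 2.0, 4.0), threshold=0.01):
    """Boolean edge map: zero-crossings of the LoG response at any scale."""
    edges = np.zeros(image.shape, dtype=bool)
    for sigma in sigmas:
        response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        # Zero-crossings: sign changes between horizontally or vertically
        # adjacent pixels, kept only where the response is non-negligible.
        zc = np.zeros_like(edges)
        zc[:, :-1] |= (response[:, :-1] * response[:, 1:]) < 0
        zc[:-1, :] |= (response[:-1, :] * response[1:, :]) < 0
        edges |= zc & (np.abs(response) > threshold)
    return edges
```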

  16. Verification and Performance Analysis for Embedded Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2009-01-01

    This talk provides a thorough tutorial of the UPPAAL tool suite for modeling, simulation, verification, optimal scheduling, synthesis, testing and performance analysis of embedded and real-time systems.

  17. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, and then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
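
    The abstract's approach of injecting native keyboard and mouse events over USB HID can be sketched with the Linux USB gadget interface; the following is only a hedged illustration (device path and key code assumed), not the authors' implementation:

```python
# Illustrative sketch: send a single key press/release as an 8-byte USB HID
# boot-keyboard report through a Linux HID gadget device. Assumes the gadget
# (/dev/hidg0) has already been configured; not the paper's actual software.
import time

KEY_A = 0x04  # HID usage ID for the letter 'a'

def send_key(usage_id=KEY_A, hidg_path="/dev/hidg0"):
    press = bytes([0x00, 0x00, usage_id, 0x00, 0x00, 0x00, 0x00, 0x00])
    release = bytes(8)  # all zeros = no keys pressed
    with open(hidg_path, "wb", buffering=0) as hidg:
        hidg.write(press)
        time.sleep(0.01)
        hidg.write(release)
```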

  18. Advanced Technologies, Embedded and Multimedia for Human-Centric Computing

    CERN Document Server

    Chao, Han-Chieh; Deng, Der-Jiunn; Park, James; HumanCom and EMC 2013

    2014-01-01

    The themes of HumanCom and EMC are focused on the various aspects of human-centric computing for advances in computer science and its applications, and embedded and multimedia computing, and provide an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. The theme of EMC (Advances in Embedded and Multimedia Computing) is focused on the various aspects of embedded systems, smart grid, cloud and multimedia computing, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of embedded and multimedia computing. Therefore this book includes the various theories and practical applications in human-centric computing and embedded and multimedia computing.

  19. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  20. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
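
    The integrity and authenticity guarantees described above rely on a hardware TPM; as a purely software analogy (keys, fields and record format assumed for illustration only), binding each frame to a timestamp with a keyed hash looks roughly like this:

```python
# Software-only analogy of authenticating a video frame with a timestamp.
# The paper uses a hardware Trusted Platform Module; this sketch only shows
# the general idea of a keyed integrity tag over a frame hash plus timestamp.
import hashlib, hmac, json, time

def authenticate_frame(frame_bytes: bytes, key: bytes) -> dict:
    record = {
        "frame_sha256": hashlib.sha256(frame_bytes).hexdigest(),
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record
```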

  1. Messaging Performance of FIPA Interaction Protocols in Networked Embedded Controllers

    Directory of Open Access Journals (Sweden)

    José A. Pérez García

    2008-01-01

    Agent-based technologies in production control systems could facilitate seamless reconfiguration and integration of mechatronic devices/modules into systems. Advances in embedded controllers, which are continuously improving computational capabilities, allow for software modularization and distribution of decisions. Agent platforms running on embedded controllers could hide the complexity of bootstrap and communication. Therefore, it is important to investigate the messaging performance of the agents whose main motivation is resource allocation in manufacturing systems (i.e., a conveyor system). The tests were implemented using the FIPA-compliant JADE-LEAP agent platform. Agent containers were distributed through networked embedded controllers, and agents were communicating using request and contract-net FIPA interaction protocols. The test scenarios are organized in intercontainer and intracontainer communications. The work shows the messaging performance for the different test scenarios using both interaction protocols.

  2. Messaging Performance of FIPA Interaction Protocols in Networked Embedded Controllers

    Directory of Open Access Journals (Sweden)

    Omar Jehovani López Orozco

    2007-12-01

    Agent-based technologies in production control systems could facilitate seamless reconfiguration and integration of mechatronic devices/modules into systems. Advances in embedded controllers, which are continuously improving computational capabilities, allow for software modularization and distribution of decisions. Agent platforms running on embedded controllers could hide the complexity of bootstrap and communication. Therefore, it is important to investigate the messaging performance of the agents whose main motivation is resource allocation in manufacturing systems (i.e., a conveyor system). The tests were implemented using the FIPA-compliant JADE-LEAP agent platform. Agent containers were distributed through networked embedded controllers, and agents were communicating using request and contract-net FIPA interaction protocols. The test scenarios are organized in intercontainer and intracontainer communications. The work shows the messaging performance for the different test scenarios using both interaction protocols.

  3. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  4. Computers as Components Principles of Embedded Computing System Design

    CERN Document Server

    Wolf, Wayne

    2008-01-01

    This book was the first to bring essential knowledge on embedded systems technology and techniques under a single cover. This second edition has been updated to the state-of-the-art by reworking and expanding performance analysis with more examples and exercises, and coverage of electronic systems now focuses on the latest applications. Researchers, students, and savvy professionals schooled in hardware or software design will value Wayne Wolf's integrated engineering design approach. The second edition gives a more comprehensive view of multiprocessors including VLIW and superscalar archite

  5. Rad-hard embedded computers for nuclear robotics

    International Nuclear Information System (INIS)

    Giraud, A.; Joffre, F.; Marceau, M.; Robiolle, M.; Brunet, J.P.; Mijuin, D.

    1993-01-01

    For requirements of nuclear industries, it is necessary to use robots with embedded rad-hard electronics and a high level of safety. The computer developed for the French research program SYROCO is presented in this paper. (authors). 8 refs., 5 figs

  6. An integrated compact airborne multispectral imaging system using embedded computer

    Science.gov (United States)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded computer based control system was developed for small aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter setting, the operation of the filter wheel and the stabilized platform, and image and POS data acquisition, and stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multi-function, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  7. Multiple Embedded Processors for Fault-Tolerant Computing

    Science.gov (United States)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  8. Fault tolerant embedded computers and power electronics for nuclear robotics

    International Nuclear Information System (INIS)

    Giraud, A.; Robiolle, M.

    1995-01-01

    For requirements of nuclear industries, it is necessary to use embedded rad-tolerant electronics and high-level safety. In this paper, we first describe a computer architecture called MICADO designed for the French nuclear industry. We then present ongoing projects in our industry. A special point is made on power electronics for remote-operated and legged robots. (authors). 7 refs., 2 figs

  9. Fault tolerant embedded computers and power electronics for nuclear robotics

    Energy Technology Data Exchange (ETDEWEB)

    Giraud, A.; Robiolle, M.

    1995-12-31

    For requirements of nuclear industries, it is necessary to use embedded rad-tolerant electronics and high-level safety. In this paper, we first describe a computer architecture called MICADO designed for the French nuclear industry. We then present ongoing projects in our industry. A special point is made on power electronics for remote-operated and legged robots. (authors). 7 refs., 2 figs.

  10. Perbandingan Kemampuan Embedded Computer dengan General Purpose Computer untuk Pengolahan Citra

    Directory of Open Access Journals (Sweden)

    Herryawan Pujiharsono

    2017-08-01

    Advances in computer technology have led to image processing being widely developed to help people in various fields of work. However, not all fields of work can be supported by image processing, because some do not allow the use of a computer, which has encouraged the development of image processing on microcontrollers or special-purpose microprocessors. Advances in microcontrollers and microprocessors now allow image processing to be developed on an embedded computer or single board computer (SBC). This study aims to test the ability of an embedded computer to process images and to compare the results with a general purpose computer. The tests were performed by measuring the execution time of four image processing operations applied to ten image sizes. The results of this study show that the execution-time behavior of the embedded computer compares well with the general purpose computer, with the average execution time of the embedded computer being 4-5 times that of the general purpose computer, and the maximum image size that does not overly load the CPU being 256x256 pixels for the embedded computer and 400x300 pixels for the general purpose computer.
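
    A hedged sketch of the kind of measurement described in this study (the operations, image sizes and repeat count here are placeholders, not the paper's exact benchmark):

```python
# Time a few OpenCV image-processing operations over several image sizes.
# Running the same script on an embedded board and on a desktop PC gives the
# sort of comparison the study describes; operations and sizes are illustrative.
import time
import cv2
import numpy as np

def time_op(op, image, repeats=10):
    start = time.perf_counter()
    for _ in range(repeats):
        op(image)
    return (time.perf_counter() - start) / repeats

ops = {
    "grayscale": lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY),
    "blur": lambda im: cv2.GaussianBlur(im, (5, 5), 0),
    "canny": lambda im: cv2.Canny(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY), 100, 200),
}

for width, height in [(256, 256), (400, 300), (640, 480)]:
    img = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
    for name, op in ops.items():
        print(f"{width}x{height} {name}: {time_op(op, img) * 1000:.2f} ms")
```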

  11. Rad-hard embedded computers for nuclear robotics

    International Nuclear Information System (INIS)

    Giraud, A.; Joffre, F.; Marceau, M.; Robiolle, M.; Brunet, J.P.; Mijuin, D.

    1994-01-01

    Nuclear industries require robots with embedded rad-hard electronics and high reliability. The SYROCO research program made it possible to produce efficient industrial prototypes, built according to the MICADO architecture, and to design the CADMOS architecture. The MICADO architecture uses the auto-healing property that CMOS circuits have when switched off during irradiation. (D.L.). 8 refs., 5 figs

  12. Disclosive Computer Ethics: The Exposure and Evaluation of Embedded Normativity in Computer Technology.

    NARCIS (Netherlands)

    Brey, Philip A.E.

    2000-01-01

    This essay provides a critique of mainstream computer ethics and argues for the importance of a complementary approach called disclosive computer ethics, which is concerned with the moral deciphering of embedded values and norms in computer systems, applications and practices. Also, four key values

  13. Nested Interrupt Analysis of Low Cost and High Performance Embedded Systems Using GSPN Framework

    Science.gov (United States)

    Lin, Cheng-Min

    Interrupt service routines are a key technology for embedded systems. In this paper, we introduce the standard approach of using Generalized Stochastic Petri Nets (GSPNs) as a high-level model for generating Continuous-Time Markov Chains (CTMCs) and then use Markov Reward Models (MRMs) to compute the performance of embedded systems. This framework is employed to analyze two embedded controllers with low cost and high performance, ARM7 and Cortex-M3. Cortex-M3 is designed with a tail-chaining mechanism to improve on the performance of ARM7 when a nested interrupt occurs on an embedded controller. The Platform Independent Petri net Editor 2 (PIPE2) tool is used to model and evaluate the controllers in terms of power consumption and interrupt overhead performance. The numerical results show that, in terms of both power consumption and interrupt overhead, Cortex-M3 performs better than ARM7.
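
    For reference, the standard way a Markov Reward Model yields a steady-state measure from the CTMC underlying a GSPN (general textbook formulation; the notation is assumed, not copied from the paper): with generator matrix Q, stationary distribution pi and a reward rate r_i (e.g., power or interrupt overhead) attached to state i,

```latex
\pi Q = 0, \qquad \sum_i \pi_i = 1, \qquad
R = \sum_i r_i \, \pi_i \quad \text{(expected steady-state reward)}
```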

  14. An embedded single-board computer for BPM of SSRF

    International Nuclear Information System (INIS)

    Chen Kai; Liu Shubin; Yan Han; Wu Weihao; Zhao Lei; An Qi; Leng Yongbin; Yi Xing; Yan Yingbing; Lai Longwei

    2011-01-01

    An embedded single-board computer (SBC) system based on the AT91RM9200 was designed for monitoring and controlling the digital beam position monitor system of the Shanghai Synchrotron Radiation Facility (SSRF) through the Virtex-4 FPGA in the digital processing board. The SBC transfers the configuration commands from the remote EPICS to the FPGA, and calculates the beam position data. The interface between the FPGA and the SBC is the Static Memory Controller (SMC), with a peak transfer speed of up to 349 Mbps. The 100 Mb Ethernet is used for data transfer between the EPICS and the SBC board, and a serial port is used for monitoring the status of the embedded system. Test results indicate that the SBC board functions well. (authors)

  15. Human Computer Music Performance

    OpenAIRE

    Dannenberg, Roger B.

    2012-01-01

    Human Computer Music Performance (HCMP) is the study of music performance by live human performers and real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synt...

  16. Computer Game Play as an Imaginary Stage for Reading: Implicit Spatial Effects of Computer Games Embedded in Hard Copy Books

    Science.gov (United States)

    Smith, Glenn Gordon

    2012-01-01

    This study compared books with embedded computer games (via pentop computers with microdot paper and audio feedback) with regular books with maps, in terms of fifth graders' comprehension and retention of spatial details from stories. One group read a story in hard copy with embedded computer games, the other group read it in regular book format…

  17. Integrating Embedded Computing Systems into High School and Early Undergraduate Education

    Science.gov (United States)

    Benson, B.; Arfaee, A.; Choon Kim; Kastner, R.; Gupta, R. K.

    2011-01-01

    Early exposure to embedded computing systems is crucial for students to be prepared for the embedded computing demands of today's world. However, exposure to systems knowledge often comes too late in the curriculum to stimulate students' interests and to provide a meaningful difference in how they direct their choice of electives for future…

  18. Embedded Systems

    Indian Academy of Sciences (India)

    Embedded system, micro-controller ... Embedded systems differ from general purpose computers in many ... Low cost: As embedded systems are extensively used in con... ... operating systems for the desktop computers where scheduling.

  19. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    Only reference and acronym-list fragments were extracted for this record, including "A History of the Virtual Synchrony Replication Model," in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds. ...), and the abbreviations IP / IPv4 Internet Protocol (version 4.0), IPMC Internet Protocol MultiCast, LAN Local Area Network, MCMD Dr. Multicast, MPI ...

  20. A low-cost high-performance embedded platform for accelerator controls

    International Nuclear Information System (INIS)

    Cleva, Stefano; Bogani, Alessio Igor; Pivetta, Lorenzo

    2012-01-01

    Over the last few years the mobile and hand-held device market has seen a dramatic performance improvement in the microprocessors employed for these systems. As an interesting side effect, this brings the opportunity of adopting these microprocessors to build small low-cost embedded boards featuring lots of processing power and input/output capabilities. Moreover, being capable of running a full-featured operating system such as GNU/Linux, and even a control system toolkit such as Tango, these boards can also be used in control systems as front-end or embedded computers. In order to evaluate the feasibility of this idea, an activity has started at Elettra to select, evaluate and validate a commercial embedded device able to guarantee production grade reliability, competitive costs and an open source platform. The preliminary results of this work are presented. (author)

  1. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    Science.gov (United States)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate and fix design problems. This work presents Uranus, a software tool for the simulation and evaluation of image processing algorithms with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previously compiled soft-operators in a high-level process chain, and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected with a PowerPC based system. The Uranus environment is intended for rapid prototyping of machine vision and the migration to an FPGA accelerator platform, and it is distributed for academic purposes.

  2. Performing stencil computations

    Energy Technology Data Exchange (ETDEWEB)

    Donofrio, David

    2018-01-16

    A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
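
    For readers unfamiliar with the term, a stencil computation updates each array element from a fixed pattern of neighbors; a minimal software example follows (illustrative only; the patent describes a hardware load mechanism, not this NumPy form):

```python
# A 1-D three-point stencil: each interior point becomes a weighted sum of
# itself and its two neighbors. Coefficients are arbitrary for illustration.
import numpy as np

def three_point_stencil(u, c=(0.25, 0.5, 0.25)):
    v = u.copy()
    v[1:-1] = c[0] * u[:-2] + c[1] * u[1:-1] + c[2] * u[2:]
    return v

print(three_point_stencil(np.arange(8, dtype=float)))
```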

  3. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Directory of Open Access Journals (Sweden)

    Daniel Pierre Bovet

    2008-01-01

    Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high responsiveness, open-source hard real-time operating system for multiprocessor systems capable of providing high real-time performance while maintaining the code simple and not impacting on the performances of the rest of the system. Moreover, ASMP-LINUX does not require code changing or application recompiling/relinking. In order to assess the performances of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  4. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Directory of Open Access Journals (Sweden)

    Betti Emiliano

    2008-01-01

    Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high responsiveness, open-source hard real-time operating system for multiprocessor systems capable of providing high real-time performance while maintaining the code simple and not impacting on the performances of the rest of the system. Moreover, ASMP-LINUX does not require code changing or application recompiling/relinking. In order to assess the performances of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  5. A Middleware Platform for Providing Mobile and Embedded Computing Instruction to Software Engineering Students

    Science.gov (United States)

    Mattmann, C. A.; Medvidovic, N.; Malek, S.; Edwards, G.; Banerjee, S.

    2012-01-01

    As embedded software systems have grown in number, complexity, and importance in the modern world, a corresponding need to teach computer science students how to effectively engineer such systems has arisen. Embedded software systems, such as those that control cell phones, aircraft, and medical equipment, are subject to requirements and…

  6. Rad-hard embedded computers for nuclear robotics; Calculateurs durcis embarques pour la robotique nucleaire

    Energy Technology Data Exchange (ETDEWEB)

    Giraud, A; Joffre, F; Marceau, M; Robiolle, M; Brunet, J P; Mijuin, D

    1994-12-31

    For requirements of nuclear industries, it is necessary to use robots with embedded rad-hard electronics and a high level of safety. The computer developed for the French research program SYROCO is presented in this paper. (authors). 8 refs., 5 figs.

  7. High Performance Embedded System for Real-Time Pattern Matching

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    We present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP) and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The design uses the flexibility of Field Programmable Gate Arrays (FPGAs) and the powerful Associative Memory Chip (ASIC) to achieve real-time performance. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain.

  8. A Simulation Approach for Performance Validation during Embedded Systems Design

    Science.gov (United States)

    Wang, Zhonglei; Haberl, Wolfgang; Herkersdorf, Andreas; Wechs, Martin

    Due to the time-to-market pressure, it is highly desirable to design hardware and software of embedded systems in parallel. However, hardware and software are developed mostly using very different methods, so that performance evaluation and validation of the whole system is not an easy task. In this paper, we propose a simulation approach to bridge the gap between model-driven software development and simulation based hardware design, by merging hardware and software models into a SystemC based simulation environment. An automated procedure has been established to generate software simulation models from formal models, while the hardware design is originally modeled in SystemC. As the simulation models are annotated with timing information, performance issues are tackled in the same pass as system functionality, rather than in a dedicated approach.

  9. The selection of embedded computer using in the nuclear physics instruments

    International Nuclear Information System (INIS)

    Zhang Jianchuan; Nan Gangyang; Wang Yanyu; Su Hong

    2010-01-01

    It introduces the requirements for an embedded PC and the benefits of using it in the project of developing and improving experimental nuclear physics instruments. According to the specific requirements of the project of improving laboratory instruments, several kinds of embedded computer are compared and specifically tested. Thus, an x86 architecture embedded computer, which has ultra-low power consumption and a small size, is selected to be the main component of the controller used in the nuclear physics instrument, and this will be used in the high-speed data acquisition and electronic control system. (authors)

  10. High performance embedded system for real-time pattern matching

    International Nuclear Information System (INIS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.

  11. High performance embedded system for real-time pattern matching

    Energy Technology Data Exchange (ETDEWEB)

    Sotiropoulou, C.-L., E-mail: c.sotiropoulou@cern.ch [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Luciano, P. [University of Cassino and Southern Lazio, Gaetano di Biasio 43, Cassino 03043 (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Gkaitatzis, S. [Aristotle University of Thessaloniki, 54124 Thessaloniki (Greece); Citraro, S. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Giannetti, P. [INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Dell' Orso, M. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy)

    2017-02-11

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.

  12. High Performance Embedded System for Real-Time Pattern Matching

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP) and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturised version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory (AM) chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering...

  13. Enhanced performance of microfluidic soft pressure sensors with embedded solid microspheres

    Science.gov (United States)

    Shin, Hee-Sup; Ryu, Jaiyoung; Majidi, Carmel; Park, Yong-Lae

    2016-02-01

    The cross-sectional geometry of an embedded microchannel influences the electromechanical response of a soft microfluidic sensor to applied surface pressure. When a pressure is exerted on the surface of the sensor, deforming the soft structure, the cross-sectional area of the embedded channel filled with a conductive fluid decreases, increasing the channel's electrical resistance. This electromechanical coupling can be tuned by adding solid microspheres into the channel. In order to determine the influence of the microspheres, we use both analytic and computational methods to predict the pressure responses of soft microfluidic sensors with two different channel cross-sections: a square and an equilateral triangle. The analytical models were derived from contact mechanics, in which the microspheres were regarded as spherical indenters, and finite element analysis (FEA) was used for simulation. For experimental validation, sensor samples with the two different channel cross-sections were prepared and tested. For comparison, the sensor samples were tested both with and without microspheres. All three results from the analytical models, the FEA simulations, and the experiments showed reasonable agreement, confirming that the multi-material soft structure significantly improved the pressure response in terms of both linearity and sensitivity. The embedded solid particles enhanced the performance of the soft sensors while maintaining their flexible and stretchable mechanical characteristics. We also provide analytical and experimental analyses of the hysteresis of microfluidic soft sensors, considering the resistive force that the embedded viscous fluid exerts against shape recovery of the polymer structure.

  14. Computer simulation of liquid cesium using embedded atom model

    International Nuclear Information System (INIS)

    Belashchenko, D K; Nikitin, N Yu

    2008-01-01

    A new method is presented for constructing an embedded atom potential (EAM potential) for liquid metals. This method directly uses the pair correlation function (PCF) of the liquid metal near the melting temperature. Because of the specific analytic form of this EAM potential, the pair term of the potential can be calculated from the pair correlation function using, for example, the Schommers algorithm. The other parameters of the EAM potential may be found using the potential energy, the compression modulus and the pressure at selected conditions, mainly near the melting temperature, at very high temperature or in a strongly compressed state. We used a simple exponential formula for the effective EAM electronic density and a polynomial series for the embedding energy. The molecular dynamics method was applied with the Verlet algorithm. A series of models with 1968 atoms in the basic cube was constructed in the temperature interval 323-1923 K. The thermodynamic properties of liquid cesium, structure data and self-diffusion coefficients are calculated. In general, the agreement between the model data and known experimental values is reasonable. An estimate is also given for the critical temperature of cesium models with the EAM potential.
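
    The embedded atom model energy referred to above has the standard form (general formulation; the paper's specific parameterization for cesium is not reproduced here):

```latex
E_{\mathrm{tot}} = \sum_i F(\rho_i) + \frac{1}{2}\sum_{i \neq j} \varphi(r_{ij}),
\qquad
\rho_i = \sum_{j \neq i} \psi(r_{ij})
```

    where F is the embedding energy, phi is the pair term (the part fitted to the pair correlation function), and psi is the effective electron-density contribution of a neighbor at distance r_ij.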

  15. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be used to build low-power computing clusters.

  16. An embedded implementation based on adaptive filter bank for brain-computer interface systems.

    Science.gov (United States)

    Belwafi, Kais; Romain, Olivier; Gannouni, Sofien; Ghaffari, Fakhreddine; Djemal, Ridha; Ouni, Bouraoui

    2018-07-15

    Brain-computer interface (BCI) is a new communication pathway for users with neurological deficiencies. The implementation of a BCI system requires complex electroencephalography (EEG) signal processing including filtering, feature extraction and classification algorithms. Most current BCI systems are implemented on personal computers. Therefore, there is great interest in implementing BCI on embedded platforms to meet system specifications in terms of time response, cost effectiveness, power consumption, and accuracy. This article presents an embedded-BCI (EBCI) system based on a Stratix-IV field programmable gate array. The proposed system relies on the weighted overlap-add (WOLA) algorithm to perform dynamic filtering of EEG signals by analyzing the event-related desynchronization/synchronization (ERD/ERS). The EEG signals are classified, using the linear discriminant analysis algorithm, based on their spatial features. The proposed system performs fast classification within a time delay of 0.430 s/trial, achieving an average accuracy of 76.80% according to an offline approach and 80.25% using our own recordings. The estimated power consumption of the prototype is approximately 0.7 W. Results show that the proposed EBCI system reduces the overall classification error rate for the three datasets of the BCI competition by 5% compared to other similar implementations. Moreover, experiments show that the proposed system maintains a high accuracy rate with a short processing time, low power consumption, and low cost. Performing dynamic filtering of EEG signals using WOLA increases the recognition rate of ERD/ERS patterns of motor imagery brain activity. This approach allows the development of a complete prototype of an EBCI system that achieves excellent accuracy rates.
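
    As a hedged illustration of the classification stage described above (feature choice, data shapes and parameters are assumed; the WOLA filter bank and the paper's spatial features are not reproduced), linear discriminant analysis on band-power features might look like:

```python
# Toy LDA classification of two-class "EEG" trials using log-variance
# (band-power) features; random data stands in for filtered EEG recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power_features(trials):
    # trials: (n_trials, n_channels, n_samples) of band-pass filtered EEG
    return np.log(np.var(trials, axis=2))

rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, 512))
labels = np.tile([0, 1], 20)  # alternating class labels for the toy data

clf = LinearDiscriminantAnalysis()
clf.fit(band_power_features(trials[:30]), labels[:30])
print("held-out accuracy:", clf.score(band_power_features(trials[30:]), labels[30:]))
```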

  17. Brain-Computer Interfacing Embedded in Intelligent and Affective Systems

    NARCIS (Netherlands)

    Nijholt, Antinus

    In this talk we survey recent research views on non-traditional brain-computer interfaces (BCI). That is, interfaces that can process brain activity input, but that are designed for the ‘general population’, rather than for clinical purposes. Control of applications can be made more robust by fusing

  18. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  19. A Trusted Computing Architecture of Embedded System Based on Improved TPM

    Directory of Open Access Journals (Sweden)

    Wang Xiaosheng

    2017-01-01

    The Trusted Platform Module (TPM) currently used by PCs is not suitable for embedded systems, so it is necessary to improve the existing TPM. The paper proposes a trusted computing architecture with a new TPM and the cryptographic system developed by China for embedded systems. The improved TPM consists of the Embedded System Trusted Cryptography Module (eTCM) and the Embedded System Trusted Platform Control Module (eTPCM), which are combined and implement the TPM's autonomous control, active defense, high-speed encryption/decryption and other functions through an internal bus arbitration module and symmetric and asymmetric cryptographic engines to effectively protect the security of the embedded system. In our improved TPM, a trusted measurement method with a chain model and a star-type model is used. Finally, the improved TPM is implemented on an FPGA and applied to a trusted PDA for experimental verification. Experiments show that the trusted architecture of the embedded system based on the improved TPM is efficient, reliable and secure.
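
    The chain-model trusted measurement mentioned above follows the familiar measure-then-extend pattern of a TPM platform configuration register (PCR); a generic sketch (hash choice and boot stages assumed, not the eTCM/eTPCM specifics):

```python
# Generic PCR-extend style measurement chain: each boot component is hashed
# and folded into the running register value before control is passed on.
import hashlib

def extend(pcr: bytes, component_image: bytes) -> bytes:
    measurement = hashlib.sha256(component_image).digest()
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # register starts at zero
for stage in (b"bootloader", b"kernel", b"root filesystem"):
    pcr = extend(pcr, stage)
print("final measurement:", pcr.hex())
```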

  20. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

    Embedded systems are used in a broad range of applications that demand high performance within severely constrained mechanical, power, and cost requirements. Embedded systems implemented in ASIC technology tend to provide the highest performance, lowest power consumption and lowest unit cost. How...

  1. Computer-Related Task Performance

    DEFF Research Database (Denmark)

    Longstreet, Phil; Xiao, Xiao; Sarker, Saonee

    2016-01-01

    The existing information system (IS) literature has acknowledged computer self-efficacy (CSE) as an important factor contributing to enhancements in computer-related task performance. However, the empirical results of CSE on performance have not always been consistent, and increasing an individual's CSE is often a cumbersome process. Thus, we introduce the theoretical concept of self-prophecy (SP) and examine how this social influence strategy can be used to improve computer-related task performance. Two experiments are conducted to examine the influence of SP on task performance. Results show that SP and CSE interact to influence performance. Implications are then discussed in terms of organizations' ability to increase performance.

  2. Smart device definition and application on embedded system: performance and optimi-zation on a RGBD sensor

    Directory of Open Access Journals (Sweden)

    Jose-Luis JIMÉNEZ-GARCÍA

    2014-10-01

    Full Text Available Embedded control systems are usually characterized by their limited computational power and memory. Nevertheless, these systems must handle perception and actuation signal adaptation and compute control actions while ensuring reliability and providing a certain degree of fault tolerance. Distributing these tasks among several embedded nodes that form a distributed control system solves many of these issues. For that reason, the application of smart devices is proposed: they perform the data processing tasks related to perception and actuation and offer a simple interface that other nodes can configure in order to share processed information and raise QoS-based alarms. This work introduces the procedure for implementing a smart sensor device as an embedded node in a distributed control system. In order to analyze its benefits, an application based on an RGBD sensor implemented as a smart device is proposed.

  3. Embedded Platforms for Computer Vision-based Advanced Driver Assistance Systems: a Survey

    OpenAIRE

    Velez, Gorka; Otaegui, Oihana

    2015-01-01

    Computer Vision, either alone or combined with other technologies such as radar or Lidar, is one of the key technologies used in Advanced Driver Assistance Systems (ADAS). Its role in understanding and analysing the driving scene is of great importance, as can be seen from the number of ADAS applications that use this technology. However, porting a vision algorithm to an embedded automotive system is still very challenging, as a trade-off between several design requisites must be found. Further...

  4. 7th International Conference on Embedded and Multimedia Computing (EMC-12)

    CERN Document Server

    Jeong, Young-Sik; Park, Sang; Chen, Hsing-Chung; Embedded and Multimedia Computing Technology and Service

    2012-01-01

    The 7th International Conference on Embedded and Multimedia Computing (EMC-12) will be held in Gwangju, Korea on September 6-8, 2012. EMC-12 will be the most comprehensive conference focused on the various aspects of advances in Embedded and Multimedia (EM) Computing. EMC-12 will provide an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of EM. In addition, the conference will publish high-quality papers which are closely related to the various theories and practical applications in EM. Furthermore, we expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject. EMC-12 is the next event in a series of highly successful International Conferences on Embedded and Multimedia Computing, previously held as EMC 2011 (China, Aug. 2011), EMC 2010 (Philippines, Aug. 2010), EM-Com 2009 (Korea, Dec. 2009), UMC-08 (Australia, Oct. 2008), ESO-08 (China, Dec. 2008), UMS-08 ...

  5. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
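
    For reference, the serial recursion that the abstract builds on can be written in a few lines. The sketch below is a plain NumPy illustration of the standard integral image recurrence and the four-lookup box sum, not the row-parallel hardware algorithms proposed in the paper.

```python
import numpy as np

def integral_image(img):
    """Serial recursion: ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1)."""
    img = np.asarray(img, dtype=np.int64)
    ii = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            ii[y, x] = (img[y, x]
                        + (ii[y - 1, x] if y > 0 else 0)
                        + (ii[y, x - 1] if x > 0 else 0)
                        - (ii[y - 1, x - 1] if y > 0 and x > 0 else 0))
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over the inclusive rectangle using four lookups, independent of its size."""
    a = ii[top - 1, left - 1] if top > 0 and left > 0 else 0
    b = ii[top - 1, right] if top > 0 else 0
    c = ii[bottom, left - 1] if left > 0 else 0
    return ii[bottom, right] - b - c + a

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:4, 1:4].sum()
```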

  6. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  7. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    The advances seen in the semiconductor industry within the last decade have brought the possibility of integrating ever more functionality onto a single chip, forming functionally highly advanced embedded systems. These integration possibilities also imply that as the design complexity increases, so does the design time and effort. This challenge is widely recognized throughout academia and the industry, and in order to address this, novel frameworks and methods, which will automate design steps as well as raise the level of abstraction used to design systems, are being called upon. To support ... is carried out in collaboration with the Danish company and DaNES partner, Bang & Olufsen ICEpower. Bang & Olufsen ICEpower provides industrial case studies which will allow the proposed modelling framework to be exercised and assessed in terms of ease of use, production speed, accuracy and efficiency...

  8. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  9. User interfaces for computational science: A domain specific language for OOMMF embedded in Python

    Science.gov (United States)

    Beg, Marijan; Pepper, Ryan A.; Fangohr, Hans

    2017-05-01

    Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.
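
    To illustrate what option (iv) looks like in practice, the sketch below shows a tiny, purely hypothetical Python-embedded DSL for composing a simulation specification; all class and method names are invented for illustration and do not reflect the actual micromagnetics interface described in the paper.

```python
# Hypothetical mini-DSL: users compose a simulation as ordinary Python objects,
# so loops, functions, and version control of the specification come for free.
class Term:
    def __init__(self, name, **params):
        self.name, self.params = name, params
    def __add__(self, other):            # allows: energy = Term(...) + Term(...)
        return Energy([self, other]) if isinstance(other, Term) else NotImplemented

class Energy:
    def __init__(self, terms): self.terms = list(terms)
    def __add__(self, term): return Energy(self.terms + [term])

class Simulation:
    def __init__(self, name): self.name, self.energy = name, Energy([])
    def run(self, steps):
        # A real backend would translate self.energy into solver input files and
        # launch the external solver; here we only print the generated specification.
        for t in self.energy.terms:
            print(f"{self.name}: term {t.name} with {t.params}")
        print(f"{self.name}: running {steps} steps")

sim = Simulation("standard_problem_4")
sim.energy = Term("exchange", A=1.3e-11) + Term("zeeman", H=(0.0, 0.0, 1e5))
sim.run(steps=100)
```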

  10. User interfaces for computational science: A domain specific language for OOMMF embedded in Python

    Directory of Open Access Journals (Sweden)

    Marijan Beg

    2017-05-01

    Full Text Available Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.

  11. Computing the Dilation of Edge-Augmented Graphs Embedded in Metric Spaces

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G = (V,E) be an undirected graph with n vertices embedded in a metric space. We consider the problem of adding a shortcut edge in G that minimizes the dilation of the resulting graph. The fastest algorithm to date for this problem has O(n^4) running time and uses O(n^2) space. We show how to improve the running time to O(n^3 log n) while maintaining the quadratic space requirement. In fact, our algorithm not only determines the best shortcut but computes the dilation of G U {(u,v)} for every pair of distinct vertices u and v.
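
    As a point of reference, the naive approach that the paper improves on can be sketched directly: compute all-pairs shortest paths once, then evaluate every candidate shortcut with an O(n^2) distance update, giving O(n^4) overall. The sketch below assumes the input is an array of points and an edge list, with edge weights equal to the metric distances.

```python
import itertools
import numpy as np

def best_shortcut(points, edges):
    """O(n^4) baseline: try every candidate shortcut and return the one that
    minimises the dilation of the augmented graph."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # metric distances
    g = np.full((n, n), np.inf)
    np.fill_diagonal(g, 0.0)
    for u, v in edges:
        g[u, v] = g[v, u] = dist[u, v]
    for k in range(n):                                   # Floyd-Warshall on the original graph
        g = np.minimum(g, g[:, k, None] + g[None, k, :])

    def dilation(d):
        mask = ~np.eye(n, dtype=bool)
        return np.max(d[mask] / dist[mask])

    best = (np.inf, None)
    for u, v in itertools.combinations(range(n), 2):     # candidate shortcut (u, v)
        d = np.minimum(g, np.minimum(g[:, u, None] + dist[u, v] + g[None, v, :],
                                     g[:, v, None] + dist[u, v] + g[None, u, :]))
        best = min(best, (dilation(d), (u, v)))
    return best

points = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [2.0, 1.0]])
print(best_shortcut(points, edges=[(0, 1), (1, 2), (2, 3)]))
```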

  12. Improving engineers' performance with computers

    International Nuclear Information System (INIS)

    Purvis, E.E. III

    1984-01-01

    The problem addressed is how to improve the performance of engineers in the design, operation, and maintenance of nuclear power plants. The application of computer science to this problem offers a challenge in maximizing the use of developments outside the nuclear industry and setting priorities to address the most fruitful areas first. Areas of potential benefit include database management through design, analysis, procurement, construction, operation, maintenance, cost, schedule and interface control and planning, and quality engineering on specifications, inspection, and training

  13. Performance Estimation for Embedded Systems with Data and Control Dependencies

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2000-01-01

    In this paper we present an approach to performance estimation for hard real-time systems. We consider architectures consisting of multiple processors. The scheduling policy is based on a preemptive strategy with static priorities. Our model of the system captures both data and control dependencies...

  14. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
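
    The locally linear embedding step itself can be illustrated with a small sketch, using scikit-learn's LocallyLinearEmbedding on synthetic rotated images as a stand-in for projection data. This shows only the manifold-learning idea the method relies on, not the authors' correction pipeline.

```python
import numpy as np
from scipy.ndimage import rotate
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic stand-in for projection data: one small image rotated by varying angles.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
angles = np.linspace(-5.0, 5.0, 60)                      # the hidden motion parameter
projections = np.stack([rotate(base, a, reshape=False).ravel() for a in angles])

# LLE recovers a 1-D manifold coordinate that varies smoothly with the angle,
# which is the property a marker-free correction scheme can exploit.
embedding = LocallyLinearEmbedding(n_neighbors=8, n_components=1)
coord = embedding.fit_transform(projections).ravel()
print(np.corrcoef(coord, angles)[0, 1])                  # typically close to +/-1 for smooth motion
```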

  15. Potential Functional Embedding Theory at the Correlated Wave Function Level. 2. Error Sources and Performance Tests.

    Science.gov (United States)

    Cheng, Jin; Yu, Kuang; Libisch, Florian; Dieterich, Johannes M; Carter, Emily A

    2017-03-14

    Quantum mechanical embedding theories partition a complex system into multiple spatial regions that can use different electronic structure methods within each, to optimize trade-offs between accuracy and cost. The present work incorporates accurate but expensive correlated wave function (CW) methods for a subsystem containing the phenomenon or feature of greatest interest, while self-consistently capturing quantum effects of the surroundings using fast but less accurate density functional theory (DFT) approximations. We recently proposed two embedding methods [for a review, see: Acc. Chem. Res. 2014 , 47 , 2768 ]: density functional embedding theory (DFET) and potential functional embedding theory (PFET). DFET provides a fast but non-self-consistent density-based embedding scheme, whereas PFET offers a more rigorous theoretical framework to perform fully self-consistent, variational CW/DFT calculations [as defined in part 1, CW/DFT means subsystem 1(2) is treated with CW(DFT) methods]. When originally presented, PFET was only tested at the DFT/DFT level of theory as a proof of principle within a planewave (PW) basis. Part 1 of this two-part series demonstrated that PFET can be made to work well with mixed Gaussian type orbital (GTO)/PW bases, as long as optimized GTO bases and consistent electron-ion potentials are employed throughout. Here in part 2 we conduct the first PFET calculations at the CW/DFT level and compare them to DFET and full CW benchmarks. We test the performance of PFET at the CW/DFT level for a variety of types of interactions (hydrogen bonding, metallic, and ionic). By introducing an intermediate CW/DFT embedding scheme denoted DFET/PFET, we show how PFET remedies different types of errors in DFET, serving as a more robust type of embedding theory.

  16. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    Science.gov (United States)

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. As well as easy usage, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed for internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input of the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system is evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server enables internet-based wireless control of electrical home appliances successfully through BCIs.

  17. Embedded Processor Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Embedded Processor Laboratory provides the means to design, develop, fabricate, and test embedded computers for missile guidance electronics systems in support...

  18. Computed radiography systems performance evaluation

    International Nuclear Information System (INIS)

    Xavier, Clarice C.; Nersissian, Denise Y.; Furquim, Tania A.C.

    2009-01-01

    The performance of a computed radiography system was evaluated according to the AAPM Report No. 93. The evaluation tests proposed by the publication were performed, and the following nonconformities were found: imaging plate (IP) dark noise, which compromises the clinical image acquired using the IP; an uncalibrated exposure indicator, which can cause underexposure of the IP; nonlinearity of the system response, which causes overexposure; a resolution limit below that declared by the manufacturer and uncalibrated erasure thoroughness, impairing the visualization of structures; a Moire pattern visualized in the grid response; and IP throughput above that specified by the manufacturer. These nonconformities indicate that a lack of calibration of digital imaging systems can cause an increase in dose in order to solve image problems. (author)

  19. Boys with autism spectrum disorders show superior performance on the adult Embedded Figures Test

    NARCIS (Netherlands)

    Schlooz, W.A.J.M.; Hulstijn, W.

    2014-01-01

    Weak central coherence is frequently studied using the Embedded Figures Test (EFT) yielding mixed and ambiguous results. In this study, the performance of 36 boys (9–14 years) with Autism Spectrum Disorders (ASD) is compared with that of 46 typical peers using both the children's and the adult

  20. The Artemis workbench for system-level performance evaluation of embedded systems

    NARCIS (Netherlands)

    Pimentel, A.D.

    2008-01-01

    In this paper, we present an overview of the Artemis workbench, which provides modelling and simulation methods and tools for efficient performance evaluation and exploration of heterogeneous embedded multimedia systems. More specifically, we describe the Artemis system-level modelling methodology,

  1. A study on the performance of piezoelectric composite materials for designing embedded transducers for concrete assessment

    Science.gov (United States)

    Dumoulin, Cédric; Deraemaeker, Arnaud

    2018-03-01

    Ultrasonic measurements of concrete can provide crucial information about its state of health. The most common practice in the construction industry consists in using external probes, which strongly limits the use of the method since large parts of in-service structures are difficult to access. It is also possible to assess the setting process of the concrete in real time using ultrasonic measurements. In practice, field measurement of concrete hardening is limited by the formworks. As an alternative, some research teams have studied the possibility of directly embedding the transducers into concrete structures. Current embedded ultrasonic transducers fall into two categories: bulk piezoelectric elements surrounded by several coating and matching layers, and composite piezoelectric elements. Both technologies aim at optimizing the wave energy transmitted to the tested medium. The performance of transducers of the first kind was examined in a previous study. A fair amount of recent research has focused on the development of novel cement-based piezoelectric composites. In this study, we first compare the effective properties of such cement-based materials with those of more widespread composites made with epoxy resin or polyurethane matrices. The study only concerns composites with a 1-3 fiber arrangement. The effective properties are computed using both an analytical mixing-rule method and a finite element based homogenization method using representative volume elements (RVEs), which allows more realistic fiber arrangements to be considered, yet leads to very similar results. The effective piezoelectric properties of cement-based composites appear to be very low compared to composites made of epoxy or polyurethane. This result is underlined by looking at the acoustic response and the electric input impedance of different piezoelectric disks, where we compare the performance of such transducers with a low-cost bulk piezoelectric disc element. The first

  2. Building professionalism and employability skills: embedding employer engagement within first-year computing modules

    Science.gov (United States)

    Hanna, Philip; Allen, Angela; Kane, Russell; Anderson, Neil; McGowan, Aidan; Collins, Matthew; Hutchison, Malcolm

    2015-07-01

    This paper outlines a means of improving the employability skills of first-year university students through a closely integrated model of employer engagement within computer science modules. The outlined approach illustrates how employability skills, including communication, teamwork and time management skills, can be contextualised in a manner that directly relates to student learning but can still be linked forward into employment. The paper tests the premise that developing employability skills early within the curriculum will result in improved student engagement and learning within later modules. The paper concludes that embedding employer participation within first-year modules can help translate a distant notion of employability into something of more immediate relevance in terms of how students can best approach learning. Further, by enhancing employability skills early within the curriculum, it becomes possible to improve academic attainment within later modules.

  3. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
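
    The failure-in-time (FIT) figure quoted above counts failures per 10^9 device-hours. A back-of-the-envelope sketch of how such a memory-level figure scales with capacity is shown below; the rates and derating factor are assumed for illustration and are not values taken from the paper's simulations.

```python
# Hypothetical numbers for illustration only; the paper derives its rates from
# device-level SEU simulation, not from this simple scaling.
per_bit_seu_fit = 1e-9          # upsets per bit per 1e9 device-hours (assumed)
capacity_bits = 1 * 2**30       # 1 Gbit working memory
derating = 0.01                 # fraction of raw upsets causing a visible system failure (assumed)

raw_memory_fit = per_bit_seu_fit * capacity_bits      # upsets per 1e9 hours for the whole array
system_fit = raw_memory_fit * derating
print(f"raw memory FIT ~ {raw_memory_fit:.2f}, system FIT ~ {system_fit:.3f}")
```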

  4. Embedded Sensors and Controls to Improve Component Performance and Reliability -- Loop-scale Testbed Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Melin, Alexander M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kisner, Roger A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-01

    Embedded instrumentation and control systems that can operate in extreme environments are challenging to design and operate. Extreme environments limit the options for sensors and actuators and degrade their performance. Because sensors and actuators are necessary for feedback control, these limitations mean that designing embedded instrumentation and control systems for the challenging environments of nuclear reactors requires advanced technical solutions that are not available commercially. This report details the development of a testbed that will be used for cross-cutting embedded instrumentation and control research for nuclear power applications. This research is funded by the Department of Energy's Nuclear Energy Enabling Technology program's Advanced Sensors and Instrumentation topic. The design goal of the loop-scale testbed is to build a low-temperature pump that utilizes magnetic bearings and will be incorporated into a water loop to test control system performance and self-sensing techniques. Specifically, this testbed will be used to analyze control system performance in response to nonlinear and cross-coupling fluid effects between the shaft axes of motion, rotordynamics and gyroscopic effects, and impeller disturbances. This testbed will also be used to characterize the performance losses when using self-sensing position measurement techniques. Active magnetic bearings are a technology that can reduce failures and maintenance costs in nuclear power plants. They are particularly relevant to liquid salt reactors that operate at high temperatures (700 C). Pumps used in the extreme environment of liquid salt reactors provide many engineering challenges that can be overcome with magnetic bearings and their associated embedded instrumentation and control. This report will give details of the mechanical design and electromagnetic design of the loop-scale embedded instrumentation and control testbed.

  5. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  6. Atypical neural substrates of Embedded Figures Task performance in children with Autism Spectrum Disorders

    OpenAIRE

    Lee, Philip S.; Foss-Feig, Jennifer; Henderson, Joshua G.; Kenworthy, Lauren E.; Gilotty, Lisa; Gaillard, William D.; Vaidya, Chandan J.

    2007-01-01

    Superior performance on the Embedded Figures Task (EFT) has been attributed to weak central coherence in perceptual processing in Autism Spectrum Disorders (ASD). The present study used functional magnetic resonance imaging to examine the neural basis of EFT performance in 7-12 year old ASD children and age and IQ matched controls. ASD children activated only a subset of the distributed network of regions activated in controls. In frontal cortex, control children activated left dorsolateral, ...

  7. Broadband EM Performance Characteristics of Single Square Loop FSS Embedded Monolithic Radome

    Directory of Open Access Journals (Sweden)

    Raveendranath U. Nair

    2013-01-01

    Full Text Available A monolithic half-wave radome panel, centrally loaded with aperture-type single square loop frequency selective surface (SSL-FSS, is proposed here for broadband airborne radome applications. Equivalent transmission line method in conjunction with equivalent circuit model (ECM is used for modeling the SSL-FSS embedded monolithic half-wave radome panel and evaluating radome performance parameters. The design parameters of the SSL-FSS are optimized at different angles of incidence such that the new radome wall configuration offers superior EM performance from L-band to X-band as compared to the conventional monolithic half-wave slab of identical material and thickness. The superior EM performance of SSL-FSS embedded monolithic radome wall makes it suitable for the design of normal incidence and streamlined airborne radomes.
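
    The equivalent transmission line method mentioned here can be illustrated for the simplest case, a bare dielectric slab at normal incidence, by cascading ABCD matrices and converting to a power transmission coefficient. The sketch below omits the FSS layer (which the ECM would represent as an additional shunt element) and uses illustrative material values rather than the paper's design.

```python
import numpy as np

def slab_abcd(eps_r, thickness_m, freq_hz):
    """ABCD matrix of a lossless dielectric slab at normal incidence."""
    c0, z0 = 3e8, 377.0
    n = np.sqrt(eps_r)
    k = 2 * np.pi * freq_hz * n / c0             # phase constant inside the slab
    z = z0 / n                                   # wave impedance of the slab
    return np.array([[np.cos(k * thickness_m), 1j * z * np.sin(k * thickness_m)],
                     [1j * np.sin(k * thickness_m) / z, np.cos(k * thickness_m)]])

def power_transmission(abcd):
    """Convert an ABCD matrix (free space on both sides) to |T|^2."""
    z0 = 377.0
    a, b, c, d = abcd.ravel()
    t = 2.0 / (a + b / z0 + c * z0 + d)
    return np.abs(t) ** 2

# Illustrative half-wave wall: eps_r = 4, thickness chosen for a transmission peak near 10 GHz.
eps_r, f0 = 4.0, 10e9
thickness = 3e8 / (2 * f0 * np.sqrt(eps_r))      # half a wavelength inside the dielectric
for f in (2e9, 6e9, 10e9):
    print(f / 1e9, "GHz:", round(power_transmission(slab_abcd(eps_r, thickness, f)), 3))
```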

  8. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact cloud computing offers even more than this. With usage of virtual computing clusters a runtime environment for high performance computing can be efficiently implemented also in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  9. Using embedded computer-assisted instruction to teach science to students with Autism Spectrum Disorders

    Science.gov (United States)

    Smith, Bethany

    The need for promoting scientific literacy for all students has been the focus of recent education reform resulting in the rise of the Science Technology, Engineering, and Mathematics movement. For students with Autism Spectrum Disorders and intellectual disability, this need for scientific literacy is further complicated by the need for individualized instruction that is often required to teach new skills, especially when those skills are academic in nature. In order to address this need for specialized instruction, as well as scientific literacy, this study investigated the effects of embedded computer-assisted instruction to teach science terms and application of those terms to three middle school students with autism and intellectual disability. This study was implemented within an inclusive science classroom setting. A multiple probe across participants research design was used to examine the effectiveness of the intervention. Results of this study showed a functional relationship between the number of correct responses made during probe sessions and introduction of the intervention. Additionally, all three participants maintained the acquired science terms and applications over time and generalized these skills across materials and settings. The findings of this study suggest several implications for practice within inclusive settings and provide suggestions for future research investigating the effectiveness of computer-assisted instruction to teach academic skills to students with Autism Spectrum Disorders and intellectual disability.

  10. Comparison in performance of sediment microbial fuel cells according to depth of embedded anode.

    Science.gov (United States)

    An, Junyeong; Kim, Bongkyu; Nam, Jonghyeon; Ng, How Yong; Chang, In Seop

    2013-01-01

    Five rigid graphite plates were embedded in evenly divided sections of sediment, ranging from 2 cm (A1) to 10 cm (A5) below the top sediment layer. The maximum power and current of the MFCs increased in depth order; however, despite the increase in internal resistance, the power and current density of the A5 MFC were 2.2 and 3.5 times higher, respectively, than those of the A1 MFC. In addition, the anode open circuit potentials (OCPs) of the sediment microbial fuel cells (SMFCs) became more negative with sediment depth. Based on these results, it could then be concluded that as the anode-embedding depth increases, the anode environment becomes thermodynamically and kinetically more favorable to anodophiles or electrophiles. Therefore, the anode-embedding depth should be considered an important parameter that determines the performance of SMFCs, and we posit that the anode potential could be one indicator for selecting the anode-embedding depth. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Investigation of protein adsorption performance of Ni2+-attached diatomite particles embedded in composite monolithic cryogels.

    Science.gov (United States)

    Ünlü, Nuri; Ceylan, Şeyda; Erzengin, Mahmut; Odabaşı, Mehmet

    2011-08-01

    As a low-cost natural adsorbent, diatomite (DA) (2 μm) has several advantages, including high surface area, chemical reactivity, hydrophilicity and lack of toxicity. In this study, the protein adsorption performance of supermacroporous composite cryogels embedded with Ni(2+)-attached DA particles (Ni(2+)-ADAPs) was investigated. A supermacroporous poly(2-hydroxyethyl methacrylate) (PHEMA)-based monolithic composite cryogel column embedded with Ni(2+)-ADAPs was prepared by radical cryo-copolymerization of 2-hydroxyethyl methacrylate (HEMA) with N,N'-methylene-bis-acrylamide (MBAAm) as cross-linker, directly in a plastic syringe, for affinity purification of human serum albumin (HSA) both from aqueous solutions and from human serum. The chemical composition and surface area of DA were determined by XRF and the BET method, respectively. The composite cryogel was characterized by SEM. The effects of pH, embedded Ni(2+)-ADAP amount, initial HSA concentration, temperature and flow rate on adsorption were studied. The maximum amount of HSA adsorption from aqueous solution at pH 8.0 phosphate buffer was very high (485.15 mg/g DA). It was observed that HSA could be repeatedly adsorbed onto and desorbed from the embedded Ni(2+)-ADAPs in the poly(2-hydroxyethyl methacrylate) composite cryogel without significant loss of adsorption capacity. The efficiency of albumin adsorption from human serum was also investigated with SDS-PAGE analyses before and after adsorption. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Embedded Sensors and Controls to Improve Component Performance and Reliability: Conceptual Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Kisner, Roger A [ORNL; Melin, Alexander M [ORNL; Burress, Timothy A [ORNL; Fugate, David L [ORNL; Holcomb, David Eugene [ORNL; Wilgen, John B [ORNL; Miller, John M [ORNL; Wilson, Dane F [ORNL; Silva, Pamela C [ORNL; Whitlow, Lynsie J [ORNL; Peretz, Fred J [ORNL

    2012-10-01

    The overall project objective is to demonstrate improved reliability and increased performance made possible by deeply embedding instrumentation and controls (I&C) in nuclear power plant components. The project is employing a highly instrumented canned rotor, magnetic bearing, fluoride salt pump as its I&C technology demonstration vehicle. The project's focus is not primarily on pump design, but instead is on methods to deeply embed I&C within a pump system. However, because the I&C is intimately part of the basic millisecond-by-millisecond functioning of the pump, the I&C design cannot proceed in isolation from the other aspects of the pump. The pump will not function if the characteristics of the I&C are not embedded within the design because the I&C enables performance of the basic function rather than merely monitoring quasi-stable performance. Traditionally, I&C has been incorporated in nuclear power plant (NPP) components after their design is nearly complete; adequate performance was obtained through over-design. This report describes the progress and status of the project and provides a conceptual design overview for the embedded I&C pump.

  13. T and D-Bench--Innovative Combined Support for Education and Research in Computer Architecture and Embedded Systems

    Science.gov (United States)

    Soares, S. N.; Wagner, F. R.

    2011-01-01

    Teaching and Design Workbench (T&D-Bench) is a framework aimed at education and research in the areas of computer architecture and embedded systems. It includes a set of features not found in other educational environments. This set of features is the result of an original combination of design requirements for T&D-Bench: that the…

  14. Performance Analysis of Embedded Zero Tree and Set Partitioning in Hierarchical Tree

    OpenAIRE

    Pardeep Singh; Nivedita; Dinesh Gupta; Sugandha Sharma

    2012-01-01

    Compressing an image is significantly different from compressing raw binary data, so dedicated compression algorithms are used for images. The discrete wavelet transform has been widely used to compress images. Wavelet transforms are very powerful compared to other transforms because of their ability to describe any type of signal in both the time and frequency domains simultaneously. The proposed schemes investigate the performance evaluation of embedded zero tree and wavelet based compress...
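
    The transform-and-threshold idea underlying such embedded wavelet coders can be sketched with PyWavelets. The snippet below illustrates only the wavelet decomposition and a crude coefficient-significance step, not the actual zero-tree (EZW) or SPIHT bit-plane coding evaluated in the paper.

```python
import numpy as np
import pywt

# Toy image and a 2-level 2-D discrete wavelet transform.
img = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 5, 64)))
coeffs = pywt.wavedec2(img, wavelet="haar", level=2)

# Keep only the largest-magnitude coefficients (a crude stand-in for significance coding).
arr, slices = pywt.coeffs_to_array(coeffs)
threshold = np.percentile(np.abs(arr), 90)           # keep roughly the top 10%
arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)

recon = pywt.waverec2(pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2"), "haar")
print("kept coefficients:", np.count_nonzero(arr_sparse), "of", arr.size)
print("reconstruction RMSE:", float(np.sqrt(np.mean((img - recon) ** 2))))
```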

  15. Evaluation of Maintenance and EOL Operation Performance of Sensor-Embedded Laptops

    Directory of Open Access Journals (Sweden)

    Mehmet Talha Dulman

    2018-01-01

    Full Text Available Sensors are commonly employed to monitor products during their life cycles and to remotely and continuously track their usage patterns. Installing sensors into products can help generate useful data related to the conditions of products and their components, and this information can subsequently be used to inform EOL decision-making. As such, embedded sensors can enhance the performance of EOL product processing operations. The information collected by the sensors can also be used to estimate and predict product failures, thereby helping to improve maintenance operations. This paper describes a study in which system maintenance and EOL processes were combined and closed-loop supply chain systems were constructed to analyze the financial contribution that sensors can make to these procedures by using discrete event simulation to model and compare regular systems and sensor-embedded systems. The factors that had an impact on the performance measures, such as disassembly cost, maintenance cost, inspection cost, sales revenues, and profitability, were determined and a design of experiments study was carried out. The experiment results were compared, and pairwise t-tests were executed. The results reveal that sensor-embedded systems are significantly superior to regular systems in terms of the identified performance measures.

  16. Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing.

    Science.gov (United States)

    Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee

    2012-05-01

    Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for 3-D fluorescence visualization of the oral cavity in real-time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.

  17. Embedded Sensors and Controls to Improve Component Performance and Reliability Conceptual Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Kisner, R.; Melin, A.; Burress, T.; Fugate, D.; Holcomb, D.; Wilgen, J.; Miller, J.; Wilson, D.; Silva, P.; Whitlow, L.; Peretz, F.

    2012-09-15

    The objective of this project is to demonstrate improved reliability and increased performance made possible by deeply embedding instrumentation and controls (I&C) in nuclear power plant (NPP) components and systems. The project is employing a highly instrumented canned rotor, magnetic bearing, fluoride salt pump as its I&C technology demonstration platform. I&C is intimately part of the basic millisecond-by-millisecond functioning of the system; treating I&C as an integral part of the system design is innovative and will allow significant improvement in capabilities and performance. As systems become more complex and greater performance is required, traditional I&C design techniques become inadequate and more advanced I&C needs to be applied. New I&C techniques enable optimal and reliable performance and tolerance of noise and uncertainties in the system rather than merely monitoring quasistable performance. Traditionally, I&C has been incorporated in NPP components after the design is nearly complete; adequate performance was obtained through over-design. By incorporating I&C at the beginning of the design phase, the control system can provide superior performance and reliability and enable designs that are otherwise impossible. This report describes the progress and status of the project and provides a conceptual design overview for the platform to demonstrate the performance and reliability improvements enabled by advanced embedded I&C.

  18. Polarizable Density Embedding

    DEFF Research Database (Denmark)

    Reinholdt, Peter; Kongsted, Jacob; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    We analyze the performance of the polarizable density embedding (PDE) model-a new multiscale computational approach designed for prediction and rationalization of general molecular properties of large and complex systems. We showcase how the PDE model very effectively handles the use of large...

  19. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits.

    Directory of Open Access Journals (Sweden)

    Danielle S Bassett

    2010-04-01

    Full Text Available Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
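
    Rent's rule relates the number of external terminals T of a sub-module to its number of elements g as T ≈ k·g^p; the exponent p is what the authors estimate for brain networks. The sketch below shows how such an exponent is typically fitted from partition data on a log-log scale; the numbers are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic (elements, terminals) pairs for sub-partitions of a circuit or network.
# Real data would come from recursive partitioning of the netlist or connectome.
gates     = np.array([16, 32, 64, 128, 256, 512, 1024])
terminals = np.array([22, 37, 63, 108, 183, 310, 527])

# Rent's rule: T = k * g**p  ->  log T = log k + p * log g, so fit a line in log-log space.
p, log_k = np.polyfit(np.log(gates), np.log(terminals), deg=1)
print(f"Rent exponent p ~ {p:.2f}, Rent coefficient k ~ {np.exp(log_k):.2f}")
```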

  20. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  1. Performance Evaluation of UML2-Modeled Embedded Streaming Applications with System-Level Simulation

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2009-01-01

    Full Text Available This article presents an efficient method to capture abstract performance model of streaming data real-time embedded systems (RTESs. Unified Modeling Language version 2 (UML2 is used for the performance modeling and as a front-end for a tool framework that enables simulation-based performance evaluation and design-space exploration. The adopted application meta-model in UML resembles the Kahn Process Network (KPN model and it is targeted at simulation-based performance evaluation. The application workload modeling is done using UML2 activity diagrams, and platform is described with structural UML2 diagrams and model elements. These concepts are defined using a subset of the profile for Modeling and Analysis of Realtime and Embedded (MARTE systems from OMG and custom stereotype extensions. The goal of the performance modeling and simulation is to achieve early estimates on task response times, processing element, memory, and on-chip network utilizations, among other information that is used for design-space exploration. As a case study, a video codec application on multiple processors is modeled, evaluated, and explored. In comparison to related work, this is the first proposal that defines transformation between UML activity diagrams and streaming data application workload meta models and successfully adopts it for RTES performance evaluation.
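
    The Kahn-process-network style of application model referenced here can be illustrated with a minimal sketch: processes communicate only through FIFO channels and block on reads, which is what makes the workload amenable to simulation-based evaluation. This is a generic Python illustration of the modelling idea, not the UML2/MARTE tooling described in the article.

```python
from queue import Queue
from threading import Thread

def producer(out_q, n):
    # Emits n tokens; in a workload model each token would carry a cost annotation.
    for i in range(n):
        out_q.put(i)
    out_q.put(None)                       # end-of-stream marker

def filter_stage(in_q, out_q):
    # Blocking reads on the FIFO give KPN-style determinate behaviour.
    while (tok := in_q.get()) is not None:
        out_q.put(tok * 2)
    out_q.put(None)

def consumer(in_q, result):
    while (tok := in_q.get()) is not None:
        result.append(tok)

q1, q2, result = Queue(), Queue(), []
threads = [Thread(target=producer, args=(q1, 5)),
           Thread(target=filter_stage, args=(q1, q2)),
           Thread(target=consumer, args=(q2, result))]
for t in threads: t.start()
for t in threads: t.join()
print(result)                             # [0, 2, 4, 6, 8]
```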

  2. Embedded Sensors and Controls to Improve Component Performance and Reliability -- Bench-scale Testbed Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Melin, Alexander M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kisner, Roger A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Drira, Anis [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Reed, Frederick K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-09-01

    Embedded instrumentation and control systems that can operate in extreme environments are challenging due to restrictions on sensors and materials. As a part of the Department of Energy's Nuclear Energy Enabling Technology cross-cutting technology development program's Advanced Sensors and Instrumentation topic, this report details the design of a bench-scale embedded instrumentation and control testbed. The design goal of the bench-scale testbed is to build a re-configurable system that can rapidly deploy and test advanced control algorithms in a hardware-in-the-loop setup. The bench-scale testbed will be designed as a fluid pump analog that uses active magnetic bearings to support the shaft. The testbed represents an application that would improve the efficiency and performance of high temperature (700 C) pumps for liquid salt reactors that operate in an extreme environment and provide many engineering challenges that can be overcome with embedded instrumentation and control. This report will give details of the mechanical design, electromagnetic design, geometry optimization, power electronics design, and initial control system design.

  3. Network survivability performance (computer diskette)

    Science.gov (United States)

    1993-11-01

    File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.

  4. Parametric analysis of electromechanical and fatigue performance of total knee replacement bearing with embedded piezoelectric transducers

    Science.gov (United States)

    Safaei, Mohsen; Meneghini, R. Michael; Anton, Steven R.

    2017-09-01

    Total knee arthroplasty is a common procedure in the United States; it has been estimated that about 4 million people are currently living with primary knee replacement in this country. Despite huge improvements in material properties, implant design, and surgical techniques, some implants fail a few years after surgery. A lack of information about in vivo kinetics of the knee prevents the establishment of a correlated intra- and postoperative loading pattern in knee implants. In this study, a conceptual design of an ultra high molecular weight (UHMW) knee bearing with embedded piezoelectric transducers is proposed, which is able to measure the reaction forces from knee motion as well as harvest energy to power embedded electronics. A simplified geometry consisting of a disk of UHMW with a single embedded piezoelectric ceramic is used in this work to study the general parametric trends of an instrumented knee bearing. A combined finite element and electromechanical modeling framework is employed to investigate the fatigue behavior of the instrumented bearing and the electromechanical performance of the embedded piezoelectric. The model is validated through experimental testing and utilized for further parametric studies. Parametric studies consist of the investigation of the effects of several dimensional and piezoelectric material parameters on the durability of the bearing and electrical output of the transducers. Among all the parameters, it is shown that adding large fillet radii results in noticeable improvement in the fatigue life of the bearing. Additionally, the design is highly sensitive to the depth of piezoelectric pocket. Finally, using PZT-5H piezoceramics, higher voltage and slightly enhanced fatigue life is achieved.

  5. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  6. Research of real-time performance based on VxWorks embedded system

    International Nuclear Information System (INIS)

    Liu Daming; Li Haiming

    2011-01-01

    In the research on the mechanism and heating efficiency of Ion Cyclotron Range of Frequency (ICRF) heating, a data acquisition system with high real-time performance is needed. Real-time testing by means of the system logic analyzer, SPY, and other relevant software on the VxWorks embedded operating system gives real-time data about the system. The achieved real-time level balances busy time and processor idle time, supports real-time data acquisition, minimizes external interference with the system, and ensures that the system works within its own scheduling trajectory. The interrupt switching time and task context switching time meet the system requirements. (authors)

  7. US QCD computational performance studies with PERI

    International Nuclear Information System (INIS)

    Zhang, Y; Fowler, R; Huck, K; Malony, A; Porterfield, A; Reed, D; Shende, S; Taylor, V; Wu, X

    2007-01-01

    We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of the resources, it is crucial to understand the characteristics of relevant scientific applications and the systems these applications are running on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods are driving the evolution of performance tools

  8. Atypical neural substrates of Embedded Figures Task performance in children with Autism Spectrum Disorder.

    Science.gov (United States)

    Lee, Philip S; Foss-Feig, Jennifer; Henderson, Joshua G; Kenworthy, Lauren E; Gilotty, Lisa; Gaillard, William D; Vaidya, Chandan J

    2007-10-15

    Superior performance on the Embedded Figures Task (EFT) has been attributed to weak central coherence in perceptual processing in Autism Spectrum Disorder (ASD). The present study used functional magnetic resonance imaging to examine the neural basis of EFT performance in 7- to 12-year-old ASD children and age- and IQ-matched controls. ASD children activated only a subset of the distributed network of regions activated in controls. In frontal cortex, control children activated left dorsolateral, medial and dorsal premotor regions whereas ASD children only activated the dorsal premotor region. In parietal and occipital cortices, activation was bilateral in control children but unilateral (left superior parietal and right occipital) in ASD children. Further, extensive bilateral ventral temporal activation was observed in control, but not ASD children. ASD children performed the EFT at the same level as controls but with reduced cortical involvement, suggesting that disembedded visual processing is accomplished parsimoniously by ASD relative to typically developing brains.

  9. OPERATIONAL PERFORMANCES DEMONSTRATION OF POLYMER-CERAMIC EMBEDDED CAPACITORS FOR MMIC APPLICATIONS

    OpenAIRE

    Bord-Majek , Isabelle; Kertesz , Philippe; Mazeau , Julie; Caban-Chastas , Daniel; Levrier , Bruno; Bechou , Laurent; Ousten , Yves

    2011-01-01

    International audience; Embedded passives are becoming increasingly important for the manufacture of highly integrated electronic boards and packages. The need for embedded passives emerges from the growing consumer demand for product miniaturization, which requires smaller components and space-efficient packaging. This can be realized by replacing discrete components, which demand a larger volume than embedded passives. Embedded passives have already been investigated in the last few years. Ho...

  10. A State-Based Modeling Approach for Efficient Performance Evaluation of Embedded System Architectures at Transaction Level

    Directory of Open Access Journals (Sweden)

    Anthony Barreteau

    2012-01-01

    Full Text Available Abstract models are necessary to assist system architects in the evaluation process of hardware/software architectures and to cope with the still increasing complexity of embedded systems. Efficient methods are required to create reliable models of system architectures and to allow early performance evaluation and fast exploration of the design space. In this paper, we present a specific transaction level modeling approach for performance evaluation of hardware/software architectures. This approach relies on a generic execution model that requires light modeling effort. Created models are used to evaluate by simulation the expected processing and memory resources according to various architectures. The proposed execution model relies on a specific computation method defined to improve the simulation speed of transaction level models. The benefits of the proposed approach are highlighted through two case studies. The first case study is a didactic example illustrating the modeling approach. In this example, a simulation speed-up by a factor of 7.62 is achieved by using the proposed computation method. The second case study concerns the analysis of a communication receiver supporting part of the physical layer of the LTE protocol. In this case study, architecture exploration is conducted in order to improve the allocation of processing functions.

  11. Fabrication of highly dispersed ZnO nanoparticles embedded in graphene nanosheets for high performance supercapacitors

    International Nuclear Information System (INIS)

    Fang, Linxia; Zhang, Baoliang; Li, Wei; Zhang, Jizhong; Huang, Kejing; Zhang, Qiuyu

    2014-01-01

    We report a facile strategy to synthesize ZnO-graphene nanocomposites as an advanced electrode material for high-performance supercapacitors. The ZnO-graphene nanocomposites have been fabricated via a facile, low-temperature in situ wet chemistry process. During this process, highly dispersed ZnO nanoparticles are embedded in graphene nanosheets, leading to sandwich-structured ZnO-graphene nanocomposites. Thus, intimate interfacial contact between the ZnO nanoparticles and the graphene nanosheets is achieved, which facilitates electrochemical activity and enhances the electrochemical properties due to fast electron transfer. The as-prepared ZnO-graphene nanocomposites exhibit a maximum specific capacitance of 786 F g−1 and excellent cycle life with capacity retention of about 92% after 500 cycles. This facile design and rational synthesis offers an effective strategy to enhance the electrochemical performance of supercapacitors and shows promising potential for large-scale application in energy storage.

  12. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    Science.gov (United States)

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project for an electrical stimulator aimed at motor dysfunction after stroke is proposed in this paper. Based on neurophysiological biofeedback, this system, using an ARM9 S3C2440 as the core processor, integrates the collection and display of surface electromyography (sEMG) signals, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding a Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system worked well.

  13. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
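
    A minimal sketch of the grouping idea in this record, assuming the call-chain addresses for each thread have already been gathered; it only illustrates bucketing threads by identical call chains and is not the disclosed implementation:

```python
# Group threads by the tuple of calling-instruction addresses on their stacks,
# so that threads stuck in the same place can be spotted at a glance.
from collections import defaultdict

def group_threads(call_addresses):
    """call_addresses: dict mapping thread id -> tuple of instruction addresses."""
    groups = defaultdict(list)
    for tid, addrs in call_addresses.items():
        groups[tuple(addrs)].append(tid)
    return groups

# Hypothetical data: threads 0-2 share one call chain, thread 3 diverges.
stacks = {0: (0x4006f0, 0x400a10), 1: (0x4006f0, 0x400a10),
          2: (0x4006f0, 0x400a10), 3: (0x4006f0, 0x400b44)}
for chain, tids in group_threads(stacks).items():
    print([hex(a) for a in chain], "->", tids)
```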

  14. Thermal performance of a PCB embedded pulsating heat pipe for power electronics applications

    International Nuclear Information System (INIS)

    Kearney, Daniel J.; Suleman, Omar; Griffin, Justin; Mavrakis, Georgios

    2016-01-01

    Highlights: • Planar, compact PCB embedded pulsating heat pipe for heat spreading applications. • Embedded heat pipe operates at sub-ambient pressure with environmentally compatible fluids. • Range of optimum operating conditions, orientations and fill ratios identified. - Abstract: Low voltage power electronics applications (<1.2 kV) are pushing the design envelope towards increased functionality, better reliability, low profile and reduced cost. One packaging method to meet these constraints is the integration of active power electronic devices into the printed circuit board, improving electrical and thermal performance. This development requires a reliable passive thermal management solution to mitigate hot spots due to the increased heat flux density. To this end, a 44 channel open looped pulsating heat pipe (OL-PHP) is experimentally investigated for two independent dielectric working fluids – Novec™ 649 and Novec™ 774 – chosen for their lower operating pressure and low global warming potential compared to traditional two-phase coolants. The OL-PHP is investigated in the vertical (90°) orientation with fill ratios ranging from 0.30 to 0.70. The results highlight the steady state operating conditions for each working fluid with instantaneous plots of pressure, temperature, and thermal resistance; the minimum potential bulk thermal resistance for each fill ratio; and the effective thermal conductivity achievable for the OL-PHP.
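
    For reference, the bulk thermal resistance reported in experiments of this kind is simply the evaporator-to-condenser temperature difference divided by the applied heat load; the values below are illustrative, not measurements from this study:

```python
# Bulk thermal resistance of a heat pipe: R_th = (T_evaporator - T_condenser) / Q.
# The temperatures and heat load are assumed values for illustration only.
def thermal_resistance(t_evap_C, t_cond_C, heat_load_W):
    return (t_evap_C - t_cond_C) / heat_load_W

print(thermal_resistance(62.0, 38.0, 60.0))   # 0.4 K/W for these assumed values
```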

  15. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  16. Computing Best and Worst Shortcuts of Graphs Embedded in Metric Spaces

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian; Luo, Jun

    2008-01-01

    Given a graph embedded in a metric space, its dilation is the maximum over all distinct pairs of vertices of the ratio between their distance in the graph and the metric distance between them. Given such a graph G with n vertices and m edges and consisting of at most two connected components, we ...
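
    The dilation defined above can be computed directly from its definition; the sketch below uses a plain Floyd-Warshall shortest-path step for clarity (the paper's algorithms are far more efficient), with illustrative coordinates and edges:

```python
# Dilation of a graph embedded in the plane: the maximum, over all distinct
# vertex pairs, of graph distance divided by Euclidean distance.
import math
from itertools import combinations

def dilation(points, edges):
    n = len(points)
    dist = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for u, v in edges:
        d = math.dist(points[u], points[v])
        dist[u][v] = dist[v][u] = min(dist[u][v], d)
    # Floyd-Warshall all-pairs shortest paths along graph edges.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return max(dist[u][v] / math.dist(points[u], points[v])
               for u, v in combinations(range(n), 2))

# Example: a path on three points; the detour between the endpoints dominates.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
print(dilation(pts, [(0, 1), (1, 2)]))   # ~1.414
```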

  17. A Micro-Computed Tomography Technique to Study the Quality of Fibre Optics Embedded in Composite Materials

    Directory of Open Access Journals (Sweden)

    Gabriele Chiesura

    2015-05-01

    Full Text Available The quality of embedment of optical fibre sensors in carbon fibre-reinforced polymers plays an important role in the resultant properties of the composite, as well as in the correct monitoring of the structure. Therefore, the availability of a tool able to check the optical fibre sensor-composite interaction becomes essential. High-resolution 3D X-ray Micro-Computed Tomography, or Micro-CT, is a relatively new non-destructive inspection technique which enables investigation of the internal structure of a sample without actually compromising its integrity. In this work, the feasibility of inspecting the position, the orientation and, more generally, the quality of the embedment of an optical fibre sensor in a carbon fibre reinforced laminate at unit cell level has been proven.

  18. An Adaptive Middleware for Improved Computational Performance

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal

    The performance improvements in computer systems over the past 60 years have been fueled by an exponential increase in energy efficiency. In recent years, the phenomenon known as the end of Dennard's scaling has slowed energy efficiency improvements, but improving computer energy efficiency is more important now than ever. Traditionally, most improvements in computer energy efficiency have come from improvements in lithography (the ability to produce smaller transistors) and computer architecture (the ability to apply those transistors efficiently). Since the end of scaling, we have seen... We are improving computational performance by exploiting modern hardware features, such as dynamic voltage-frequency scaling and transactional memory. Adapting software is an iterative process, requiring that we continually revisit it to meet new requirements or realities; a time consuming process...

  19. Exercise Performance Measurement with Smartphone Embedded Sensor for Well-Being Management

    Directory of Open Access Journals (Sweden)

    Chung-Tse Liu

    2016-10-01

    Full Text Available Regular physical activity reduces the risk of many diseases and improves physical and mental health. However, physical inactivity is widespread globally. Improving physical activity levels is a global concern in well-being management. Exercise performance measurement systems have the potential to improve physical activity by providing feedback and motivation to users. We propose an exercise performance measurement system for well-being management that is based on the accumulated activity effective index (AAEI) and incorporates a smartphone-embedded sensor. The proposed system generates a numeric index that is based on users' exercise performance: their level of physical activity and number of days spent exercising. The AAEI presents a clear number that can serve as a useful feedback and goal-setting tool. We implemented the exercise performance measurement system by using a smartphone and conducted experiments to assess the feasibility of the system and investigate the user experience. We recruited 17 participants to validate the feasibility of the measurement system and a total of 35 participants to investigate the user experience. The exercise performance measurement system showed an overall precision of 88% in activity level estimation. Users provided positive feedback about their experience with the exercise performance measurement system. The proposed system is feasible and has a positive effect on well-being management.
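
    The exact AAEI formula is not reproduced in this record, so the following is a purely hypothetical stand-in that only illustrates the idea of a single feedback number growing with both the number of active days and the activity level reached on each day; the published AAEI definition should be used in practice:

```python
# Hypothetical activity index (NOT the published AAEI): combines how many days
# were active with how close each day came to a target amount of exercise.
def toy_activity_index(daily_minutes, target_minutes=30):
    days_active = sum(1 for m in daily_minutes if m > 0)
    attainment = sum(min(m / target_minutes, 1.0) for m in daily_minutes)
    return round(100.0 * attainment * days_active / (len(daily_minutes) ** 2), 1)

week = [35, 0, 20, 45, 0, 30, 10]   # minutes of exercise per day (made-up values)
print(toy_activity_index(week))
```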

  20. High-performance zig-zag and meander inductors embedded in ferrite material

    International Nuclear Information System (INIS)

    Stojanovic, Goran; Damnjanovic, Mirjana; Desnica, Vladan; Zivanov, Ljiljana; Raghavendra, Ramesh; Bellew, Pat; Mcloughlin, Neil

    2006-01-01

    This paper describes the design, modeling, simulation and fabrication of zig-zag and meander inductors embedded in low- or high-permeability soft ferrite material. These microinductors have been developed with ceramic coprocessing technology. We compare the electrical properties of the zig-zag and meander inductor structures installed as surface-mount devices. The equivalent model of the new structures is presented, suitable for design, circuit simulations and prediction of the performance of the proposed inductors. The relatively high impedance values allow these microinductors to be used in high-frequency suppressors. The components were tested in the frequency range of 1 MHz-3 GHz using an Agilent 4287A RF LCR meter. The measurements confirm the validity of the analytical model.

  1. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  2. Selective, Embedded, Just-In-Time Specialization (SEJITS): Portable Parallel Performance from Sequential, Productive, Embedded Domain-Specific Languages

    Science.gov (United States)

    2012-12-01

    ... approaches to speaker diarization become fast enough to obviate further research in offline approaches. ... Diarizer application performance is reported as a multiple of real time; "100×" means that 1 second of audio can be processed in 1/100 of a second.

  3. OpenVX-based Python Framework for real-time cross platform acceleration of embedded computer vision applications

    Directory of Open Access Journals (Sweden)

    Ori Heimlich

    2016-11-01

    Full Text Available Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, in an attempt to provide both system-level and kernel-level optimization to vision applications. With OpenVX, vision processing is modeled with coarse-grained data flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented, imperative and functional programming, nor does it have runtime or type-checking. Here we present a Python-based full implementation of OpenVX, which eliminates much of the discrepancy between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. Demonstrations include static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.
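
    To make the graph-based execution model concrete, here is a generic, hypothetical sketch of declaring a small vision pipeline as a data-flow graph and executing it later; it is not the API of the cited PythonOpenVX project:

```python
# Generic sketch of the coarse-grained data-flow idea behind OpenVX: a graph of
# vision kernels is declared first and executed later, so an implementer could
# reorder or fuse nodes. Kernels below are trivial placeholders.
class Graph:
    def __init__(self):
        self.nodes = []          # (function, input keys, output key)

    def add(self, fn, inputs, output):
        self.nodes.append((fn, inputs, output))
        return self

    def run(self, **data):
        # A real implementation would verify and optimise the graph here.
        for fn, inputs, output in self.nodes:
            data[output] = fn(*(data[k] for k in inputs))
        return data

blur = lambda img: [[v // 2 for v in row] for row in img]         # placeholder kernel
threshold = lambda img: [[int(v > 10) for v in row] for row in img]

g = Graph().add(blur, ["src"], "smoothed").add(threshold, ["smoothed"], "mask")
print(g.run(src=[[40, 4], [22, 8]])["mask"])    # [[1, 0], [1, 0]]
```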

  4. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network. (Army High Performance Computing Research Center, www.ahpcrc.org)

  5. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  6. Assessment without Testing: Using Performance Measures Embedded in a Technology-Based Instructional Program as Indicators of Reading Ability

    Science.gov (United States)

    Mitchell, Alison; Baron, Lauren; Macaruso, Paul

    2018-01-01

    Screening and monitoring student reading progress can be costly and time consuming. Assessment embedded within the context of online instructional programs can capture ongoing student performance data while limiting testing time outside of instruction. This paper presents two studies that examined the validity of using performance measures from a…

  7. Synergistically Enhanced Performance of Ultrathin Nanostructured Silicon Solar Cells Embedded in Plasmonically Assisted, Multispectral Luminescent Waveguides

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sung-Min; Dhar, Purnim; Chen, Huandong; Montenegro, Angelo; Liaw, Lauren; Kang, Dongseok; Gai, Boju; Benderskii, Alexander V.; Yoon, Jongseung

    2017-04-12

    Ultrathin silicon solar cells fabricated by anisotropic wet chemical etching of single-crystalline wafer materials represent an attractive materials platform that could provide many advantages for realizing high-performance, low-cost photovoltaics. However, their intrinsically limited photovoltaic performance arising from insufficient absorption of low-energy photons demands careful design of light management to maximize the efficiency and preserve the cost-effectiveness of solar cells. Herein we present an integrated flexible solar module of ultrathin, nanostructured silicon solar cells capable of simultaneously exploiting spectral upconversion and downshifting in conjunction with multispectral luminescent waveguides and a nanostructured plasmonic reflector to compensate for their weak optical absorption and enhance their performance. The 8 μm-thick silicon solar cells incorporating a hexagonally periodic nanostructured surface relief are surface-embedded in layered multispectral luminescent media containing organic dyes and NaYF4:Yb3+,Er3+ nanocrystals as downshifting and upconverting luminophores, respectively, via printing-enabled deterministic materials assembly. The ultrathin nanostructured silicon microcells in the composite luminescent waveguide exhibit strongly augmented photocurrent (~40.1 mA/cm2) and energy conversion efficiency (~12.8%) compared with devices containing only a single type of luminescent species, owing to the synergistic contributions from optical downshifting, plasmonically enhanced upconversion, and waveguided photon flux for optical concentration, where the short-circuit current density increased by ~13.6 mA/cm2 compared with microcells in a nonluminescent medium on a plain silver reflector under confined illumination.

  8. Computer technique for evaluating collimator performance

    International Nuclear Information System (INIS)

    Rollo, F.D.

    1975-01-01

    A computer program has been developed to theoretically evaluate the overall performance of collimators used with radioisotope scanners and γ cameras. The first step of the program involves the determination of the line spread function (LSF) and geometrical efficiency from the fundamental parameters of the collimator being evaluated. The working equations can be applied to any plane of interest. The resulting LSF is applied to subroutine computer programs which compute corresponding modulation transfer function and contrast efficiency functions. The latter function is then combined with appropriate geometrical efficiency data to determine the performance index function. The overall computer program allows one to predict from the physical parameters of the collimator alone how well the collimator will reproduce various sized spherical voids of activity in the image plane. The collimator performance program can be used to compare the performance of various collimator types, to study the effects of source depth on collimator performance, and to assist in the design of collimators. The theory of the collimator performance equation is discussed, a comparison between the experimental and theoretical LSF values is made, and examples of the application of the technique are presented
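
    One step described above, deriving the modulation transfer function from the line spread function, can be illustrated numerically: the MTF is the magnitude of the Fourier transform of the LSF, normalised to unity at zero spatial frequency. The Gaussian LSF and its width below are assumptions made only for illustration:

```python
# MTF as the normalised magnitude of the Fourier transform of the LSF.
import numpy as np

x = np.linspace(-50.0, 50.0, 1001)            # mm across the image plane
fwhm = 10.0                                   # assumed collimator resolution, mm
sigma = fwhm / 2.355
lsf = np.exp(-x**2 / (2.0 * sigma**2))        # assumed Gaussian line spread function

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                 # normalise to unity at zero frequency
freq = np.fft.rfftfreq(x.size, d=x[1] - x[0]) # spatial frequency, cycles per mm

print(f"MTF at 0.05 cycles/mm: {np.interp(0.05, freq, mtf):.3f}")
```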

  9. Misleading Performance Claims in Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.

  10. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  11. Highly conductive porous Na-embedded carbon nanowalls for high-performance capacitive deionization

    Science.gov (United States)

    Chang, Liang; Hu, Yun Hang

    2018-05-01

    Highly conductive porous Na-embedded carbon nanowalls (Na@C), which were recently invented, have exhibited excellent performance for dye-sensitized solar cells and electric double-layer capacitors. In this work, Na@C was demonstrated as an excellent electrode material for capacitive deionization (CDI). In a three-electrode configuration system, the specific capacitance of the Na@C electrodes can reach 306.4 F/g at a current density of 0.2 A/g in 1 M NaCl, which is higher than that (235.2 F/g) of activated carbon (AC) electrodes. Furthermore, a high electrosorption capacity of 8.75 mg g-1 in 100 mg/L NaCl was obtained with the Na@C electrodes in a batch-mode capacitive deionization cell. It exceeds the electrosorption capacity (4.08 mg g-1) of AC electrodes. The Na@C electrode also showed promising cycle stability. The excellent performance of the Na@C electrode for capacitive deionization can be attributed to its high electrical conductivity and large accessible surface area.
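
    The quoted specific capacitance follows the standard galvanostatic relation C = I·Δt/(m·ΔV). The sketch below shows the arithmetic; only the 0.2 A/g current density comes from the record, while the discharge time and voltage window are assumed values:

```python
# Specific capacitance from a galvanostatic charge-discharge test:
# C (F/g) = current density (A/g) * discharge time (s) / voltage window (V).
def specific_capacitance(current_density_A_per_g, discharge_time_s, voltage_window_V):
    return current_density_A_per_g * discharge_time_s / voltage_window_V

# 0.2 A/g with a hypothetical 1532 s discharge over an assumed 1.0 V window.
print(specific_capacitance(0.2, 1532.0, 1.0))   # ~306.4 F/g
```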

  12. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction.

  13. Performative Computation-aided Design Optimization

    Directory of Open Access Journals (Sweden)

    Ming Tang

    2012-12-01

    Full Text Available This article discusses a collaborative research and teaching project between the University of Cincinnati, Perkins+Will’s Tech Lab, and the University of North Carolina Greensboro. The primary investigation focuses on the simulation, optimization, and generation of architectural designs using performance-based computational design approaches. The projects examine various design methods, including relationships between building form, performance and the use of proprietary software tools for parametric design.

  14. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  15. Combining UML2 Application and SystemC Platform Modelling for Performance Evaluation of Real-Time Embedded Systems

    Directory of Open Access Journals (Sweden)

    Qu Yang

    2008-01-01

    Full Text Available Future mobile devices will be based on heterogeneous multiprocessing platforms accommodating several stand-alone applications. The network-on-chip communication and device networking combine the design challenges of conventional distributed systems and resource-constrained real-time embedded systems. Interoperable design space exploration for both the application and platform development is required. The application designer needs abstract platform models to rapidly check the feasibility of a new feature or application. The platform designer needs abstract application models for defining platform computation and communication capacities. We propose a layered UML application/workload and SystemC platform modelling approach that allows the application and platform to be modelled at several levels of abstraction, which enables early performance evaluation of the resulting system. The overall approach has been experimented with in a mobile video player case study, while different load extraction methods have previously been validated by applying them to MPEG-4 encoder, Quake2 3D game, and MP3 decoder case studies.

  16. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  17. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  18. Solvation Effects on Electronic Transitions: Exploring the Performance of Advanced Solvent Potentials in Polarizable Embedding Calculations

    DEFF Research Database (Denmark)

    Schwabe, Tobias; Olsen, Magnus; Sneskov, Kristian

    2011-01-01

    The polarizable embedding (PE) approach, which combines quantum mechanics (QM) and molecular mechanics (MM), is applied to predict solvatochromic effects on excitation energies of several representative molecules in aqueous, methanol, acetonitrile, and carbon tetrachloride solutions. Good agreement...

  19. Performance evaluation of multi-channel wireless mesh networks with embedded systems.

    Science.gov (United States)

    Lam, Jun Huy; Lee, Sang-Gon; Tan, Whye Kit

    2012-01-01

    Many commercial wireless mesh network (WMN) products are available in the marketplace with their own proprietary standards, but interoperability among the different vendors is not possible. Open source communities have their own WMN implementations in accordance with the IEEE 802.11s draft standard, the Linux open80211s project and the FreeBSD WMN implementation. While some studies have focused on test beds of WMNs based on the open80211s project, none are based on FreeBSD. In this paper, we built an embedded system using the FreeBSD WMN implementation that utilizes two channels and evaluated its performance. This implementation allows legacy systems to connect to the WMN independent of the type of platform and distributes the load between the two non-overlapping channels. One channel is used for the backhaul connection and the other is used to connect stations to the wireless mesh network. By using the power-efficient 802.11 technology, this device can also be used as a gateway for a wireless sensor network (WSN).

  20. HPCToolkit: performance tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M [Department of Computer Science, Rice University, Houston, TX 77005 (United States)

    2008-07-15

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei.

  1. HPCToolkit: performance tools for scientific computing

    International Nuclear Information System (INIS)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M

    2008-01-01

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei

  2. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
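
    As an example of the kind of equation such a book deals with, the sketch below evaluates the classic M/M/1 queue, for which the mean response time equals the service time divided by one minus the utilisation; the workload numbers are invented:

```python
# Textbook M/M/1 open-queue model: utilisation U = arrival_rate * service_time,
# mean response time R = service_time / (1 - U). Values are illustrative only.
def mm1_response_time(arrival_rate, service_time):
    utilisation = arrival_rate * service_time
    if utilisation >= 1.0:
        raise ValueError("queue is unstable (utilisation >= 1)")
    return service_time / (1.0 - utilisation)

for lam in (10.0, 50.0, 90.0):                  # requests per second
    print(lam, mm1_response_time(lam, 0.01))    # 10 ms average service time
```

    The example shows the characteristic non-linear growth of response time as the arrival rate approaches the service capacity.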

  3. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  4. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  5. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe's leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  6. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    DOE Order 5637.1, "Classified Computer Security," requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, we have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system. 1 tab

  7. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    This paper reports on DOE Order 5637.1, Classified Computer Security, which requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, the authors have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system

  8. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image block parts can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing calculations that take several days to several hours. Modern trends in computer technology show an increase of CPU cores in workstations, speed increases in local networks, and as a result a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
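
    The embarrassingly parallel pattern described above can be sketched with a worker pool: each independent tile or image block is handed to a separate process. The file names and the per-tile work below are placeholders:

```python
# Farm independent image tiles out to worker processes.
from multiprocessing import Pool

def process_tile(tile_path):
    # Stand-in for tie-point measurement, DTM calculation, orthophoto
    # construction, etc. applied to one independent piece of input.
    return f"processed {tile_path}"

if __name__ == "__main__":
    tiles = [f"block_{i:04d}.tif" for i in range(16)]   # hypothetical inputs
    with Pool(processes=4) as pool:
        for result in pool.imap_unordered(process_tile, tiles):
            print(result)
```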

  9. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface
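
    A hedged sketch of the script-driven idea: collected metrics are appended to a relational table instead of RRD files. SQLite stands in for MySQL so the snippet stays self-contained, and the table layout is hypothetical rather than the schema used at SLAC:

```python
# Periodically collected cluster metrics written to a relational table.
# SQLite is used only to keep the sketch self-contained; the hypothetical
# schema (host, metric, value, timestamp) is not the one from the paper.
import sqlite3
import time

conn = sqlite3.connect("ganglia_metrics.db")
conn.execute("""CREATE TABLE IF NOT EXISTS metrics (
                  host TEXT, metric TEXT, value REAL, ts INTEGER)""")

def record(host, metric, value):
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                 (host, metric, value, int(time.time())))
    conn.commit()

record("node01", "load_one", 0.42)      # hypothetical sample
print(conn.execute("SELECT * FROM metrics").fetchall())
```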

  10. Cloud Computing for Maintenance Performance Improvement

    OpenAIRE

    Kour, Ravdeep; Karim, Ramin; Parida, Aditya

    2013-01-01

    Cloud Computing is an emerging research area. It can be utilised for acquiring effective and efficient information logistics. This paper uses cloud-based technology for the establishment of information logistics for a railway system, which requires information based on data from different data sources (e.g. railway maintenance, railway operation, and railway business data). In order to improve the performance of the maintenance process, relevant data from various sources need to be acquired, f...

  11. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  12. Study of interface influence on bending performance of CFRP with embedded optical fibers

    Science.gov (United States)

    Liu, Rong-mei; Liang, Da-kai

    2008-11-01

    Previous studies have shown that the bending strength of a composite can be affected by embedded optical fibers. This paper studies the interface strength between the embedded optical fiber and the matrix. Based on single fiber pull-out tests, the interfacial shear strength between the coating and the cladding is the weakest; for the optical fiber used in this study it is close to 0.8 MPa. In order to study the interfacial effect on the bending properties of a generic smart structure, quasi-isotropic composite laminates were produced from Toray T300C/epoxy prepreg. Optical fibers were embedded within plies of different orientations, with all optical fibers aligned in the same direction. Accordingly, five different types of plates were produced, and impact tests were carried out on the five plate types. It is shown that when the fiber is embedded in the upper layer, the bending strength drops the most: the bending normal stress in the material reaches its maximum there, as does the normal stress applied to the optical fiber near the surface. Failure is therefore most likely to originate at the interface between the coating and the cladding, and the ultimate strength of the smart structure is most strongly affected.

  13. Reconfiguration of Computation and Communication Resources in Multi-Core Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pezzarossa, Luca

    This thesis investigates the use of reconfiguration in the context of multi-core real-time systems targeting embedded applications. We address the reconfiguration of both the computation and the communication resources of a multi-core platform. Our approach is to associate reconfiguration with operational mode changes where the system, during normal operation, changes a subset of the executing tasks to adapt its behaviour to new conditions. Reconfiguration is therefore used during a mode change to modify the real-time guaranteed services of the communication channels between the tasks that are affected by the reconfiguration ... provided by the communication fabric between the cores of the platform. To support this, we present a new network-on-chip architecture, named Argo 2, that allows instantaneous and time-predictable reconfiguration of the communication channels. Our reconfiguration-capable architecture is prototyped using the existing time...

  14. Evaluating the Performance of a Novel Embedded Closed-loop System.

    Science.gov (United States)

    Leelarathna, Lalantha; Thabit, Hood; Allen, Janet M; Nodale, Marianna; Wilinska, Malgorzata E; Powell, Kevin; Lane, Stephen; Evans, Mark L; Hovorka, Roman

    2014-03-01

    The objective was to assess the reliability of a novel automated closed-loop glucose control system developed within the AP@home consortium in adults with type 1 diabetes. Eight adults with type 1 diabetes on insulin pump therapy (3 men; ages 40.5 ± 14.3 years; HbA1c 8.2 ± 0.8%) participated in an open-label, single-center, single-arm, 12-hour overnight study performed at the clinical research facility. A standardized evening meal (80 g CHO) accompanied by prandial insulin boluses was given at 19:00, followed by an optional snack of 15 g at 22:00 without an insulin bolus. Automated closed-loop glucose control was started at 19:00 and continued until 07:00 the next day. Basal insulin delivery (Accu-Chek Spirit, Roche) was automatically adjusted by the Cambridge model predictive control algorithm, running on a purpose-built embedded device, based on real-time continuous glucose monitor readings (Dexcom G4 Platinum). The closed-loop system was operational as intended over 99% of the time. Overnight plasma glucose levels (22:00 to 07:00) were within the target range (3.9 to 8.0 mmol/l) for 75.4% (37.5, 92.9) of the time, with no time spent in hypoglycemia. The time spent in the target glucose range overnight was comparable to results of previously published studies. Further developments to miniaturize the system for home studies are warranted. © 2014 Diabetes Technology Society.
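
    The headline outcome, the percentage of time in the 3.9-8.0 mmol/l target range, is a simple proportion of sensor readings; the sketch below illustrates the calculation with made-up values, not study data:

```python
# Percentage of glucose readings inside the 3.9-8.0 mmol/l target range.
def time_in_range(readings_mmol_l, low=3.9, high=8.0):
    in_range = sum(1 for g in readings_mmol_l if low <= g <= high)
    return 100.0 * in_range / len(readings_mmol_l)

# Hypothetical overnight readings at regular intervals (not study data).
overnight = [9.1, 8.4, 7.6, 7.0, 6.2, 5.8, 5.5, 6.1, 6.6, 7.2, 7.9, 8.3]
print(f"{time_in_range(overnight):.1f}% of readings in target")   # 75.0%
```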

  15. High-performance liquid chromatography separation of unsaturated organic compounds by a monolithic silica column embedded with silver nanoparticles.

    Science.gov (United States)

    Zhu, Yang; Morisato, Kei; Hasegawa, George; Moitra, Nirmalya; Kiyomura, Tsutomu; Kurata, Hiroki; Kanamori, Kazuyoshi; Nakanishi, Kazuki

    2015-08-01

    The optimization of a porous structure to ensure good separation performance is always a significant issue in high-performance liquid chromatography column design. Recently we reported the homogeneous embedment of Ag nanoparticles in a periodic mesoporous silica monolith and the application of such an Ag nanoparticle-embedded silica monolith for the high-performance liquid chromatography separation of polyaromatic hydrocarbons. However, the separation performance remains to be improved, and the retention mechanism as compared with the Ag ion high-performance liquid chromatography technique still needs to be clarified. In this research, Ag nanoparticles were introduced into a macro/mesoporous silica monolith with optimized pore parameters for high-performance liquid chromatography separations. Baseline separation of benzene, naphthalene, anthracene, and pyrene was achieved with a theoretical plate number of 36,000 m(-1) for the analyte naphthalene. Its separation function was further extended to cis/trans isomers of aromatic compounds, where cis/trans stilbenes were chosen as a benchmark. Good separation of cis/trans-stilbene with a separation factor of 7 and a theoretical plate number of 76,000 m(-1) for cis-stilbene was obtained. The trans isomer, however, is retained more strongly, which contradicts the long-established retention rule of Ag ion chromatography. Such behavior of Ag nanoparticles embedded in a silica column can be attributed to the differences in the molecular geometric configuration of cis/trans stilbenes. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
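
    The figures of merit quoted above follow standard chromatographic definitions: the separation factor is the ratio of adjusted retention times, and the plate count can be estimated from the retention time and the peak width at half height (dividing by the column length gives the per-metre value reported here). The retention times below are invented for illustration:

```python
# Standard chromatographic figures of merit.
# Separation factor: alpha = (tR2 - t0) / (tR1 - t0).
# Plate count from half-height peak width: N = 5.54 * (tR / w_half)^2.
def separation_factor(t_r1, t_r2, t_0):
    return (t_r2 - t_0) / (t_r1 - t_0)

def plate_count(t_r, w_half):
    return 5.54 * (t_r / w_half) ** 2

# Invented retention times (minutes), not values from the paper.
print(separation_factor(t_r1=3.0, t_r2=15.0, t_0=1.0))   # alpha = 7.0
print(round(plate_count(t_r=3.0, w_half=0.025)))          # ~79,776 plates per column
```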

  16. Computer game-based mathematics education : Embedded faded worked examples facilitate knowledge acquisition

    NARCIS (Netherlands)

    ter Vrugte, Judith; de Jong, Anthonius J.M.; Vandercruysse, Sylke; Wouters, Pieter; van Oostendorp, Herre; Elen, Jan

    This study addresses the added value of faded worked examples in a computer game-based learning environment. The faded worked examples were introduced to encourage active selection and processing of domain content in the game. The content of the game was proportional reasoning and participants were

  17. Building Professionalism and Employability Skills: Embedding Employer Engagement within First-Year Computing Modules

    Science.gov (United States)

    Hanna, Philip; Allen, Angela; Kane, Russell; Anderson, Neil; McGowan, Aidan; Collins, Matthew; Hutchison, Malcolm

    2015-01-01

    This paper outlines a means of improving the employability skills of first-year university students through a closely integrated model of employer engagement within computer science modules. The outlined approach illustrates how employability skills, including communication, teamwork and time management skills, can be contextualised in a manner…

  18. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and it has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
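
    A generic sketch (not GeantV code) of the scheduling idea outlined above: tracks are grouped into fixed-size baskets, and baskets are dispatched to a pool of workers; the basket size, worker count and per-basket work are placeholders:

```python
# Group particles into baskets and dispatch the baskets to worker threads.
from concurrent.futures import ThreadPoolExecutor

BASKET_SIZE = 4

def transport_basket(basket):
    # Stand-in for vectorised propagation of a group of similar tracks.
    return [f"stepped particle {p}" for p in basket]

particles = list(range(10))                       # hypothetical track ids
baskets = [particles[i:i + BASKET_SIZE]
           for i in range(0, len(particles), BASKET_SIZE)]

with ThreadPoolExecutor(max_workers=2) as pool:
    for done in pool.map(transport_basket, baskets):
        print(done)
```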

  19. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  20. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  1. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  2. Computer fan performance enhancement via acoustic perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Greenblatt, David, E-mail: davidg@technion.ac.il [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel); Avraham, Tzahi; Golan, Maayan [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel)

    2012-04-15

    Highlights: ► Computer fan effectiveness was increased by introducing acoustic perturbations. ► Acoustic perturbations controlled blade boundary layer separation. ► Optimum frequencies corresponded with airfoils studies. ► Exploitation of flow instabilities was responsible for performance improvements. ► Peak pressure and peak flowrate were increased by 40% and 15% respectively. - Abstract: A novel technique for increasing computer fan effectiveness, based on introducing acoustic perturbations onto the fan blades to control boundary layer separation, was assessed. Experiments were conducted in a specially designed facility that simultaneously allowed characterization of fan performance and introduction of the perturbations. A parametric study was conducted to determine the optimum control parameters, namely those that deliver the largest increase in fan pressure for a given flowrate. The optimum reduced frequencies corresponded with those identified on stationary airfoils and it was thus concluded that the exploitation of Kelvin-Helmholtz instabilities, commonly observed on airfoils, was responsible for the fan blade performance improvements. The optimum control inputs, such as acoustic frequency and sound pressure level, showed some variation with different fan flowrates. With the near-optimum control conditions identified, the full operational envelope of the fan, when subjected to acoustic perturbations, was assessed. The peak pressure and peak flowrate were increased by up to 40% and 15% respectively. The peak fan efficiency increased with acoustic perturbations but the overall system efficiency was reduced when the speaker input power was accounted for.

  3. Computer fan performance enhancement via acoustic perturbations

    International Nuclear Information System (INIS)

    Greenblatt, David; Avraham, Tzahi; Golan, Maayan

    2012-01-01

    Highlights: ► Computer fan effectiveness was increased by introducing acoustic perturbations. ► Acoustic perturbations controlled blade boundary layer separation. ► Optimum frequencies corresponded with airfoils studies. ► Exploitation of flow instabilities was responsible for performance improvements. ► Peak pressure and peak flowrate were increased by 40% and 15% respectively. - Abstract: A novel technique for increasing computer fan effectiveness, based on introducing acoustic perturbations onto the fan blades to control boundary layer separation, was assessed. Experiments were conducted in a specially designed facility that simultaneously allowed characterization of fan performance and introduction of the perturbations. A parametric study was conducted to determine the optimum control parameters, namely those that deliver the largest increase in fan pressure for a given flowrate. The optimum reduced frequencies corresponded with those identified on stationary airfoils and it was thus concluded that the exploitation of Kelvin–Helmholtz instabilities, commonly observed on airfoils, was responsible for the fan blade performance improvements. The optimum control inputs, such as acoustic frequency and sound pressure level, showed some variation with different fan flowrates. With the near-optimum control conditions identified, the full operational envelope of the fan, when subjected to acoustic perturbations, was assessed. The peak pressure and peak flowrate were increased by up to 40% and 15% respectively. The peak fan efficiency increased with acoustic perturbations but the overall system efficiency was reduced when the speaker input power was accounted for.

  4. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described
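
    A minimal manager/worker ('master-slave') sketch of the proposed scaling pattern, assuming a toy sampling task; it is not NWChem or DNTMC code.

    ```python
    # A manager hands out batches of Monte Carlo samples to worker processes and
    # aggregates the partial results, the pattern proposed for petascale scaling.
    import random
    from multiprocessing import Pool

    def worker(batch_size):
        # Each worker evaluates a toy estimator over its batch of random samples.
        return sum(random.random() for _ in range(batch_size))

    if __name__ == "__main__":
        batches = [10_000] * 8                     # work units issued by the manager
        with Pool(processes=4) as pool:
            partial_sums = pool.map(worker, batches)
        print(sum(partial_sums) / sum(batches))    # ~0.5, the aggregated estimate
    ```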

  5. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can reduce the running time of the code MCNP effectively. With the MPI message-passing software, MCNP5 can perform parallel computing on a PC cluster with the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with regard to these factors and gives measures to improve the MCNP parallel computing performance. (authors)
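
    The dependence of parallel performance on the problem can be illustrated with a generic Amdahl's-law estimate; the serial fraction below is an assumed example, not a value from the MCNP study.

    ```python
    # Amdahl's law: speedup is limited by the serial fraction of the calculation
    # (e.g. problem set-up and tally collection in a Monte Carlo run).
    def amdahl_speedup(serial_fraction, n_processors):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

    for p in (2, 4, 8, 16):
        print(p, round(amdahl_speedup(0.05, p), 2))  # assumes a 5% serial part
    ```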

  6. Employing Inquiry-Based Computer Simulations and Embedded Scientist Videos To Teach Challenging Climate Change and Nature of Science Concepts

    Science.gov (United States)

    Cohen, E.

    2013-12-01

    Design-based research was utilized to investigate how students use a greenhouse effect simulation in order to derive best learning practices. During this process, students recognized the authentic scientific process involving computer simulations. The simulation used is embedded within an inquiry-based, technology-mediated science curriculum known as the Web-based Inquiry Science Environment (WISE). For this research, students from a suburban, diverse middle school setting used the simulation as part of a two-week-long class unit on climate change. A pilot study was conducted during phase one of the research that informed phase two, which encompasses the dissertation. During the pilot study, as students worked through the simulation, evidence of shifts in student motivation, understanding of science content, and ideas about the nature of science emerged from a combination of student interviews, focus groups, and students' conversations. Outcomes of the pilot study included improvements to the pedagogical approach: allowing students to do 'Extreme Testing' (e.g., making the world as hot or cold as possible) and increasing the time for free exploration of the simulation are improvements made as a result of the findings of the pilot study. In the dissertation (phase two of the research design) these findings were implemented in a new curriculum scaled for 85 new students from the same school during the next school year. The modifications included new components implementing simulations as an assessment tool for all students and embedded modeling tools. All students were asked to build pre and post models; however, due to technological constraints these were not an effective tool. A non-video group of 44 students was established, and another group of 41 video students had a WISE curriculum which included twelve minutes of scientists' conversational videos referencing explicit aspects of the nature of science, specifically the use of models and simulations in science

  7. The Role of Mythical Form of Thinking Embedded into Earlier and Recent Computer Games in Teaching Religious and Ethical Values

    Directory of Open Access Journals (Sweden)

    Hülya ALTUNYA

    2017-08-01

    Full Text Available In the world of eastern thought, the narration of exemplary events is common through methods of mythical expression such as fairy tales, stories, myths, etc. In these discourses, the aim is to give moral advice and thus educate the audience. In these discourses, in which surreal events are told, it is stated that people who stand up to distress and difficulties retain their happiness. In such narrative texts, hope is kept alive on the one hand; on the other hand, messages such as patience, clinging to honesty in harsh times and possessing stamina are conveyed to the audience. Likewise, today, surreal events, namely mythical fictions, are animated in cartoons, animations and computer games. In these games, it is told that the most brutal battles are carried out with the most merciless weapons and only the most powerful one wins. These games, which cause youth to grow up aggressive, merciless and likely to try every means to seize power, should be examined in terms of education as well as of that way of thinking. In this article, the form of thinking in the stories in which surreal events were narrated in the past is compared, in terms of ethical values, with the form of thinking in recent computer games in which surreal events are embedded.

  8. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  9. Graphene-Embedded Co3O4 Rose-Spheres for Enhanced Performance in Lithium Ion Batteries.

    Science.gov (United States)

    Jing, Mingjun; Zhou, Minjie; Li, Gangyong; Chen, Zhengu; Xu, Wenyuan; Chen, Xiaobo; Hou, Zhaohui

    2017-03-22

    Co3O4 has been widely studied as a promising candidate anode material for lithium ion batteries. However, the huge volume change and structural strain associated with the Li+ insertion and extraction process lead to the pulverization and deterioration of the electrode, resulting in a poor performance in lithium ion batteries. In this paper, Co3O4 rose-spheres obtained via a hydrothermal technique are successfully embedded in graphene through an electrostatic self-assembly process. Graphene-embedded Co3O4 rose-spheres (G-Co3O4) show a high reversible capacity, a good cyclic performance, and an excellent rate capability, e.g., a stable capacity of 1110.8 mAh g−1 at 90 mA g−1 (0.1 C), and a reversible capacity of 462.3 mAh g−1 at 1800 mA g−1 (2 C), benefiting from the novel architecture of graphene-embedded Co3O4 rose-spheres. This work has demonstrated a feasible strategy to improve the performance of Co3O4 for lithium-ion battery application.

  10. Shock Analysis Method for Systematic Performance Evaluation of Component Embedded in Handheld Electronic Devices

    Directory of Open Access Journals (Sweden)

    C.S. Chin

    2006-01-01

    Full Text Available It is important to identify the robustness of a product (or of a component embedded inside the product) against shock due to free drop. With the increasingly mobile and fast-paced lifestyle of the average consumer, much is required of such products; for example, consumers expect mobile products to continue to operate after a drop impact. Although the free drop test is commonly used to evaluate the robustness of small components embedded in an MP3 player, it is difficult to produce a repeatable shock reading because of the highly uncontrolled orientation during impact on the ground. Hence attention has focused on shock table testing, which produces more repeatable results. However, it fails to reproduce the actual shock, including the rotational movement present in a free drop, and it also suffers from a limitation of repeatability: from drop to drop, shock tables can vary by about ±5% in velocity change, but they are suitable for consistently tracking product improvement.
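
    For orientation, the velocity change at impact in a free drop follows from elementary kinematics; the drop height below is an assumed example, and the ±5% spread quoted for shock tables is applied to it.

    ```python
    # Impact velocity for a free drop from height h: v = sqrt(2*g*h).
    # The drop height is an assumed example, not a value from the paper.
    import math

    g = 9.81                       # m/s^2
    h = 1.5                        # assumed drop height in metres
    v = math.sqrt(2 * g * h)
    print(round(v, 2), round(0.95 * v, 2), round(1.05 * v, 2))  # nominal, -5%, +5%
    ```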

  11. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and must try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  12. Characterization of a Reconfigurable Free-Space Optical Channel for Embedded Computer Applications with Experimental Validation Using Rapid Prototyping Technology

    Directory of Open Access Journals (Sweden)

    Rafael Gil-Otero

    2007-02-01

    Full Text Available Free-space optical interconnects (FSOIs) are widely seen as a potential solution to current and future bandwidth bottlenecks for parallel processors. In this paper, an FSOI system called the optical highway (OH) is proposed. The OH uses polarizing beam splitter-liquid crystal plate (PBS/LC) assemblies to perform reconfigurable beam combination functions. The properties of the OH make it suitable for embedding complex network topologies such as a completely connected mesh or a hypercube. This paper proposes the use of rapid prototyping technology for implementing an optomechanical system suitable for studying the reconfigurable characteristics of a free-space optical channel. Additionally, it reports how the limited contrast ratio of the optical components can affect the attenuation of the optical signal and the crosstalk caused by misdirected signals. Different techniques are also proposed in order to increase the optical modulation amplitude (OMA) of the system.

  13. Characterization of a Reconfigurable Free-Space Optical Channel for Embedded Computer Applications with Experimental Validation Using Rapid Prototyping Technology

    Directory of Open Access Journals (Sweden)

    Lim Theodore

    2007-01-01

    Full Text Available Free-space optical interconnects (FSOIs) are widely seen as a potential solution to current and future bandwidth bottlenecks for parallel processors. In this paper, an FSOI system called the optical highway (OH) is proposed. The OH uses polarizing beam splitter-liquid crystal plate (PBS/LC) assemblies to perform reconfigurable beam combination functions. The properties of the OH make it suitable for embedding complex network topologies such as a completely connected mesh or a hypercube. This paper proposes the use of rapid prototyping technology for implementing an optomechanical system suitable for studying the reconfigurable characteristics of a free-space optical channel. Additionally, it reports how the limited contrast ratio of the optical components can affect the attenuation of the optical signal and the crosstalk caused by misdirected signals. Different techniques are also proposed in order to increase the optical modulation amplitude (OMA) of the system.

  14. Achieving Performance Speed-up in FPGA Based Bit-Parallel Multipliers using Embedded Primitive and Macro support

    Directory of Open Access Journals (Sweden)

    Burhan Khurshid

    2015-05-01

    Full Text Available Modern Field Programmable Gate Arrays (FPGAs) are fast moving into the consumer market, and their domain has expanded from prototype designing to low and medium volume production. FPGAs are proving to be an attractive replacement for Application Specific Integrated Circuits (ASICs), primarily because of the low Non-recurring Engineering (NRE) costs associated with FPGA platforms. This has prompted FPGA vendors to improve the capacity and flexibility of the underlying primitive fabric and include specialized macro support and intellectual property (IP) cores in their offerings. However, most of the work related to FPGA implementations does not take full advantage of these offerings. This is primarily because designers rely mainly on technology-independent optimization to enhance the performance of the system and completely neglect the speed-up that is achievable using these embedded primitives and macro support. In this paper, we consider the technology-dependent optimization of fixed-point bit-parallel multipliers by carrying out their implementations using the embedded primitives and macro support that are inherent in modern-day FPGAs. Our implementation targets three different FPGA families, viz. Spartan-6, Virtex-4 and Virtex-5. The implementation results indicate that a considerable speed-up in performance is achievable using these embedded FPGA resources.
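
    A behavioural sketch of the partial-product structure that a fixed-point bit-parallel multiplier implements; the paper's contribution is the mapping of this structure onto embedded FPGA primitives, which the Python model below does not attempt to capture.

    ```python
    # Unsigned bit-parallel multiplication modelled as a sum of weighted partial
    # products, the structure realised in hardware by DSP/carry-chain primitives.
    def bit_parallel_multiply(a, b, width=8):
        a_bits = [(a >> i) & 1 for i in range(width)]
        b_bits = [(b >> j) & 1 for j in range(width)]
        product = 0
        for j, bj in enumerate(b_bits):
            partial = sum((ai & bj) << i for i, ai in enumerate(a_bits))
            product += partial << j        # weight each partial product by 2**j
        return product

    assert bit_parallel_multiply(27, 43) == 27 * 43
    print(bit_parallel_multiply(27, 43))   # 1161
    ```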

  15. The Future of Software Engineering for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pope, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-16

    DOE ASCR requested that from May through mid-July 2015 a study group identify issues and recommend solutions, from a software engineering perspective, for transitioning into the next generation of High Performance Computing. The approach used was to ask some of the DOE complex experts who will be responsible for doing this work to contribute to the study group. The technique used was to solicit elevator speeches: a short and concise write-up done as if the author were a speaker with only a few minutes to convince a decision maker of their top issues. Pages 2-18 contain the original texts of the contributed elevator speeches and end notes identifying the 20 contributors. The study group also ranked the importance of each topic, and those scores are displayed with each topic heading. A perfect score (and highest priority) is three, two is medium priority, and one is lowest priority. The highest scoring topic areas were software engineering and testing resources; the lowest scoring area was compliance to DOE standards. The following two paragraphs are an elevator speech summarizing the contributed elevator speeches. Each sentence or phrase in the summary is hyperlinked to its source via a numeral embedded in the text. A risk one-liner has also been added to each topic to allow future risk tracking and mitigation.

  16. A high-performance, flexible and robust metal nanotrough-embedded transparent conducting film for wearable touch screen panels

    Science.gov (United States)

    Im, Hyeon-Gyun; An, Byeong Wan; Jin, Jungho; Jang, Junho; Park, Young-Geun; Park, Jang-Ung; Bae, Byeong-Soo

    2016-02-01

    We report a high-performance, flexible and robust metal nanotrough-embedded transparent conducting hybrid film (metal nanotrough-GFRHybrimer). Using an electro-spun polymer nanofiber web as a template and vacuum-deposited gold as a conductor, a junction resistance-free continuous metal nanotrough network is formed. Subsequently, the metal nanotrough is embedded on the surface of a glass-fabric reinforced composite substrate (GFRHybrimer). The monolithic composite structure of our transparent conducting film simultaneously allows high thermal stability (24 h at 250 °C in air) and a smooth surface topography (Rrms). A touch screen panel (TSP) is fabricated using the transparent conducting films. The flexible TSP device stably operates on the back of a human hand and on a wristband.

  17. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of the COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we start by recapitulating the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  18. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by the minimum normalized signal-to-noise ratio (SNRN) and the maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of their wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)
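
    The normalisation of SNR to the basic spatial resolution is, to the author's understanding of the CR standards cited, done against an 88.6 µm reference; treat the formula and the input values below as an illustrative assumption rather than data from this evaluation.

    ```python
    # Assumed normalisation: the measured SNR is scaled to a reference basic
    # spatial resolution of 88.6 micrometres; the inputs are example values only.
    def normalized_snr(snr_measured, srb_micrometres):
        return snr_measured * 88.6 / srb_micrometres

    print(round(normalized_snr(snr_measured=120.0, srb_micrometres=130.0), 1))
    ```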

  19. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  20. Embedded Leverage

    DEFF Research Database (Denmark)

    Frazzini, Andrea; Heje Pedersen, Lasse

    find that asset classes with embedded leverage offer low risk-adjusted returns and, in the cross-section, higher embedded leverage is associated with lower returns. A portfolio which is long low-embedded-leverage securities and short high-embedded-leverage securities earns large abnormal returns...

  1. Hybrid brain-computer interface for biomedical cyber-physical system application using wireless embedded EEG systems.

    Science.gov (United States)

    Chai, Rifai; Naik, Ganesh R; Ling, Sai Ho; Nguyen, Hung T

    2017-01-07

    One of the key challenges of the biomedical cyber-physical system is to combine cognitive neuroscience with the integration of physical systems to assist people with disabilities. Electroencephalography (EEG) has been explored as a non-invasive method of providing assistive technology by using brain electrical signals. This paper presents a unique prototype of a hybrid brain computer interface (BCI) which senses a combination classification of mental task, steady state visual evoked potential (SSVEP) and eyes closed detection using only two EEG channels. In addition, a microcontroller based head-mounted battery-operated wireless EEG sensor combined with a separate embedded system is used to enhance portability, convenience and cost effectiveness. This experiment has been conducted with five healthy participants and five patients with tetraplegia. Generally, the results show comparable classification accuracies between healthy subjects and tetraplegia patients. For the offline artificial neural network classification for the target group of patients with tetraplegia, the hybrid BCI system combines three mental tasks, three SSVEP frequencies and eyes closed, with average classification accuracy at 74% and average information transfer rate (ITR) of the system of 27 bits/min. For the real-time testing of the intentional signal on patients with tetraplegia, the average success rate of detection is 70% and the speed of detection varies from 2 to 4 s.
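
    The reported figures are consistent with the conventional Wolpaw definition of information transfer rate; whether the authors used exactly this definition is not stated, and the selection time below is an assumption back-calculated to match the quoted 27 bits/min.

    ```python
    # Wolpaw ITR for N = 7 classes (three mental tasks, three SSVEP frequencies,
    # eyes closed), P = 0.74 accuracy, and an assumed ~2.9 s per selection.
    import math

    def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
        p, n = accuracy, n_classes
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * 60.0 / seconds_per_selection

    print(round(itr_bits_per_min(7, 0.74, 2.9), 1))  # ~27 bits/min
    ```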

  2. Automatically produced FRP beams with embedded FOS in complex geometry: process, material compatibility, micromechanical analysis, and performance tests

    Science.gov (United States)

    Gabler, Markus; Tkachenko, Viktoriya; Küppers, Simon; Kuka, Georg G.; Habel, Wolfgang R.; Milwich, Markus; Knippers, Jan

    2012-04-01

    The main goal of the presented work was to develop a multifunctional beam composed of fiber reinforced plastics (FRP) and an embedded optical fiber with various fiber Bragg grating sensors (FBG). These beams are developed for use as structural members for bridges or industrial applications. It is now possible to realize large-scale cross sections, the embedding is part of a fully automated process, and jumpers can be omitted so as not to negatively influence the laminate. The development includes the smart placement and layout of the optical fibers in the cross section, reliable strain transfer, and finally the coupling of the embedded fibers after production. Micromechanical tests and analyses were carried out to evaluate the performance of the sensor. The work was funded by the German ministry of economics and technology (funding scheme ZIM). In addition to the authors of this contribution, Melanie Book of Röchling Engineering Plastics KG (Haren, Germany) and Katharina Frey of SAERTEX GmbH & Co. KG (Saerbeck, Germany) were part of the research group.

  3. Accuracy & Computational Considerations for Wide--Angle One--way Seismic Propagators and Multiple Scattering by Invariant Embedding

    Science.gov (United States)

    Thomson, C. J.

    2004-12-01

    of computation time differences. The ideas described extend to the three--dimensional, generally anisotropic case and to multiple scattering by invariant embedding.

  4. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  5. Final implementation, commissioning, and performance of embedded collimator beam position monitors in the Large Hadron Collider

    Directory of Open Access Journals (Sweden)

    Gianluca Valentino

    2017-08-01

    Full Text Available During Long Shutdown 1, 18 Large Hadron Collider (LHC collimators were replaced with a new design, in which beam position monitor (BPM pick-up buttons are embedded in the collimator jaws. The BPMs provide a direct measurement of the beam orbit at the collimators, and therefore can be used to align the collimators more quickly than using the standard technique which relies on feedback from beam losses. Online orbit measurements also allow for reducing operational margins in the collimation hierarchy placed specifically to cater for unknown orbit drifts, therefore decreasing the β^{*} and increasing the luminosity reach of the LHC. In this paper, the results from the commissioning of the embedded BPMs in the LHC are presented. The data acquisition and control software architectures are reviewed. A comparison with the standard alignment technique is provided, together with a fill-to-fill analysis of the measured orbit in different machine modes, which will also be used to determine suitable beam interlocks for a tighter collimation hierarchy.

  6. Final implementation, commissioning, and performance of embedded collimator beam position monitors in the Large Hadron Collider

    Science.gov (United States)

    Valentino, Gianluca; Baud, Guillaume; Bruce, Roderik; Gasior, Marek; Mereghetti, Alessio; Mirarchi, Daniele; Olexa, Jakub; Redaelli, Stefano; Salvachua, Belen; Valloni, Alessandra; Wenninger, Jorg

    2017-08-01

    During Long Shutdown 1, 18 Large Hadron Collider (LHC) collimators were replaced with a new design, in which beam position monitor (BPM) pick-up buttons are embedded in the collimator jaws. The BPMs provide a direct measurement of the beam orbit at the collimators, and therefore can be used to align the collimators more quickly than using the standard technique which relies on feedback from beam losses. Online orbit measurements also allow for reducing operational margins in the collimation hierarchy placed specifically to cater for unknown orbit drifts, therefore decreasing the β* and increasing the luminosity reach of the LHC. In this paper, the results from the commissioning of the embedded BPMs in the LHC are presented. The data acquisition and control software architectures are reviewed. A comparison with the standard alignment technique is provided, together with a fill-to-fill analysis of the measured orbit in different machine modes, which will also be used to determine suitable beam interlocks for a tighter collimation hierarchy.
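
    Button pick-ups of this kind typically estimate the transverse beam position with a difference-over-sum of the electrode signals; the sketch below uses an assumed scale factor, not an LHC calibration constant.

    ```python
    # Difference-over-sum position estimate for a pair of opposing BPM buttons.
    # The linear scale factor is an assumed example value.
    def bpm_position(signal_a, signal_b, scale_mm=10.0):
        return scale_mm * (signal_a - signal_b) / (signal_a + signal_b)

    print(round(bpm_position(1.05, 0.95), 2))  # beam ~0.5 mm off-centre
    ```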

  7. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  8. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  9. Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays

    National Research Council Canada - National Science Library

    Yang, Kyoung

    2005-01-01

    This final report summarizes the progress during the Phase I SBIR project entitled "Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays...

  10. Embedded engineering education

    CERN Document Server

    Kaštelan, Ivan; Temerinac, Miodrag; Barak, Moshe; Sruk, Vlado

    2016-01-01

    This book focuses on the outcome of the European research project “FP7-ICT-2011-8 / 317882: Embedded Engineering Learning Platform” E2LP. Additionally, some experiences and research from outside this project have been included. This book provides information about the achieved results of the E2LP project as well as some broader views about embedded engineering education. It captures project results and applications, methodologies, and evaluations. It leads to the history of computer architectures, brings a touch of the future in education tools and provides a valuable resource for anyone interested in embedded engineering education concepts, experiences and material. The book contains 12 original contributions and will open a broader discussion about the necessary knowledge and appropriate learning methods for the new profile of embedded engineers. As a result, the proposed Embedded Computer Engineering Learning Platform will help to educate a sufficient number of future engineers in Europe, capable of d...

  11. Monitoring of prestressed concrete pressure vessels. II. performance of selected concrete embedment strain meters under normal and extreme environmental conditions

    International Nuclear Information System (INIS)

    Naus, D.J.; Hurtt, C.C.

    1978-10-01

    Unique types of instrumentation are used in prestressed concrete pressure vessels (PCPVs) to measure strains, stresses, deflections, prestressing forces, moisture content, temperatures, and possibly cracking. Their primary purpose is to monitor these complex structures throughout their 20- to 30-year operating lifetime in order to provide continuing assurance of their reliability and safety. Numerous concrete embedment instrumentation systems are available commercially. Since this instrumentation is important in providing continuing assurance of satisfactory performance of PCPVs, the information provided must be reliable. Therefore, laboratory studies were conducted to evaluate the reliability of these commercially available instrumentation systems. This report, the second in a series related to instrumentation embedded in concrete, presents performance-reliability data for 13 types of selected concrete embedment strain meters which were subjected to a variety of loading environments, including unloaded, thermally loaded, simulated PCPV, and extreme environments. Although only a limited number of meters of each type were tested in any one test series, the composite results of the investigation indicate that the majority of these meters would not be able to provide reliable data throughout the 20- to 30-year anticipated operating life of a PCPV. Specific conclusions drawn from the study are: (1) Improved corrosion-resistant materials and sealing techniques should be developed for meters that are to be used in PCPV environments. (2) There is a need for the development of meters that are capable of surviving in concretes where temperatures in excess of 66 °C are present for extended periods of time. (3) Research should be conducted on other measurement techniques, such as inductance, capacitance, and fluidics

  12. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  13. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  14. Optomechanical performance of 3D-printed mirrors with embedded cooling channels and substructures

    Science.gov (United States)

    Mici, Joni; Rothenberg, Bradley; Brisson, Erik; Wicks, Sunny; Stubbs, David M.

    2015-09-01

    Advances in 3D printing technology allow for the manufacture of topologically complex parts not otherwise feasible through conventional manufacturing methods. Maturing metal and ceramic 3D printing technologies are becoming more adept at printing complex shapes, enabling topologically intricate mirror substrates. One application area that can benefit from additive manufacturing is reflective optics used in high energy laser (HEL) systems that require materials with a low coefficient of thermal expansion (CTE), high specific stiffness, and (most importantly) high thermal conductivity to effectively dissipate heat from the optical surface. Currently, the limits of conventional manufacturing dictate the topology of HEL optics to be monolithic structures that rely on passive cooling mechanisms and high reflectivity coatings to withstand laser damage. 3D printing enables the manufacture of embedded cooling channels in metallic mirror substrates to allow for (1) active cooling and (2) tunable structures. This paper describes the engineering and analysis of an actively cooled composite optical structure to demonstrate the potential of 3D printing on the improvement of optomechanical systems.

  15. Optimization of tribological performance of SiC embedded composite coating via Taguchi analysis approach

    Science.gov (United States)

    Maleque, M. A.; Bello, K. A.; Adebisi, A. A.; Akma, N.

    2017-03-01

    Tungsten inert gas (TIG) torch welding is one of the most recently used heat sources for surface modification of engineering parts, giving results similar to the more expensive high-power laser technique. In this study, a ceramic-based embedded composite coating has been produced from precoated silicon carbide (SiC) powders on an AISI 4340 low alloy steel substrate using the TIG welding torch process. A design of experiments based on the Taguchi approach has been adopted to optimize the TIG cladding process parameters. The L9 orthogonal array and the signal-to-noise ratio were used to study the effect of TIG welding parameters such as arc current, travelling speed, welding voltage and argon flow rate on the tribological response (wear rate, surface roughness and wear track width). The objective of the study was to identify the optimal design parameters that significantly minimize each of the surface quality characteristics. The analysis of the experimental results revealed that the argon flow rate was the most influential factor contributing to the minimum wear and surface roughness of the modified coating surface. On the other hand, the key factor in reducing the wear scar is the welding voltage. Finally, the convenient and economical Taguchi approach used in this study was efficient in finding optimal factor settings for obtaining minimum wear rate, wear scar and surface roughness responses in TIG-coated surfaces.
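
    For responses that are to be minimised, Taguchi analysis normally uses the smaller-the-better signal-to-noise ratio; the response values below are assumed examples, not measurements from the study.

    ```python
    # Smaller-the-better S/N ratio: S/N = -10*log10(mean(y^2)); a larger S/N
    # indicates a better (smaller, less scattered) response such as wear rate.
    import math

    def sn_smaller_the_better(responses):
        return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

    print(round(sn_smaller_the_better([0.12, 0.15, 0.11]), 2))
    ```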

  16. Further examination of embedded performance validity indicators for the Conners' Continuous Performance Test and Brief Test of Attention in a large outpatient clinical sample.

    Science.gov (United States)

    Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A

    2018-01-01

    Assessing test performance validity is a standard clinical practice and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II), in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II into non-credible groups was assessed. Sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When using failure on two or more PVTs as the indicator for non-credible responding compared to failure on one or more PVTs, highest classification probability, or area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors exhibited higher classification probability as well. Overall, these findings corroborate previous findings, extending them to a large clinical sample. BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation without other performance validity indicators.

  17. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    Peregrine has several classes of nodes that users access. Login Nodes: Peregrine has four login nodes, each of which has Intel E5 processors; in addition to the /scratch file systems, the /mss file system is mounted on all login nodes. Compute Nodes: Peregrine has 2592

  18. Nano-Sn embedded in expanded graphite as anode for lithium ion batteries with improved low temperature electrochemical performance

    International Nuclear Information System (INIS)

    Yan, Yong; Ben, Liubin; Zhan, Yuanjie; Huang, Xuejie

    2016-01-01

    Highlights: • Nano-Sn embedded in the interlayers of expanded graphite is fabricated. • The graphene/nano-Sn/graphene stacked structure promotes the cycling stability of Sn. • The Sn/EG shows improved low temperature electrochemical performance. • Chemical diffusion coefficients of the Sn/EG are obtained by GITT. • The Sn/EG exhibits faster Li-ion intercalation kinetics than graphite. - Abstract: Metallic tin (Sn) used as an anode material for lithium ion batteries has long been proposed, but its low temperature electrochemical performance has rarely been examined. Here, a Sn/C composite with nano-Sn embedded in expanded graphite (Sn/EG) is synthesized. The nano-Sn particles (∼30 nm) are uniformly distributed in the interlayers of the expanded graphite, forming a tightly stacked layered structure. The electrochemical performance of the Sn/EG, particularly at low temperature, is carefully investigated and compared with graphite. At -20 °C, the Sn/EG shows capacities of 200 mAh g−1 at 0.1C and 130 mAh g−1 at 0.2C, which is much superior to graphite (<10 mAh g−1). EIS measurements suggest that the charge transfer impedance of the Sn/EG increases less rapidly than that of graphite with decreasing temperature, which is responsible for the improved low temperature electrochemical performance. The Li-ion chemical diffusion coefficients of the Sn/EG obtained by GITT are an order of magnitude higher at room temperature than at -20 °C. Furthermore, the Sn/EG exhibits faster Li-ion intercalation kinetics than graphite in the asymmetric charge/discharge measurements, which shows great promise for application in electric vehicles charged at low temperature.

  19. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  20. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
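
    An illustrative rendering of the two-phase scheme described above, with summation as the reduction operation; this is a sketch of the idea, not the patented implementation.

    ```python
    # Phase 1: one logical ring per core index, spanning all compute nodes.
    # Phase 2: a local allreduce on each node over the per-ring results.
    def allreduce(contributions):
        # contributions[node][core] -> value held by that processing core
        n_nodes = len(contributions)
        n_cores = len(contributions[0])
        ring_totals = [sum(contributions[node][core] for node in range(n_nodes))
                       for core in range(n_cores)]
        final = sum(ring_totals)                    # local combination step
        return [[final] * n_cores for _ in range(n_nodes)]

    print(allreduce([[1, 2], [3, 4], [5, 6]]))      # every core ends with 21
    ```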

  1. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  2. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
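
    A minimal sketch of the kind of discrete event model involved, assuming exponential arrivals and service times and a fixed pool of virtual servers; it is not the authors' two-part framework.

    ```python
    # Requests arrive at random, wait for one of a fixed pool of virtual servers,
    # and the mean response time (waiting plus service) is reported.
    import random

    def simulate(n_requests=1000, n_servers=4, arrival_rate=1.0, service_rate=0.3):
        random.seed(1)
        free_at = [0.0] * n_servers            # time each server becomes free
        t = 0.0
        total_response = 0.0
        for _ in range(n_requests):
            t += random.expovariate(arrival_rate)            # next arrival
            server = min(range(n_servers), key=lambda s: free_at[s])
            start = max(t, free_at[server])                  # queueing delay if busy
            free_at[server] = start + random.expovariate(service_rate)
            total_response += free_at[server] - t
        return total_response / n_requests

    print(round(simulate(), 2))   # mean response time in arbitrary time units
    ```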

  3. Atrial Fibrillation Screening in Nonmetropolitan Areas Using a Telehealth Surveillance System With an Embedded Cloud-Computing Algorithm: Prospective Pilot Study.

    Science.gov (United States)

    Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien; Hwang, Juey-Jen; Ho, Yi-Lwun

    2017-09-26

    Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician's ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. ©Ying-Hsien Chen, Chi-Sheng Hung, Ching-Chang Huang, Yu-Chien Hung, Juey-Jen Hwang, Yi-Lwun Ho. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 26.09.2017.

  4. Atrial Fibrillation Screening in Nonmetropolitan Areas Using a Telehealth Surveillance System With an Embedded Cloud-Computing Algorithm: Prospective Pilot Study

    Science.gov (United States)

    Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien

    2017-01-01

    Background Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. Objective The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. Methods We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Results Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician’s ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. Conclusions AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. PMID:28951384

  5. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  6. Web Server Embedded System

    Directory of Open Access Journals (Sweden)

    Adharul Muttaqin

    2014-07-01

    Full Text Available Embedded systems are currently of particular interest in computer technology; a range of Linux operating systems and web servers have also been prepared to support embedded systems, and one of the applications that can run on an embedded system is a web server. The choice of web server for an embedded environment is still rarely examined, so this study focuses on two web server applications whose main feature is a 'light' footprint in CPU and memory consumption, Light HTTPD and Tiny HTTPD. Using the thread parameters (users, ramp-up periods, and loop count) in a stress test of the embedded system, this study offers a solution as to which of Light HTTPD and Tiny HTTPD is the better fit for embedded use on a BeagleBoard, judged by CPU and memory consumption. The results show that, in terms of CPU consumption on the BeagleBoard embedded system, Light HTTPD is recommended over Tiny HTTPD because there is a very significant difference in CPU load between the two web services. Keywords: embedded system, web server

  7. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    Science.gov (United States)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.
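
    The kind of backend substitution described above can be illustrated by timing the 2-D transforms that dominate wave-optics propagation under two FFT implementations. The sketch below compares NumPy's FFT with OpenCV's dft and is only illustrative; the actual WavePy modifications (Intel MKL and GPU builds) are not reproduced here, and an OpenCV build is an assumed dependency.

      # Sketch: time a 2-D FFT under two backends (numpy vs. OpenCV).
      import time
      import numpy as np
      import cv2  # OpenCV build assumed available

      N = 2048
      field = (np.random.rand(N, N) + 1j * np.random.rand(N, N)).astype(np.complex64)

      t0 = time.perf_counter()
      np.fft.fft2(field)
      t_numpy = time.perf_counter() - t0

      # OpenCV's dft works on a 2-channel real array (re, im) rather than complex input.
      planes = cv2.merge([field.real.astype(np.float32), field.imag.astype(np.float32)])
      t0 = time.perf_counter()
      cv2.dft(planes, flags=cv2.DFT_COMPLEX_OUTPUT)
      t_cv = time.perf_counter() - t0

      print(f"numpy fft2: {t_numpy*1e3:.1f} ms, OpenCV dft: {t_cv*1e3:.1f} ms")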

  8. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    Science.gov (United States)

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  9. Use of the color trails test as an embedded measure of performance validity.

    Science.gov (United States)

    Henry, George K; Algina, James

    2013-01-01

    One hundred personal injury litigants and disability claimants referred for a forensic neuropsychological evaluation were administered both portions of the Color Trails Test (CTT) as part of a more comprehensive battery of standardized tests. Subjects who failed two or more free-standing tests of cognitive performance validity formed the Failed Performance Validity (FPV) group, while subjects who passed all free-standing performance validity measures were assigned to the Passed Performance Validity (PPV) group. A cutscore of ≥45 seconds to complete Color Trails 1 (CT1) was associated with a classification accuracy of 78%, good sensitivity (66%) and high specificity (90%), while a cutscore of ≥84 seconds to complete Color Trails 2 (CT2) was associated with a classification accuracy of 82%, good sensitivity (74%) and high specificity (90%). A CT1 cutscore of ≥58 seconds, and a CT2 cutscore ≥100 seconds was associated with 100% positive predictive power at base rates from 20 to 50%.

  10. Performance of the TRISTAN computer control network

    International Nuclear Information System (INIS)

    Koiso, H.; Abe, K.; Akiyama, A.; Katoh, T.; Kikutani, E.; Kurihara, N.; Kurokawa, S.; Oide, K.; Shinomoto, M.

    1985-01-01

    An N-to-N token ring network of twenty-four minicomputers controls the TRISTAN accelerator complex. The computers are linked by optical fiber cables with a 10 Mbps transmission speed. The software system is based on NODAL, a multi-computer interpreter language developed at the CERN SPS. Typical messages exchanged between computers are NODAL programs and NODAL variables transmitted by the EXEC and REMIT commands. These messages are exchanged as a cluster of packets whose maximum size is 512 bytes. At present, eleven minicomputers are connected to the network and the total length of the ring is 1.5 km. In this condition, the maximum attainable throughput is 980 kbytes/s. The response time of a pair of EXEC and REMIT transactions, which transmit a NODAL array A together with the one-line program 'REMIT A' and immediately remit A, is measured to be 95 + 0.039χ ms, where χ is the array size in bytes. In ordinary accelerator operations, the maximum channel utilization is 2%, the average packet length is 96 bytes and the transmission rate is 10 kbytes/s
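
    The measured linear response-time model quoted above can be evaluated directly; the tiny sketch below does so for a maximum-size packet (the variable name x stands for the array size χ).

      # Sketch: the EXEC/REMIT response-time model t(x) = 95 + 0.039*x ms,
      # where x is the NODAL array size in bytes (figures from the abstract).
      def response_time_ms(array_bytes: int) -> float:
          return 95.0 + 0.039 * array_bytes

      print(response_time_ms(512))    # one maximum-size packet: ~115 ms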

  11. Performing quantum computing experiments in the cloud

    Science.gov (United States)

    Devitt, Simon J.

    2016-09-01

    Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sector combined with extraordinary theoretical and experimental progress has solidified this technology as a major advancement in the 21st century. As anticipated by many, some of the first realizations of quantum computing technology have occurred over the cloud, with users logging onto dedicated hardware over the classical internet. Recently, IBM has released the Quantum Experience, which allows users to access a five-qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments. We utilize the IBM chip to realize protocols in quantum error correction, quantum arithmetic, quantum graph theory, and fault-tolerant quantum computation by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.
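
    A self-contained NumPy sketch of the kind of small two-qubit circuit that such cloud-accessible hardware runs is given below (a Bell-state preparation). It is only an offline illustration; access to the real device goes through IBM's cloud interface, which is not reproduced here.

      # Sketch: simulate a two-qubit Bell-state circuit (H then CNOT) with plain numpy.
      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
      I = np.eye(2)
      CNOT = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]])

      state = np.zeros(4); state[0] = 1.0               # start in |00>
      state = CNOT @ (np.kron(H, I) @ state)            # H on qubit 0, then CNOT(0 -> 1)

      probs = np.abs(state) ** 2
      counts = {f"{i:02b}": p for i, p in enumerate(probs) if p > 1e-12}
      print(counts)                                      # ~ {'00': 0.5, '11': 0.5}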

  12. Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV)

    OpenAIRE

    Bouman, Zita; Hendriks, Marc PH; Schmand, Ben A; Kessels, Roy PC; Aldenkamp, Albert

    2016-01-01

    Introduction. Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the identification of suboptimal performance using an analogue study design. Method. The patient group consisted of 59 mixed-etiology patients; the experimental malingerers were 50 healthy individuals...

  13. Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV)

    OpenAIRE

    Bouman, Zita; Hendriks, Marc P.H.; Schmand, Ben A.; Kessels, Roy P.C.; Aldenkamp, Albert P.

    2016-01-01

    Introduction. Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the identification of suboptimal performance using an analogue study design.Method. The patient group consisted of 59 mixed-etiology patients; the experimental malingerers were 50 healthy individuals who ...

  14. Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV).

    Science.gov (United States)

    Bouman, Zita; Hendriks, Marc P H; Schmand, Ben A; Kessels, Roy P C; Aldenkamp, Albert P

    2016-01-01

    Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the identification of suboptimal performance using an analogue study design. The patient group consisted of 59 mixed-etiology patients; the experimental malingerers were 50 healthy individuals who were asked to simulate cognitive impairment as a result of a traumatic brain injury; the last group consisted of 50 healthy controls who were instructed to put forth full effort. Experimental malingerers performed significantly lower on all WMS-IV-NL tasks than did the patients and healthy controls. A binary logistic regression analysis was performed on the experimental malingerers and the patients. The first model contained the visual working memory subtests (Spatial Addition and Symbol Span) and the recognition tasks of the following subtests: Logical Memory, Verbal Paired Associates, Designs, Visual Reproduction. The results showed an overall classification rate of 78.4%, and only Spatial Addition explained a significant amount of variation (p < .001). Subsequent logistic regression analysis and receiver operating characteristic (ROC) analysis supported the discriminatory power of the subtest Spatial Addition. A scaled score cutoff of <4 produced 93% specificity and 52% sensitivity for detection of suboptimal performance. The WMS-IV-NL Spatial Addition subtest may provide clinically useful information for the detection of suboptimal performance.
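
    A rough sketch of the analysis pipeline described above (binary logistic regression on subtest scores followed by ROC analysis, plus a fixed scaled-score cutoff) is shown below on synthetic data, since the patient data are not available; the score distributions are purely illustrative.

      # Sketch: logistic regression + ROC on synthetic "Spatial Addition"-like scores.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 100
      # Hypothetical scaled scores: simulators (label 1) score lower than patients (label 0).
      scores_patients = rng.normal(8, 3, n).clip(1, 19)
      scores_simulators = rng.normal(4, 2, n).clip(1, 19)
      X = np.concatenate([scores_patients, scores_simulators]).reshape(-1, 1)
      y = np.concatenate([np.zeros(n), np.ones(n)])

      model = LogisticRegression().fit(X, y)
      print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))

      # Fixed cutoff as in the paper (scaled score < 4 flags suboptimal performance).
      flagged = (X.ravel() < 4)
      sensitivity = flagged[y == 1].mean()
      specificity = (~flagged[y == 0]).mean()
      print(f"cutoff <4: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")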

  15. The data embedding method

    Energy Technology Data Exchange (ETDEWEB)

    Sandford, M.T. II; Bradley, J.N.; Handel, T.G.

    1996-06-01

    Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51, written for the IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in Microsoft® bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed "steganography." Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or "lossy," compression algorithms, as for example ones based on the discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one-hundred bytes, and it is generated by the noise-analysis algorithm.
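
    For contrast with the noise-analysis approach described above, the sketch below shows the conventional least-significant-bit (LSB) steganography from which data embedding is explicitly distinguished. It is a simplified illustration only, not the authors' method, and the grayscale host image is synthetic.

      # Simplified illustration: conventional LSB steganography on an 8-bit grayscale image.
      # The paper's data embedding method is *not* LSB replacement; this only shows the
      # general idea of hiding bits in image data.
      import numpy as np

      def embed(host: np.ndarray, payload: bytes) -> np.ndarray:
          bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
          flat = host.flatten().copy()
          if bits.size > flat.size:
              raise ValueError("payload too large for host image")
          flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
          return flat.reshape(host.shape)

      def extract(stego: np.ndarray, n_bytes: int) -> bytes:
          bits = stego.flatten()[:n_bytes * 8] & 1
          return np.packbits(bits).tobytes()

      host = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      stego = embed(host, b"hidden")
      assert extract(stego, 6) == b"hidden"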

  16. Hilbert-Twin – A Novel Hilbert Transform-Based Method To Compute Envelope Of Free Decaying Oscillations Embedded In Noise, And The Logarithmic Decrement In High-Resolution Mechanical Spectroscopy HRMS

    Directory of Open Access Journals (Sweden)

    Magalas L.B.

    2015-06-01

    Full Text Available In this work, we present a novel Hilbert-twin method to compute an envelope and the logarithmic decrement, δ, from exponentially damped time-invariant harmonic strain signals embedded in noise. The results obtained from five computing methods: (1) the parametric OMI (Optimization in Multiple Intervals) method, two interpolated discrete Fourier transform-based (IpDFT) methods: (2) the Yoshida-Magalas (YM) method and (3) the classic Yoshida (Y) method, (4) the novel Hilbert-twin (H-twin) method based on the Hilbert transform, and (5) the conventional Hilbert transform (HT) method are analyzed and compared. The fundamental feature of the Hilbert-twin method is the efficient elimination of intrinsic asymmetrical oscillations of the envelope, aHT(t), obtained from the discrete Hilbert transform of analyzed signals. Excellent performance in estimation of the logarithmic decrement from the Hilbert-twin method is comparable to that of the OMI and YM for the low- and high-damping levels. The Hilbert-twin method proved to be robust and effective in computing the logarithmic decrement and the resonant frequency of exponentially damped free decaying signals embedded in experimental noise. The Hilbert-twin method is also appropriate to detect nonlinearities in mechanical loss measurements of metals and alloys.
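
    As a baseline illustration of the problem addressed above, the sketch below uses the conventional Hilbert-transform envelope (method 5 in the comparison) to estimate the logarithmic decrement of a synthetic damped free-decay signal; the Hilbert-twin refinement that suppresses envelope asymmetry is not reproduced here.

      # Sketch: conventional HT route -- analytic-signal envelope + linear fit of its log.
      import numpy as np
      from scipy.signal import hilbert

      fs, f0, delta_true = 10_000.0, 100.0, 0.02
      t = np.arange(0, 1.0, 1 / fs)
      signal = np.exp(-delta_true * f0 * t) * np.sin(2 * np.pi * f0 * t)
      signal += 0.01 * np.random.randn(t.size)          # measurement noise

      envelope = np.abs(hilbert(signal))                # analytic-signal envelope
      # The log-envelope decays linearly with slope -delta * f0; fit it away from the edges.
      mask = (t > 0.05) & (t < 0.95)
      slope = np.polyfit(t[mask], np.log(envelope[mask]), 1)[0]
      print("estimated logarithmic decrement:", -slope / f0)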

  17. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  18. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  19. Towards OpenVL: Improving Real-Time Performance of Computer Vision Applications

    Science.gov (United States)

    Shen, Changsong; Little, James J.; Fels, Sidney

    Meeting constraints for real-time performance is a main issue for computer vision, especially for embedded computer vision systems. This chapter presents our progress on our open vision library (OpenVL), a novel software architecture to address efficiency through facilitating hardware acceleration, reusability, and scalability for computer vision systems. A logical image understanding pipeline is introduced to allow parallel processing. We also discuss progress on our middleware—vision library utility toolkit (VLUT)—that enables applications to operate transparently over a heterogeneous collection of hardware implementations. OpenVL works as a state machine, with an event-driven mechanism to provide users with application-level interaction. Various explicit or implicit synchronization and communication methods are supported among distributed processes in the logical pipelines. The intent of OpenVL is to allow users to quickly and easily recover useful information from multiple scenes, in a cross-platform, cross-language manner across various software environments and hardware platforms. To validate the critical underlying concepts of OpenVL, a human tracking system and a local positioning system are implemented and described. The novel architecture separates the specification of algorithmic details from the underlying implementation, allowing for different components to be implemented on an embedded system without recompiling code.

  20. Numerical simulation of a hovering rotor using embedded grids

    Science.gov (United States)

    Duque, Earl-Peter N.; Srinivasan, Ganapathi R.

    1992-01-01

    The flow field for a rotor blade in hover was computed by numerically solving the compressible thin-layer Navier-Stokes equations on embedded grids. In this work, three embedded grids were used to discretize the flow field - one for the rotor blade and two to convect the rotor wake. The computations were performed at two hovering test conditions, for a two-bladed rectangular rotor of aspect ratio six. The results compare fairly well with experiment and illustrate the use of embedded grids in solving helicopter-type flow fields.

  1. Performance of adsorbent-embedded heat exchangers using binder-coating method

    KAUST Repository

    Li, Ang; Thu, Kyaw; Ismail, Azhar Bin; Shahzad, Muhammad Wakil; Ng, Kim Choon

    2016-01-01

    The performance of adsorption (AD) chillers or desalination cycles is dictated by the rates of heat and mass transfer of adsorbate in adsorbent-packed beds. Conventional granular-adsorbent, packed in fin-tube heat exchangers, suffered from poor heat

  2. Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV)

    NARCIS (Netherlands)

    Bouman, Z.; Hendriks, M.P.H.; Schmand, B.A.; Kessels, R.P.C.; Aldenkamp, A.P.

    2016-01-01

    INTRODUCTION: Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the

  3. Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV)

    NARCIS (Netherlands)

    Bouman, Zita; Hendriks, Marc P. H.; Schmand, Ben A.; Kessels, Roy P. C.; Aldenkamp, Albert P.

    2016-01-01

    Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the identification of

  4. Indicators of suboptimal performance embedded in the Wechsler Memory Scale : Fourth Edition (WMS-IV)

    NARCIS (Netherlands)

    Bouman, Z.; Hendriks, M.P.H.; Schmand, B.A.; Kessels, R.P.C.; Aldenkamp, A.P.

    2016-01-01

    INTRODUCTION: Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the

  5. Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV)

    NARCIS (Netherlands)

    Bouman, Zita; Hendriks, Marc P.H.; Schmand, Ben A.; Kessels, Roy P.C.; Aldenkamp, Albert P.

    2016-01-01

    Introduction. Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the

  6. Towards cycle-accurate performance predictions for real-time embedded systems

    NARCIS (Netherlands)

    Triantafyllidis, K.; Bondarev, E.; With, de P.H.N.; Arabnia, H.R.; Deligiannidis, L.; Jandieri, G.

    2013-01-01

    In this paper we present a model-based performance analysis method for component-based real-time systems, featuring cycle-accurate predictions of latencies and enhanced system robustness. The method incorporates the following phases: (a) instruction-level profiling of SW components, (b) modeling the

  7. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger "European Supercomputer" in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructure...

  8. A new imidazolium-embedded C{sub 18} stationary phase with enhanced performance in reversed-phase liquid chromatography

    Energy Technology Data Exchange (ETDEWEB)

    Qiu Hongdeng [Department of Applied Chemistry and Biochemistry, Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Lanzhou Institute of Chemical Physics, Chinese Academy of Science, Lanzhou 730000 (China); Mallik, Abul K. [Department of Applied Chemistry and Biochemistry, Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Takafuji, Makoto [Department of Applied Chemistry and Biochemistry, Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Kumamoto Institute for Photo-Electro Organics (Phoenics), Kumamoto 862-0901 (Japan); Liu Xia; Jiang Shengxiang [Lanzhou Institute of Chemical Physics, Chinese Academy of Science, Lanzhou 730000 (China); Ihara, Hirotaka, E-mail: ihara@kumamoto-u.ac.jp [Department of Applied Chemistry and Biochemistry, Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Kumamoto Institute for Photo-Electro Organics (Phoenics), Kumamoto 862-0901 (Japan)

    2012-08-13

    Highlights: ► Imidazolium-embedded C18 stationary phase was prepared and characterized. ► Enhanced chromatographic selectivity was observed in the SiImC18 column. ► Seven nucleosides and bases were separated using only water as eluent within 8 min. ► Multiple interactions induced by the embedded polar imidazolium were investigated. - Abstract: In this paper, a new imidazolium-embedded C18 stationary phase (SiImC18) for reversed-phase high-performance liquid chromatography is described. 1-Allyl-3-octadecylimidazolium bromide, an ionic liquid compound having a long alkyl chain and reactive groups, was newly prepared and grafted onto 3-mercaptopropyltrimethoxysilane-modified silica via a surface-initiated radical-chain transfer addition reaction. The SiImC18 obtained was characterized by elemental analysis, infrared spectroscopy, thermogravimetric analysis, diffuse reflectance infrared Fourier transform, and solid-state 13C and 29Si cross-polarization/magic angle spinning nuclear magnetic resonance spectroscopy. The selectivity toward polycyclic aromatic hydrocarbons relative to that toward alkylbenzenes exhibited by SiImC18 was higher than the corresponding selectivity exhibited by a conventional octadecyl silica (ODS) column, which could be explained by electrostatic π-π interaction between the cationic imidazolium and electron-rich aromatic rings. On the other hand, SiImC18 also showed high selectivity for polar compounds, which was based on the multiple interaction and retention mechanisms of this phase with different analytes. 1,6-Dinitropyrene and 1,8-dinitropyrene, which form a positional isomer pair of dipolar compounds, were separated successfully with the SiImC18 phase. Seven nucleosides and bases (i.e. cytidine, uracil, uridine, thymine, guanosine, xanthosine, and adenosine) were separated using only water as eluent.

  9. Amorphous Red Phosphorus Embedded in Sandwiched Porous Carbon Enabling Superior Sodium Storage Performances.

    Science.gov (United States)

    Wu, Ying; Liu, Zheng; Zhong, Xiongwu; Cheng, Xiaolong; Fan, Zhuangjun; Yu, Yan

    2018-03-01

    The red P anode for sodium ion batteries has attracted great attention recently due to the high theoretical capacity, but the poor intrinsic electronic conductivity and large volume expansion restrain its widespread applications. Herein, the red P is successfully encapsulated into the cube shaped sandwich-like interconnected porous carbon building (denoted as P@C-GO/MOF-5) via the vaporization-condensation method. Superior cycling stability (high capacity retention of about 93% at 2 A g -1 after 100 cycles) and excellent rate performance (502 mAh g -1 at 10 A g -1 ) can be obtained for the P@C-GO/MOF-5 electrode. The superior electrochemical performance can be ascribed to the successful incorporation of red P into the unique carbon matrix with large surface area and pore volume, interconnected porous structure, excellent electronic conductivity and superior structural stability. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Experimental Study on the Seismic Performance of Recycled Concrete Brick Walls Embedded with Vertical Reinforcement

    Science.gov (United States)

    Cao, Wanlin; Zhang, Yongbo; Dong, Hongying; Zhou, Zhongyi; Qiao, Qiyun

    2014-01-01

    Recycled concrete brick (RCB) is manufactured by recycled aggregate processed from discarded concrete blocks arising from the demolishing of existing buildings. This paper presents research on the seismic performance of RCB masonry walls to assess the applicability of RCB for use in rural low-rise constructions. The seismic performance of a masonry wall is closely related to the vertical load applied to the wall. Thus, the compressive performance of RCB masonry was investigated firstly by constructing and testing eighteen RCB masonry compressive specimens with different mortar strengths. The load-bearing capacity, deformation and failure characteristic were analyzed, as well. Then, a quasi-static test was carried out to study the seismic behavior of RCB walls by eight RCB masonry walls subjected to an axial compressive load and a reversed cyclic lateral load. Based on the test results, equations for predicting the compressive strength of RCB masonry and the lateral ultimate strength of an RCB masonry wall were proposed. Experimental values were found to be in good agreement with the predicted values. Meanwhile, finite element analysis (FEA) and parametric analysis of the RCB walls were carried out using ABAQUS software. The elastic-plastic deformation characteristics and the lateral load-displacement relations were studied. PMID:28788170

  11. Experimental Study on the Seismic Performance of Recycled Concrete Brick Walls Embedded with Vertical Reinforcement.

    Science.gov (United States)

    Cao, Wanlin; Zhang, Yongbo; Dong, Hongying; Zhou, Zhongyi; Qiao, Qiyun

    2014-08-19

    Recycled concrete brick (RCB) is manufactured by recycled aggregate processed from discarded concrete blocks arising from the demolishing of existing buildings. This paper presents research on the seismic performance of RCB masonry walls to assess the applicability of RCB for use in rural low-rise constructions. The seismic performance of a masonry wall is closely related to the vertical load applied to the wall. Thus, the compressive performance of RCB masonry was investigated firstly by constructing and testing eighteen RCB masonry compressive specimens with different mortar strengths. The load-bearing capacity, deformation and failure characteristic were analyzed, as well. Then, a quasi-static test was carried out to study the seismic behavior of RCB walls by eight RCB masonry walls subjected to an axial compressive load and a reversed cyclic lateral load. Based on the test results, equations for predicting the compressive strength of RCB masonry and the lateral ultimate strength of an RCB masonry wall were proposed. Experimental values were found to be in good agreement with the predicted values. Meanwhile, finite element analysis (FEA) and parametric analysis of the RCB walls were carried out using ABAQUS software. The elastic-plastic deformation characteristics and the lateral load-displacement relations were studied.

  12. Experimental Study on the Seismic Performance of Recycled Concrete Brick Walls Embedded with Vertical Reinforcement

    Directory of Open Access Journals (Sweden)

    Wanlin Cao

    2014-08-01

    Full Text Available Recycled concrete brick (RCB is manufactured by recycled aggregate processed from discarded concrete blocks arising from the demolishing of existing buildings. This paper presents research on the seismic performance of RCB masonry walls to assess the applicability of RCB for use in rural low-rise constructions. The seismic performance of a masonry wall is closely related to the vertical load applied to the wall. Thus, the compressive performance of RCB masonry was investigated firstly by constructing and testing eighteen RCB masonry compressive specimens with different mortar strengths. The load-bearing capacity, deformation and failure characteristic were analyzed, as well. Then, a quasi-static test was carried out to study the seismic behavior of RCB walls by eight RCB masonry walls subjected to an axial compressive load and a reversed cyclic lateral load. Based on the test results, equations for predicting the compressive strength of RCB masonry and the lateral ultimate strength of an RCB masonry wall were proposed. Experimental values were found to be in good agreement with the predicted values. Meanwhile, finite element analysis (FEA and parametric analysis of the RCB walls were carried out using ABAQUS software. The elastic-plastic deformation characteristics and the lateral load-displacement relations were studied.

  13. Performance of adsorbent-embedded heat exchangers using binder-coating method

    KAUST Repository

    Li, Ang

    2016-01-01

    The performance of adsorption (AD) chillers or desalination cycles is dictated by the rates of heat and mass transfer of adsorbate in adsorbent-packed beds. Conventional granular adsorbent, packed in fin-tube heat exchangers, suffers from poor heat transfer in the heating (desorption) and cooling (adsorption) processes of the batch-operated cycles, with undesirable performance parameters such as a higher plant footprint, low coefficient of performance (COP) of AD cycles and higher capital cost of the machines. The motivation of the present work is to mitigate the heat and mass "bottlenecks" of fin-tube heat exchangers by using a powdered adsorbent cum binder coated onto the fin surfaces of exchangers. Suitable adsorbent-binder pairs have been identified for the silica gel adsorbent with pore surface areas up to 680 m2/g and pore diameters less than 6 nm. The parent silica gel remains largely unaffected despite being pulverized into fine particles of 100 μm, and yet maintains its water uptake characteristics. The paper presents an experimental study on the selection and testing processes to achieve high efficacy of adsorbent-binder coated exchangers. The test results indicate a 3.4-4.6-fold improvement in heat transfer rates over the conventional granular-packed method, resulting in a 1.5-2 times faster rate of water uptake for the suitable silica gel type. © 2015 Elsevier Ltd. All rights reserved.

  14. Noncredible cognitive performance at clinical evaluation of adult ADHD: An embedded validity indicator in a visuospatial working memory test.

    Science.gov (United States)

    Fuermaier, Anselm B M; Tucha, Oliver; Koerts, Janneke; Lange, Klaus W; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Lara

    2017-12-01

    The assessment of performance validity is an essential part of the neuropsychological evaluation of adults with attention-deficit/hyperactivity disorder (ADHD). Most available tools, however, are inaccurate regarding the identification of noncredible performance. This study describes the development of a visuospatial working memory test, including a validity indicator for noncredible cognitive performance of adults with ADHD. Visuospatial working memory of adults with ADHD (n = 48) was first compared to the test performance of healthy individuals (n = 48). Furthermore, a simulation design was performed including 252 individuals who were randomly assigned to either a control group (n = 48) or to 1 of 3 simulation groups who were requested to feign ADHD (n = 204). Additional samples of 27 adults with ADHD and 69 instructed simulators were included to cross-validate findings from the first samples. Adults with ADHD showed impaired visuospatial working memory performance of medium size as compared to healthy individuals. Simulation groups committed significantly more errors and had shorter response times as compared to patients with ADHD. Moreover, binary logistic regression analysis was carried out to derive a validity index that optimally differentiates between true and feigned ADHD. ROC analysis demonstrated high classification rates of the validity index, as shown in excellent specificity (95.8%) and adequate sensitivity (60.3%). The visuospatial working memory test as presented in this study therefore appears sensitive in indicating cognitive impairment of adults with ADHD. Furthermore, the embedded validity index revealed promising results concerning the detection of noncredible cognitive performance of adults with ADHD. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. The utility of a continuous performance test embedded in virtual reality in measuring ADHD-related deficits.

    Science.gov (United States)

    Pollak, Yehuda; Weiss, Patricia L; Rizzo, Albert A; Weizer, Merav; Shriki, Liron; Shalev, Ruth S; Gross-Tsur, Varda

    2009-02-01

    Continuous performance tasks (CPTs) are popular in the diagnostic process of Attention Deficit/Hyperactivity Disorder (ADHD), providing an objective measure of attention for a disorder with otherwise subjective criteria. The aims of the study were to: (1) compare the performance of children with ADHD on a CPT embedded within a virtual reality classroom (VR-CPT) to the currently used Test of Variables of Attention (TOVA) CPT, and (2) assess how the VR environment is experienced. Thirty-seven boys, 9 to 17 years, with (n = 20) and without ADHD (n = 17) underwent three CPTs: the VR-CPT, the same CPT without VR (No VR-CPT), and the TOVA. Immediately following the CPTs, subjects described their subjective experiences on the Short Feedback Questionnaire. Results were analyzed using analysis of variance with repeated measures. Children with ADHD performed more poorly on all CPTs. The VR-CPT showed effect sizes similar to the TOVA. Subjective feelings of enjoyment were most positive for the VR-CPT. The VR-CPT is a sensitive and user-friendly assessment tool to aid diagnosis in ADHD.

  16. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  17. Computer Self-Efficacy, Computer Anxiety, Performance and Personal Outcomes of Turkish Physical Education Teachers

    Science.gov (United States)

    Aktag, Isil

    2015-01-01

    The purpose of this study is to determine the computer self-efficacy, performance outcome, personal outcome, and affect and anxiety level of physical education teachers. Influence of teaching experience, computer usage and participation of seminars or in-service programs on computer self-efficacy level were determined. The subjects of this study…

  18. Wireless Performance of a Fully Passive Neurorecording Microsystem Embedded in Dispersive Human Head Phantom

    Science.gov (United States)

    Schwerdt, Helen N.; Chae, Junseok; Miranda, Felix A.

    2012-01-01

    This paper reports the wireless performance of a biocompatible fully passive microsystem implanted in phantom media simulating the dispersive dielectric properties of the human head, for potential application in recording cortical neuropotentials. Fully passive wireless operation is achieved by means of backscattering electromagnetic (EM) waves carrying third-order harmonic mixing products (2f₀ ± fₘ = 4.4-4.9 GHz) containing the targeted neuropotential signals (fₘ ≈ 1-1000 Hz). The microsystem is enclosed in 4-micrometer-thick parylene-C for biocompatibility and has a footprint of 4 millimeters x 12 millimeters x 500 micrometers. Preliminary testing of the microsystem implanted in the lossy biological simulating media results in signal-to-noise ratios (SNR) near 22 (SNR ≈ 38 in free space) for millivolt-level neuropotentials, demonstrating the potential for fully passive wireless microsystems in implantable medical applications.

  19. Scheduling and Optimization of Fault-Tolerant Embedded Systems with Transparency/Performance Trade-Offs

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2012-01-01

    In this article, we propose a strategy for the synthesis of fault-tolerant schedules and for the mapping of fault-tolerant applications. Our techniques handle transparency/performance trade-offs and use the fault-occurrence information to reduce the overhead due to fault tolerance. Processes and messages are statically scheduled, and we use process reexecution for recovering from multiple transient faults. We propose a fine-grained transparent recovery, where the property of transparency can be selectively applied to processes and messages. Transparency hides the recovery actions in a selected part of the application so that they do not affect the schedule of other processes and messages. While leading to longer schedules, transparent recovery has the advantage of both improved debuggability and less memory needed to store the fault-tolerant schedules.

  20. Computer performance evaluation of FACOM 230-75 computer system, (2)

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1980-08-01

    In this report are described computer performance evaluations for the FACOM 230-75 computers in JAERI. The evaluations are performed on the following items: (1) Cost/benefit analysis of timesharing terminals, (2) Analysis of the response time of timesharing terminals, (3) Analysis of throughput time for batch job processing, (4) Estimation of current potential demands for computer time, (5) Determination of the appropriate number of card readers and line printers. These evaluations are done mainly from the standpoint of cost reduction of computing facilities. The techniques adopted are very practical ones. This report will be useful for those people who are concerned with the management of a computing installation. (author)

  1. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.

  2. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  3. A High-Performance Embedded Hybrid Methodology for Uncertainty Quantification With Applications

    Energy Technology Data Exchange (ETDEWEB)

    Iaccarino, Gianluca

    2014-04-01

    Multiphysics processes modeled by a system of unsteady differential equations are naturally suited for partitioned (modular) solution strategies. We consider such a model where probabilistic uncertainties are present in each module of the system and represented as a set of random input parameters. A straightforward approach to quantifying uncertainties in the predicted solution would be to sample all the input parameters into a single set, and treat the full system as a black box. Although this method is easily parallelizable and requires minimal modifications to the deterministic solver, it is blind to the modular structure of the underlying multiphysical model. On the other hand, using spectral representations such as polynomial chaos expansions (PCE) can provide richer structural information regarding the dynamics of these uncertainties as they propagate from the inputs to the predicted output, but can be prohibitively expensive to implement in the high-dimensional global space of uncertain parameters. Therefore, we investigated hybrid methodologies wherein each module has the flexibility of using sampling- or PCE-based methods for capturing local uncertainties while maintaining accuracy in the global uncertainty analysis. For the latter case, we use a conditional PCE model which mitigates the curse of dimension associated with intrusive Galerkin or semi-intrusive Pseudospectral methods. After formalizing the theoretical framework, we demonstrate our proposed method using a numerical viscous flow simulation and benchmark the performance against a solely Monte-Carlo method and a solely spectral method.
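
    The "black-box sampling" baseline described above can be sketched in a few lines: draw all uncertain input parameters jointly, run the coupled model once per sample, and estimate output statistics. The two-module model below is a stand-in for the multiphysics solver, and the PCE-based hybrid approach is not reproduced.

      # Sketch: Monte Carlo propagation through a two-module stand-in model.
      import numpy as np

      rng = np.random.default_rng(1)

      def module_a(x1):           # stand-in for the first physics module
          return np.sin(x1) + 0.1 * x1**2

      def module_b(u, x2):        # stand-in for the second, coupled module
          return u * np.exp(-0.5 * x2)

      n_samples = 10_000
      x1 = rng.normal(1.0, 0.2, n_samples)     # uncertain input of module A
      x2 = rng.uniform(0.0, 1.0, n_samples)    # uncertain input of module B

      q = module_b(module_a(x1), x2)           # propagate each joint sample end to end
      print(f"mean {q.mean():.4f}, std {q.std():.4f}")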

  4. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/5524--17-9751: High Performance Computing Modernization Program Kerberos Throughput Test Report, by Daniel G. Gdula et al.

  5. Vapor Annealing Controlled Crystal Growth and Photovoltaic Performance of Bismuth Triiodide Embedded in Mesostructured Configurations.

    Science.gov (United States)

    Kulkarni, Ashish; Singh, Trilok; Jena, Ajay K; Pinpithak, Peerathat; Ikegami, Masashi; Miyasaka, Tsutomu

    2018-03-21

    Low stability of organic-inorganic lead halide perovskite and the toxicity of lead (Pb) still remain a concern. Therefore, there is a constant quest for alternative nontoxic and stable light-absorbing materials with promising optoelectronic properties. Herein, we report a nontoxic bismuth triiodide (BiI3) photovoltaic device prepared using a TiO2 mesoporous film and spiro-OMeTAD as electron- and hole-transporting materials, respectively. The effects of annealing methods (e.g., thermal annealing (TA), solvent vapor annealing (SVA), and Petri-dish-covered recycled vapor annealing (PR-VA)) and different annealing temperatures (90, 120, 150, and 180 °C for PR-VA) on BiI3 film morphology have been investigated. As found in the study, grain size increased and film uniformity improved as the temperature was raised from 90 to 150 °C. The photovoltaic devices based on BiI3 films processed at 150 °C with PR-VA treatment showed a power conversion efficiency (PCE) of 0.5% with high reproducibility, which is, so far, the best PCE reported for a BiI3 photovoltaic device employing an organic hole-transporting material (HTM), owing to the increase in grain size and uniform morphology of the BiI3 film. These devices showed stable performance even after 30 days of exposure to 50% relative humidity, and after a 100 °C heat stress and 20 min light soaking test. More importantly, the study reveals many challenges and room (discussed in detail) for further development of BiI3 photovoltaic devices.

  6. Performance evaluation of computer and communication systems

    CERN Document Server

    Le Boudec, Jean-Yves

    2011-01-01

    … written by a scientist successful in performance evaluation, it is based on his experience and provides many ideas not only to laymen entering the field, but also to practitioners looking for inspiration. The work can be read systematically as a textbook on how to model and test the derived hypotheses on the basis of simulations. Also, separate parts can be studied, as the chapters are self-contained. … the book can be successfully used either for self-study or as a supplementary book for a lecture. I believe that different types of readers will like it: practicing engineers and resea

  7. Computer task performance by subjects with Duchenne muscular dystrophy.

    Science.gov (United States)

    Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira

    2016-01-01

    Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.

  8. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing the computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
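
    A minimal sketch of the bibliometric workflow described above: build a co-authorship graph from per-paper author lists, rank authors by paper count, and compute a simple network measure. The author lists below are placeholders; the study used Scopus records for 2004-2013, and networkx is an assumed dependency.

      # Sketch: co-authorship network from placeholder author lists.
      from itertools import combinations
      from collections import Counter
      import networkx as nx

      papers = [
          ["Kim", "Lee", "Park"],
          ["Kim", "Choi"],
          ["Lee", "Park", "Choi"],
      ]

      paper_counts = Counter(a for authors in papers for a in authors)
      G = nx.Graph()
      for authors in papers:
          for a, b in combinations(sorted(set(authors)), 2):   # one edge per co-author pair
              w = G.get_edge_data(a, b, default={}).get("weight", 0)
              G.add_edge(a, b, weight=w + 1)

      print("top authors by paper count:", paper_counts.most_common(2))
      print("degree centrality:", nx.degree_centrality(G))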

  9. Methylphenidate effect in children with ADHD can be measured by an ecologically valid continuous performance test embedded in virtual reality.

    Science.gov (United States)

    Pollak, Yehuda; Shomaly, Hanan Barhoum; Weiss, Patrice L; Rizzo, Albert A; Gross-Tsur, Varda

    2010-02-01

    Continuous performance tasks (CPTs) embedded in a virtual reality (VR) classroom environment have been shown to be a sensitive and user-friendly assessment tool to detect cognitive deficits related to attention-deficit/hyperactivity disorder (ADHD). The aim of the current study was to compare the performance of children with ADHD on a VR-CPT while on and off treatment with methylphenidate (MPH) and to compare the VR-CPT to a currently used CPT, Test of Variables of Attention (TOVA). Twenty-seven children with ADHD underwent the VR-CPT, the same CPT without VR (no VR-CPT), and the TOVA, 1 hour after the ingestion of either placebo or 0.3 mg/kg MPH, in a double-blind, placebo-controlled, crossover design. Immediately following CPT, subjects described their subjective experiences on the Short Feedback Questionnaire. MPH reduced omission errors to a greater extent on the VR-CPT compared to the no VR-CPT and the TOVA, and decreased other CPT measures on all types of CPT to a similar degree. Children rated the VR-CPT as more enjoyable compared to the other types of CPT. It is concluded that the VR-CPT is a sensitive and user-friendly assessment tool in measuring the response to MPH in children with ADHD.

  10. Au-embedded ZnO/NiO hybrid with excellent electrochemical performance as advanced electrode materials for supercapacitor.

    Science.gov (United States)

    Zheng, Xin; Yan, Xiaoqin; Sun, Yihui; Bai, Zhiming; Zhang, Guangjie; Shen, Yanwei; Liang, Qijie; Zhang, Yue

    2015-02-04

    Here we design a nanostructure by embedding Au nanoparticles into ZnO/NiO core-shell composites as supercapacitor electrode materials. This optimized hybrid electrode exhibited excellent electrochemical performance, including long-term cycling stability and a maximum specific areal capacitance of 4.1 F/cm(2) at a current density of 5 mA/cm(2), which is much higher than that of the ZnO/NiO hierarchical materials (0.5 F/cm(2)). Such an enhanced property is attributed to the increased electrode-electrolyte interfaces, short electron diffusion pathways and good electrical conductivity. Apart from this, electrons can be temporarily trapped and accumulated at the Fermi level (EF') because of the localized Schottky barrier at the Au/NiO interface during the charge process until they fill the gap between ZnO and NiO, so that additional electrons can be released during discharge. These results demonstrate that suitable interface engineering may open up new opportunities in the development of high-performance supercapacitors.

  11. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenges the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  12. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  13. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenges the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  14. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  15. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  16. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application.

  17. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
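
    As a sketch of the hybrid message-passing/multi-threading pattern described above, the following distributed matrix multiply assumes mpi4py and NumPy are available; the on-node threading comes from whatever multi-threaded BLAS NumPy is linked against rather than explicit OpenMP, and the file name in the usage line is hypothetical.

        # Hybrid distributed/threaded matrix multiply: each MPI rank owns a
        # block of rows of A, B is replicated, and the local product uses the
        # threaded BLAS behind NumPy's matmul.
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 512                               # global size, assumed divisible by the rank count
        rows = n // size

        A_local = np.random.rand(rows, n)     # this rank's block of A
        B = np.random.rand(n, n) if rank == 0 else np.empty((n, n))
        comm.Bcast(B, root=0)                 # replicate B on every rank

        C_local = A_local @ B                 # node-local, multi-threaded compute

        C = np.empty((n, n)) if rank == 0 else None
        comm.Gather(C_local, C, root=0)       # assemble the distributed result
        if rank == 0:
            print("C:", C.shape)

    Launched with, e.g., mpiexec -n 4 python hybrid_matmul.py, each of the four ranks multiplies its 128-row block while the BLAS threads use the cores within the node.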

  18. High performance SONOS flash memory with in-situ silicon nanocrystals embedded in silicon nitride charge trapping layer

    Science.gov (United States)

    Lim, Jae-Gab; Yang, Seung-Dong; Yun, Ho-Jin; Jung, Jun-Kyo; Park, Jung-Hyun; Lim, Chan; Cho, Gyu-seok; Park, Seong-gye; Huh, Chul; Lee, Hi-Deok; Lee, Ga-Won

    2018-02-01

    In this paper, a SONOS-type flash memory device with highly improved charge-trapping efficiency is presented, using silicon nanocrystals (Si-NCs) embedded in a silicon nitride (SiNx) charge-trapping layer. The Si-NCs were grown in situ by PECVD without an additional post-annealing process. The fabricated device shows high program/erase speed and retention properties suitable for multi-level cell (MLC) application. Excellent performance and reliability for MLC are demonstrated, with a large memory window of ∼8.5 V and superior retention characteristics of 7% charge loss over 10 years. High-resolution transmission electron microscopy confirms the Si-NC formation, with sizes of around 1-2 nm, which is corroborated by X-ray photoelectron spectroscopy (XPS), where pure Si bonds increase. In addition, the XPS analysis implies that more nitrogen atoms form stable bonds at regular lattice points. Photoluminescence spectra further indicate that Si-NC formation in SiNx is an effective way to create deep trap states.

  19. Evaluating the accuracy of the Wechsler Memory Scale-Fourth Edition (WMS-IV) logical memory embedded validity index for detecting invalid test performance.

    Science.gov (United States)

    Soble, Jason R; Bain, Kathleen M; Bailey, K Chase; Kirton, Joshua W; Marceaux, Janice C; Critchfield, Edan A; McCoy, Karin J M; O'Rourke, Justin J F

    2018-01-08

    Embedded performance validity tests (PVTs) allow for continuous assessment of invalid performance throughout neuropsychological test batteries. This study evaluated the utility of the Wechsler Memory Scale-Fourth Edition (WMS-IV) Logical Memory (LM) Recognition score as an embedded PVT using the Advanced Clinical Solutions (ACS) for WAIS-IV/WMS-IV Effort System. The mixed clinical sample comprised 97 participants, 71 of whom were classified as valid and 26 as invalid based on three well-validated, freestanding criterion PVTs. Overall, the LM embedded PVT demonstrated poor concordance with the criterion PVTs and unacceptable psychometric properties using ACS validity base rates (42% sensitivity/79% specificity). Moreover, 15-39% of participants obtained an invalid ACS base rate despite having a normatively intact age-corrected LM Recognition total score. Receiver operating characteristic curve analysis revealed that a Recognition total score cutoff of < 61% correct improved specificity (92%) while sensitivity remained weak (31%). Thus, results indicated the LM Recognition embedded PVT is not appropriate for use from an evidence-based perspective, and that clinicians may be faced with reconciling how a normatively intact cognitive performance on the Recognition subtest could simultaneously reflect invalid performance validity.

  20. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  1. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  2. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
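
    To illustrate the kind of particle loop being tuned, here is a heavily simplified, NumPy-only sketch of the SPH density summation; it is not the authors' code, and the compact-support kernel and neighbor search of a real SPH solver are replaced by an all-pairs Gaussian purely to show the data-access pattern.

        # Simplified SPH density summation: rho_i = sum_j m_j * W(|r_i - r_j|, h).
        # Real codes use neighbor lists and compact-support kernels; this all-pairs
        # version only illustrates the memory-access pattern being parallelized.
        import numpy as np

        def sph_density(pos: np.ndarray, mass: np.ndarray, h: float) -> np.ndarray:
            d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # pairwise distances
            w = np.exp(-(d / h) ** 2) / (np.pi ** 1.5 * h ** 3)             # Gaussian kernel
            return w @ mass                                                  # density per particle

        rng = np.random.default_rng(0)
        pos = rng.random((1000, 3))                 # particle positions
        mass = np.full(1000, 1.0 / 1000)            # equal particle masses
        print(sph_density(pos, mass, h=0.1)[:5])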

  3. Embedding Topical Elements of Parallel Programming, Computer Graphics, and Artificial Intelligence across the Undergraduate CS Required Courses

    Directory of Open Access Journals (Sweden)

    James Wolfer

    2015-02-01

    Traditionally, topics such as parallel computing, computer graphics, and artificial intelligence have been taught as stand-alone courses in the computing curriculum. Often these are elective courses, limiting the material to the subset of students choosing to take the course. Recently there has been movement to distribute topics across the curriculum in order to ensure that all graduates have been exposed to concepts such as parallel computing. Previous work described an attempt to systematically weave a tapestry of topics into the undergraduate computing curriculum. This paper reviews that work and expands it with representative examples of assignments, demonstrations, and results, as well as describing how the tools and examples deployed for these classes have a residual effect on classes such as Computer Literacy.

  4. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  5. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  6. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  7. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  8. An Overview of Reconfigurable Hardware in Embedded Systems

    Directory of Open Access Journals (Sweden)

    Wenyin Fu

    2006-09-01

    Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.

  9. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Tryggvason, Tryggvi

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance...

  10. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  11. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  12. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  13. Survey of computer codes applicable to waste facility performance evaluations

    International Nuclear Information System (INIS)

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study is an effort to review existing information that is useful to develop an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful for the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs

  14. Routing performance analysis and optimization within a massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  15. Design and Performance Evaluation of an Adaptive Resource Management Framework for Distributed Real-Time and Embedded Systems

    Directory of Open Access Journals (Sweden)

    Chen Yingming

    2008-01-01

    Achieving end-to-end quality of service (QoS) in distributed real-time embedded (DRE) systems requires QoS support and enforcement from their underlying operating platforms that integrate many real-time capabilities, such as QoS-enabled network protocols, real-time operating system scheduling mechanisms and policies, and real-time middleware services. As standards-based quality-of-service (QoS)-enabled component middleware automates integration and configuration activities, it is increasingly being used as a platform for developing open DRE systems that execute in environments where operational conditions, input workload, and resource availability cannot be characterized accurately a priori. Although QoS-enabled component middleware offers many desirable features, it has historically lacked the ability to allocate resources efficiently and enable the system to adapt to fluctuations in input workload, resource availability, and operating conditions. This paper presents three contributions to research on adaptive resource management for component-based open DRE systems. First, we describe the structure and functionality of the resource allocation and control engine (RACE), which is an open-source adaptive resource management framework built atop standards-based QoS-enabled component middleware. Second, we demonstrate and evaluate the effectiveness of RACE in the context of a representative open DRE system: NASA's magnetospheric multiscale mission system. Third, we present an empirical evaluation of RACE's scalability as the number of nodes and applications in a DRE system grows. Our results show that RACE is a scalable adaptive resource management framework and yields a predictable and high-performance system, even in the face of changing operational conditions and input workload.

  16. Design and Performance Evaluation of an Adaptive Resource Management Framework for Distributed Real-Time and Embedded Systems

    Directory of Open Access Journals (Sweden)

    Chenyang Lu

    2008-04-01

    Achieving end-to-end quality of service (QoS) in distributed real-time embedded (DRE) systems requires QoS support and enforcement from their underlying operating platforms that integrate many real-time capabilities, such as QoS-enabled network protocols, real-time operating system scheduling mechanisms and policies, and real-time middleware services. As standards-based quality-of-service (QoS)-enabled component middleware automates integration and configuration activities, it is increasingly being used as a platform for developing open DRE systems that execute in environments where operational conditions, input workload, and resource availability cannot be characterized accurately a priori. Although QoS-enabled component middleware offers many desirable features, it has historically lacked the ability to allocate resources efficiently and enable the system to adapt to fluctuations in input workload, resource availability, and operating conditions. This paper presents three contributions to research on adaptive resource management for component-based open DRE systems. First, we describe the structure and functionality of the resource allocation and control engine (RACE), which is an open-source adaptive resource management framework built atop standards-based QoS-enabled component middleware. Second, we demonstrate and evaluate the effectiveness of RACE in the context of a representative open DRE system: NASA's magnetospheric multiscale mission system. Third, we present an empirical evaluation of RACE's scalability as the number of nodes and applications in a DRE system grows. Our results show that RACE is a scalable adaptive resource management framework and yields a predictable and high-performance system, even in the face of changing operational conditions and input workload.

  17. An Information Technology Framework for the Development of an Embedded Computer System for the Remote and Non-Destructive Study of Sensitive Archaeology Sites

    Directory of Open Access Journals (Sweden)

    Iliya Georgiev

    2017-04-01

    The paper proposes an information technology framework for the development of an embedded remote system for non-destructive observation and study of sensitive archaeological sites. The overall concept and motivation are described. The general hardware layout and software configuration are presented. The paper concentrates on the implementation of the following information technology components: (a) a geographically unique identification scheme supporting a global key space for a key-value store; (b) a common method for octree modeling of spatial geometrical models of the archaeological artifacts, and abstract object representation in the global key space; (c) a broadcast of the archaeological information as an Extensible Markup Language (XML) stream over the Web for worldwide availability; and (d) a set of testing methods increasing the fault tolerance of the system. This framework can serve as a foundation for the development of a complete system for remote archaeological exploration of enclosed archaeological sites such as buried churches, tombs, and caves. An archaeological site is opened once upon discovery, the embedded computer system is installed inside upon a robotic platform, equipped with sensors, cameras, and actuators, and the intact site is sealed again. Archaeological research is conducted on a multimedia data stream which is sent remotely from the system and conforms to necessary standards for digital archaeology.
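
    A minimal sketch of two of the components listed above, the globally unique key space and octree-based spatial indexing, is given below; every name is hypothetical and the hashing and locational-code scheme are stand-ins for whatever the framework actually specifies.

        # Toy versions of (a) a globally unique key for the key-value store and
        # (b) point insertion into an octree via a locational code. Illustrative only.
        import hashlib

        def global_key(site_id: str, lat: float, lon: float, artifact: str) -> str:
            raw = f"{site_id}:{lat:.6f}:{lon:.6f}:{artifact}"
            return hashlib.sha256(raw.encode()).hexdigest()     # geographically unique key

        class Octree:
            """Point octree over the cube [0, size)^3, cells addressed by locational code."""
            def __init__(self, size: float, depth: int = 6):
                self.size, self.depth = size, depth
                self.cells = {}                                  # locational code -> payloads

            def insert(self, x: float, y: float, z: float, payload) -> str:
                cx = cy = cz = 0.0
                half, code = self.size / 2.0, ""
                for _ in range(self.depth):                      # descend one level per digit
                    octant = (int(x >= cx + half) | (int(y >= cy + half) << 1)
                              | (int(z >= cz + half) << 2))
                    cx += half if x >= cx + half else 0.0
                    cy += half if y >= cy + half else 0.0
                    cz += half if z >= cz + half else 0.0
                    half /= 2.0
                    code += str(octant)
                self.cells.setdefault(code, []).append(payload)
                return code                                      # usable as part of the store key

        tree = Octree(size=10.0)
        print(tree.insert(1.0, 2.0, 3.0, {"key": global_key("site-01", 41.0, 23.5, "fresco")}))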

  18. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  19. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  20. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  1. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  2. Smart Multicore Embedded Systems

    DEFF Research Database (Denmark)

    This book provides a single-source reference to the state-of-the-art of high-level programming models and compilation tool-chains for embedded system platforms. The authors address challenges faced by programmers developing software to implement parallel applications in embedded systems, where very … specificities of various embedded systems from different industries. Parallel programming tool-chains are described that take as input parameters both the application and the platform model, then determine relevant transformations and mapping decisions on the concrete platform, minimizing user intervention … and hiding the difficulties related to the correct and efficient use of memory hierarchy and low-level code generation. Describes tools and programming models for multicore embedded systems; emphasizes throughout performance-per-watt scalability; discusses realistic limits of software parallelization; enables …

  3. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,024 Intel Itanium processors) system. The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses of any modern computer room.

  4. Embedded Thermal Control for Spacecraft Subsystems Miniaturization

    Science.gov (United States)

    Didion, Jeffrey R.

    2014-01-01

    Optimization of spacecraft size, weight and power (SWaP) resources is an explicit technical priority at Goddard Space Flight Center. Embedded Thermal Control Subsystems are a promising technology with many cross-cutting NASA, DoD and commercial applications: 1) CubeSat/SmallSat spacecraft architecture, 2) high-performance computing, 3) on-board spacecraft electronics, 4) power electronics and RF arrays. The Embedded Thermal Control Subsystem technology development efforts focus on component, board and enclosure level devices that will ultimately include intelligent capabilities. The presentation will discuss electric, capillary and hybrid-based hardware research and development efforts at Goddard Space Flight Center. The Embedded Thermal Control Subsystem development program consists of interrelated sub-initiatives, e.g., chip component level thermal control devices, self-sensing thermal management, and advanced manufactured structures. This presentation includes technical status and progress on each of these investigations. Future sub-initiatives, technical milestones and program goals will be presented.

  5. Embedded Systems Design with FPGAs

    CERN Document Server

    Pnevmatikatos, Dionisios; Sklavos, Nicolas

    2013-01-01

    This book presents methodologies for modern applications of embedded systems design, using field programmable gate array (FPGA) devices.  Coverage includes state-of-the-art research from academia and industry on a wide range of topics, including advanced electronic design automation (EDA), novel system architectures, embedded processors, arithmetic, dynamic reconfiguration and applications. Describes a variety of methodologies for modern embedded systems design;  Implements methodologies presented on FPGAs; Covers a wide variety of applications for reconfigurable embedded systems, including Bioinformatics, Communications and networking, Application acceleration, Medical solutions, Experiments for high energy physics, Astronomy, Aerospace, Biologically inspired systems and Computational fluid dynamics (CFD).

  6. Polarizable Density Embedding

    DEFF Research Database (Denmark)

    Olsen, Jógvan Magnus Haugaard; Steinmann, Casper; Ruud, Kenneth

    2015-01-01

    We present a new QM/QM/MM-based model for calculating molecular properties and excited states of solute-solvent systems. We denote this new approach the polarizable density embedding (PDE) model and it represents an extension of our previously developed polarizable embedding (PE) strategy. The PDE model is a focused computational approach in which a core region of the system studied is represented by a quantum-chemical method, whereas the environment is divided into two other regions: an inner and an outer region. Molecules belonging to the inner region are described by their exact densities...

  7. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  8. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    Kearney David

    2007-01-01

    We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.

  9. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  10. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
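
    As a concrete illustration of the FLOPS metric discussed above, a crude figure can be obtained by timing a dense matrix multiply, whose floating-point operation count is roughly 2*n^3; this NumPy sketch is illustrative only and deliberately ignores the memory, disk, and network parameters the author argues must also enter the benchmark.

        # Rough FLOPS estimate from a dense matmul (~2*n^3 floating-point ops).
        # Real benchmarks such as LINPACK control for far more than this sketch.
        import time
        import numpy as np

        def _timed_matmul(a, b) -> float:
            t0 = time.perf_counter()
            a @ b                                # threaded BLAS call
            return time.perf_counter() - t0

        def estimate_gflops(n: int = 2048, reps: int = 3) -> float:
            a, b = np.random.rand(n, n), np.random.rand(n, n)
            best = min(_timed_matmul(a, b) for _ in range(reps))
            return 2.0 * n ** 3 / best / 1e9

        print(f"~{estimate_gflops():.1f} GFLOP/s (dense matmul, threaded BLAS)")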

  11. Embedded defects

    International Nuclear Information System (INIS)

    Barriola, M.; Vachaspati, T.; Bucher, M.

    1994-01-01

    We give a prescription for embedding classical solutions and, in particular, topological defects in field theories which are invariant under symmetry groups that are not necessarily simple. After providing examples of embedded defects in field theories based on simple groups, we consider the electroweak model and show that it contains the Z string and a one-parameter family of strings called the W(α) string. It is argued that although the members of this family are gauge equivalent when considered in isolation, each member becomes physically distinct when multistring configurations are considered. We then turn to the issue of stability of embedded defects and demonstrate the instability of a large class of such solutions in the absence of bound states or condensates. The Z string is shown to be unstable for all values of the Higgs boson mass when θ_W = π/4. W strings are also shown to be unstable for a large range of parameters. Embedded monopoles suffer from the Brandt-Neri-Coleman instability. Finally, we connect the electroweak string solutions to the sphaleron.

  12. Linking Course-Embedded Assessment Measures and Performance on the Educational Testing Service Major Field Test in Business

    Science.gov (United States)

    Barboza, Gustavo A.; Pesek, James

    2012-01-01

    Assessment of the business curriculum and its learning goals and objectives has become a major field of interest for business schools. The exploratory results of the authors' model using a sample of 173 students show robust support for the hypothesis that high marks in course-embedded assessment on business-specific analytical skills positively…

  13. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  14. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  15. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  16. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology for Big Data analytics and services. A cloud computing system is often composed of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visualization-based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear, suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visualization-based approach is effective in identifying trends and anomalies of the systems.
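
    A small sketch of the underlying data handling, assuming per-node profile samples are already collected as multivariate time series; the distance measure and the greedy ordering below are generic stand-ins, not the similarity measure or layout used by the authors.

        # Compare per-node multivariate time series (CPU, memory, network, ...) and
        # order nodes so that similar behavior sits together and outliers stand out.
        import numpy as np

        def node_distances(profiles: np.ndarray) -> np.ndarray:
            """profiles: (nodes, timesteps, metrics) -> pairwise Euclidean distances."""
            z = (profiles - profiles.mean(axis=(0, 1))) / (profiles.std(axis=(0, 1)) + 1e-9)
            flat = z.reshape(z.shape[0], -1)
            return np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)

        def order_nodes(dist: np.ndarray) -> list:
            """Greedy nearest-neighbor ordering of nodes by behavioral similarity."""
            order, remaining = [0], set(range(1, len(dist)))
            while remaining:
                nxt = min(remaining, key=lambda j: dist[order[-1], j])
                order.append(nxt)
                remaining.remove(nxt)
            return order

        profiles = np.random.rand(8, 100, 4)      # 8 nodes, 100 samples, 4 metrics
        print(order_nodes(node_distances(profiles)))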

  17. Tensor Train Neighborhood Preserving Embedding

    Science.gov (United States)

    Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin

    2018-05-01

    In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate the trade-off among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that, compared to state-of-the-art tensor embedding methods, TTNPE achieves a superior trade-off in classification, computation, and dimensionality reduction on the MNIST handwritten digits and Weizmann face datasets.

  18. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
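
    The quoted factor of 10^12 follows directly from the figures in the abstract, as this short check shows:

        # Back-of-the-envelope check of the ~1e12 ops/s/W/cm^3 advantage quoted above.
        brain = 1e16 / (20 * 1200)                  # ops/s per W per cm^3
        machine = 1e15 / (3e6 * 1500 * 1e6)         # 3 MW, 1500 m^3 = 1.5e9 cm^3
        print(f"brain advantage ~ {brain / machine:.1e}")   # about 2e12, i.e. order 1e12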

  19. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  20. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  1. Computational modelling of expressive music performance in hexaphonic guitar

    OpenAIRE

    Siquier, Marc

    2017-01-01

    Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has been mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non-expressive music scores for polyphonic guitar. We treated the guitar as a hexaphonic instrument, obtaining ...

  2. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  3. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High-performance computing is taking shape as a powerful accelerator of the innovation process, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management, and the simulation of complex processes in a wide variety of industries. (Author)

  4. Performativity, Fabrication and Trust: Exploring Computer-Mediated Moderation

    Science.gov (United States)

    Clapham, Andrew

    2013-01-01

    Based on research conducted in an English secondary school, this paper explores computer-mediated moderation as a performative tool. The Module Assessment Meeting (MAM) was the moderation approach under investigation. I mobilise ethnographic data generated by a key informant, and triangulated with that from other actors in the setting, in order to…

  5. Running Interactive Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    shell prompt, which allows users to execute commands and scripts as they would on the login nodes. ... Login ... performed on the compute nodes rather than on login nodes. This page provides instructions and examples of ... start GUIs, etc., and the commands will execute on that node instead of on the login node. The -V option ...

  6. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    This work developed and simulated a mathematical model for a mobile wireless computational Grid architecture using networks of queuing theory. This was in order to evaluate the performance of the load-balancing three-tier hierarchical configuration. The throughput and resource utilization metrics were measured and the ...

  7. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    Science.gov (United States)

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
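
    The cost-effectiveness question reduces to comparing pay-per-use cloud cost with the amortized cost of in-house hardware; the sketch below makes that comparison explicit, but every number in it is a placeholder and none is taken from the cited study.

        # Hypothetical cost comparison: cloud pay-per-use vs. amortized in-house node.
        def cloud_cost(jobs: int, hours_per_job: float, price_per_hour: float) -> float:
            return jobs * hours_per_job * price_per_hour

        def inhouse_cost(jobs: int, hours_per_job: float, hardware: float,
                         lifetime_hours: float, power_kw: float, kwh_price: float) -> float:
            hours = jobs * hours_per_job
            return hardware * (hours / lifetime_hours) + hours * power_kw * kwh_price

        jobs, hrs = 200, 3.0                        # placeholder workload
        print("cloud   :", cloud_cost(jobs, hrs, price_per_hour=0.50))
        print("in-house:", inhouse_cost(jobs, hrs, hardware=8000.0,
                                        lifetime_hours=3 * 8760, power_kw=0.4, kwh_price=0.12))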

  8. Diverse Power Iteration Embeddings and Its Applications

    Energy Technology Data Exchange (ETDEWEB)

    Huang H.; Yoo S.; Yu, D.; Qin, H.

    2014-12-14

    Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings, which are more efficient but often far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g., clustering, anomaly detection and feature selection) and evaluating their performance improvements. The experimental results show that our proposed DPIE is more effective than popular spectral approximation methods and achieves quality similar to that of classic spectral embedding derived from eigen-decompositions. Moreover, it is extremely fast on big-data applications. For example, in terms of clustering results, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets while running more than 4000 times faster in a limited-memory environment.
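
    The DPIE algorithm itself is not reproduced in this record. The sketch below only illustrates the underlying idea of a power-iteration pseudo-embedding (repeatedly multiplying a random vector by the row-normalized affinity matrix and stopping early, as in power iteration clustering); the affinity matrix, iteration count, and normalization are illustrative choices, not DPIE.

```python
# Minimal sketch of a power-iteration pseudo-embedding (in the spirit of
# power iteration clustering); this is NOT the DPIE algorithm itself.
import numpy as np

def power_iteration_embedding(affinity: np.ndarray, n_iter: int = 10, seed: int = 0) -> np.ndarray:
    """Return a 1-D embedding obtained by repeated multiplication with the
    row-normalized affinity (random-walk) matrix, stopped early so that
    cluster structure remains visible."""
    rng = np.random.default_rng(seed)
    w = affinity / affinity.sum(axis=1, keepdims=True)   # row-stochastic matrix
    v = rng.random(affinity.shape[0])
    v /= np.abs(v).sum()
    for _ in range(n_iter):
        v = w @ v
        v /= np.abs(v).sum()                              # keep the vector L1-normalized
    return v

if __name__ == "__main__":
    # Two obvious clusters: points 0-2 are tightly connected, as are points 3-5.
    a = np.array([[0, 1, 1, 0.01, 0, 0],
                  [1, 0, 1, 0, 0.01, 0],
                  [1, 1, 0, 0, 0, 0.01],
                  [0.01, 0, 0, 0, 1, 1],
                  [0, 0.01, 0, 1, 0, 1],
                  [0, 0, 0.01, 1, 1, 0]], dtype=float)
    print(power_iteration_embedding(a))
```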

  9. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus-architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  10. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  11. Static Memory Deduplication for Performance Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gangyong Jia

    2017-04-01

    Full Text Available In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. A lack of sufficient memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.

  12. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. A lack of sufficient memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.
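
    The SMD implementation is not reproduced in these records. The sketch below only illustrates the core idea of offline deduplication, grouping memory pages by a hash of their contents so duplicate pages can be shared; the page size, hash function, and synthetic "code segment" are assumptions for illustration.

```python
# Minimal sketch of offline page deduplication by content hashing; the page
# size, fingerprinting scheme, and data layout are illustrative and not taken
# from the SMD paper.
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096  # bytes, the usual x86 page size

def find_duplicate_pages(memory_image: bytes) -> dict:
    """Group page numbers by the SHA-256 digest of their contents."""
    groups = defaultdict(list)
    for page_no in range(len(memory_image) // PAGE_SIZE):
        page = memory_image[page_no * PAGE_SIZE:(page_no + 1) * PAGE_SIZE]
        groups[hashlib.sha256(page).hexdigest()].append(page_no)
    # Keep only digests that occur more than once: those pages could be shared.
    return {digest: pages for digest, pages in groups.items() if len(pages) > 1}

if __name__ == "__main__":
    # Hypothetical "code segment" image: pages 0 and 2 are identical.
    image = b"\x90" * PAGE_SIZE + b"\xcc" * PAGE_SIZE + b"\x90" * PAGE_SIZE
    dupes = find_duplicate_pages(image)
    saved = sum(len(pages) - 1 for pages in dupes.values()) * PAGE_SIZE
    print(f"{len(dupes)} shared page group(s), {saved} bytes reclaimable")
```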

  13. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    Science.gov (United States)

    Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.

    2012-09-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness at 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening ~ 60%), which is acceptable compared to diffusers with 6 mm (~40%) and 12 mm (~80%) openings. The conclusion is that computational analysis methods are very useful in studying the performance of thermal energy storage (TES).

  14. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    International Nuclear Information System (INIS)

    Adib, M A H M; Ismail, A R; Kardigama, K; Salaam, H A; Ahmad, Z; Johari, N H; Anuar, Z; Azmi, N S N; Adnan, F

    2012-01-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness at 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening ∼ 60%), which is acceptable compared to diffusers with 6 mm (∼40%) and 12 mm (∼80%) openings. The conclusion is that computational analysis methods are very useful in studying the performance of thermal energy storage (TES).

  15. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high-performance computing resources that use new many-core architectures: machines such as Cori Phase 1 & 2, Theta and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  16. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.

  17. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

    model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization
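
    The record above is truncated and does not reproduce the paper's FMM-specific communication model. As a generic illustration of how such per-message time models are typically composed, the sketch below implements only the simplest latency-bandwidth ("postal") term; the alpha, beta, and message-size values are hypothetical, and the topology and multicore penalties mentioned in the abstract are omitted.

```python
# Minimal latency-bandwidth ("postal") communication-time sketch, not the
# paper's FMM-specific model; alpha, beta, and message sizes are hypothetical.

def message_time(bytes_sent: int, alpha: float, beta: float) -> float:
    """Predicted time for one message: latency plus size over bandwidth."""
    return alpha + bytes_sent * beta

def phase_time(messages: list, alpha: float, beta: float) -> float:
    """Predicted time for a communication phase of sequential messages."""
    return sum(message_time(m, alpha, beta) for m in messages)

if __name__ == "__main__":
    alpha = 2.0e-6           # per-message latency in seconds (hypothetical)
    beta = 1.0 / 5.0e9       # seconds per byte, i.e. 5 GB/s bandwidth (hypothetical)
    halo = [64 * 1024] * 26  # e.g. 26 neighbour exchanges of 64 KiB each
    print(f"predicted phase time: {phase_time(halo, alpha, beta) * 1e3:.3f} ms")
```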

  18. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of processors in the compute nodes and their memory also play an important role in the overall performance of a parallel application running on a supercomputer. DL...

  19. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  20. Performance measurements in 3D ideal magnetohydrodynamic stability computations

    International Nuclear Information System (INIS)

    Anderson, D.V.; Cooper, W.A.; Gruber, R.; Schwenn, U.

    1989-10-01

    The 3D ideal magnetohydrodynamic stability code TERPSICHORE has been designed to take advantage of the vector and microtasking capabilities of the latest CRAY computers. To keep the number of operations small, the most efficient algorithms have been applied in each computational step. The program investigates the stability properties of fusion reactor relevant plasma configurations confined by magnetic fields. For a typical 3D HELIAS configuration that has been considered, we obtain an overall performance in excess of 1 Gflops on an eight-processor CRAY-YMP machine. (author) 3 figs., 1 tab., 11 refs

  1. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  2. Nuclear forces and high-performance computing: The perfect match

    International Nuclear Information System (INIS)

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  3. Computer-assisted machine-to-human protocols for authentication of a RAM-based embedded system

    Science.gov (United States)

    Idrissa, Abdourhamane; Aubert, Alain; Fournel, Thierry

    2012-06-01

    Mobile readers used for optical identification of manufactured products can be tampered with in different ways: with a hardware Trojan or by powering up with fake configuration data. How can a human verifier authenticate the reader to be handled for goods verification? In this paper, two cryptographic protocols are proposed to achieve the verification of a RAM-based system through a trusted auxiliary machine. Such a system is assumed to be composed of a RAM memory and a secure block (in practice an FPGA or a configurable microcontroller). The system is connected to an input/output interface and contains a non-volatile memory where the configuration data are stored. Here, except for the secure block, all the blocks are exposed to attacks. At the registration stage of the first protocol, the MAC of both the secret and the configuration data, denoted M0, is computed by the mobile device without saving it and then transmitted to the user in a secure environment. At the verification stage, the reader, which is challenged with nonces, sends MACs/HMACs of both the nonces and the MAC M0 (to be recomputed), keyed with the secret. These responses are verified by the user through a trusted auxiliary MAC computer unit. Here the verifier does not need to track a (long) list of challenge/response pairs. This makes the protocol tractable for a human verifier, as their participation in the authentication process is increased. In counterpart, the secret has to be shared with the auxiliary unit. This constraint is relaxed in a second protocol directly derived from Fiat-Shamir's scheme.
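
    Following the description above, a minimal sketch of the first protocol's flow is given below: a MAC M0 over the configuration data is computed at registration, and at verification the reader answers a nonce challenge with an HMAC over the nonce and M0, which a trusted auxiliary unit recomputes. The hash primitive (HMAC-SHA256), message encoding, and function names are assumptions for illustration, not the paper's exact construction.

```python
# Minimal sketch of the registration/verification flow described above, using
# HMAC-SHA256. The primitive choice, message encoding, and helper names are
# illustrative assumptions, not the paper's exact construction.
import hmac
import hashlib
import os

def register(secret: bytes, config_data: bytes) -> bytes:
    """Registration: compute M0 = MAC(secret, configuration data)."""
    return hmac.new(secret, config_data, hashlib.sha256).digest()

def reader_response(secret: bytes, config_data: bytes, nonce: bytes) -> bytes:
    """Reader side: recompute M0 and answer the challenge with HMAC(secret, nonce || M0)."""
    m0 = register(secret, config_data)
    return hmac.new(secret, nonce + m0, hashlib.sha256).digest()

def verifier_check(secret: bytes, m0: bytes, nonce: bytes, response: bytes) -> bool:
    """Trusted auxiliary unit: recompute the expected response and compare."""
    expected = hmac.new(secret, nonce + m0, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    secret, config = b"shared-secret", b"reader configuration bitstream"
    m0 = register(secret, config)                  # done once, in a secure environment
    nonce = os.urandom(16)                         # fresh challenge from the verifier
    resp = reader_response(secret, config, nonce)  # computed by the reader under test
    print("reader authentic:", verifier_check(secret, m0, nonce, resp))
```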

  4. Computer-aided performance monitoring program at Diablo Canyon

    International Nuclear Information System (INIS)

    Nelson, T.; Glynn, R. III; Kessler, T.C.

    1992-01-01

    This paper describes the thermal performance monitoring program at Pacific Gas & Electric Company's (PG&E's) Diablo Canyon Nuclear Power Plant. The plant performance monitoring program at Diablo Canyon uses the THERMAC performance monitoring and analysis computer software provided by Expert-EASE Systems. THERMAC is used to collect performance data from the plant process computers, condition that data to adjust for measurement errors and missing data points, evaluate cycle and component-level performance, archive the data for trend analysis and generate performance reports. The current status of the program is that, after a fair amount of "tuning" of the basic "thermal kit" models provided with the initial THERMAC installation, we have successfully baselined both units to cycle isolation test data from previous reload cycles. Over the course of the past few months, we have accumulated enough data to generate meaningful performance trends and, as a result, have been able to use THERMAC to track a condenser fouling problem that was costing enough megawatts to attract corporate-level attention. Trends from THERMAC clearly related the megawatt loss to a steadily degrading condenser cleanliness factor and verified the subsequent gain in megawatts after the condenser was cleaned. In the future, we expect to rebaseline THERMAC to a beginning of cycle (BOC) data set and to use the program to help track feedwater nozzle fouling

  5. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need of computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the improvement of the utilisation efficiency is needed. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics executing benchmark applications on computing resources. Instead, the model-based approach implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental researches in the domain of elementary particle physics. The p...

  6. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, the methods exploiting multicore central processing units, such as the Message Passing Interface and OpenMP, are taken into account. The properties of the programming methods are experimentally verified in the application of a fast Fourier transform and a discrete cosine transform and compared with the possibilities of MATLAB's built-in functions and Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementation phase was compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. The optimal combination of computing methods in the signal processing domain and the implementation of new, fast routines are proposed as well.
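
    The MPI, OpenMP, MATLAB, and DSP implementations compared in the paper are not reproduced here. As a loose, hypothetical illustration of the kind of benchmark described, the sketch below times a batch of FFTs serially and with a pool of worker processes; signal counts, sizes, and worker counts are arbitrary, and on some platforms process start-up and data transfer may offset the gains for small problems.

```python
# Illustrative benchmark only: batched FFTs run serially and with a process
# pool. This stands in for the paper's MPI/OpenMP/DSP implementations.
import time
import numpy as np
from multiprocessing import Pool

def fft_magnitude(signal: np.ndarray) -> np.ndarray:
    return np.abs(np.fft.fft(signal))

def run_benchmark(n_signals: int = 64, n_samples: int = 1 << 16, workers: int = 4) -> None:
    rng = np.random.default_rng(0)
    signals = [rng.standard_normal(n_samples) for _ in range(n_signals)]

    t0 = time.perf_counter()
    serial = [fft_magnitude(s) for s in signals]        # one FFT after another
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(workers) as pool:                          # FFTs spread over worker processes
        parallel = pool.map(fft_magnitude, signals)
    t_parallel = time.perf_counter() - t0

    assert np.allclose(serial[0], parallel[0])
    print(f"serial: {t_serial:.3f} s, {workers} workers: {t_parallel:.3f} s")

if __name__ == "__main__":
    run_benchmark()
```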

  7. Heat exchanger performance analysis programs for the personal computer

    International Nuclear Information System (INIS)

    Putman, R.E.

    1992-01-01

    Numerous utility industry heat exchange calculations are repetitive and thus lend themselves to being performed on a personal computer. These programs may be regarded as engineering tools which, when put together, can form a Toolbox. However, the practicing Results Engineer in the utility industry desires programs that are not only robust and easy to use but that can also be used on both desktop and laptop PCs. The latter also offer the opportunity to take the computer into the plant or control room, and use it there to process test or operating data right on the spot. Most programs evolve through the needs which arise in the course of day-to-day work. This paper describes several of the more useful programs of this type and outlines some of the guidelines to be followed when designing personal computer programs for use by the practicing Results Engineer.

  8. FLUKA-LIVE - an embedded framework for enabling a computer to execute FLUKA under the control of a Linux OS

    International Nuclear Information System (INIS)

    Cohen, A.; Battistoni, G.; Mark, S.

    2008-01-01

    This paper describes a Linux-based OS framework for integrating the FLUKA Monte Carlo software (currently distributed only for Linux) into a CD-ROM, resulting in a complete environment for a scientist to edit, link and run FLUKA routines without the need to install a UNIX/Linux operating system. The building process includes generating from scratch a complete operating system distribution which will, when operative, build all necessary components for successful operation of the FLUKA software and libraries. Various source packages, as well as the latest kernel sources, are freely available from the Internet. These sources are used to create a functioning Linux system that integrates several core utilities in line with the main idea: enabling FLUKA to act as if it were running under a popular Linux distribution or even a proprietary UNIX workstation. On boot-up a file system will be created and the contents from the CD will be uncompressed and completely loaded into RAM, after which the presence of the CD is no longer necessary, and it could be removed for use on a second computer. The system can operate on any i386 PC as long as it can boot from a CD

  9. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs

  10. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach makes it possible to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte size, new processes or complex higher-order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
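
    The O'Mega byte code and the Fortran/C virtual machine from the paper are not shown in this record. The toy sketch below only illustrates the general idea of evaluating an expression from a compact instruction stream instead of compiling source code; the opcodes, layout, and example program are invented for illustration.

```python
# Toy stack-based virtual machine illustrating evaluation of an expression
# from "byte code". This is not the O'Mega byte code nor the paper's VM.

def run_vm(code, constants):
    """Execute a tiny instruction stream and return the top of the stack."""
    stack = []
    for op, arg in code:
        if op == "PUSH_CONST":
            stack.append(constants[arg])       # load a pre-stored constant
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack[-1]

if __name__ == "__main__":
    # Evaluate (2 + 3) * 4 from an instruction stream instead of compiled code.
    constants = [2.0, 3.0, 4.0]
    program = [("PUSH_CONST", 0), ("PUSH_CONST", 1), ("ADD", None),
               ("PUSH_CONST", 2), ("MUL", None)]
    print(run_vm(program, constants))   # 20.0
```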

  11. Embedded Hardware

    CERN Document Server

    Ganssle, Jack G; Eady, Fred; Edwards, Lewin; Katz, David J; Gentile, Rick

    2007-01-01

    The Newnes Know It All Series takes the best of what our authors have written to create hard-working desk references that will be an engineer's first port of call for key information, design techniques and rules of thumb. Guaranteed not to gather dust on a shelf!. Circuit design using microcontrollers is both a science and an art. This book covers it all. It details all of the essential theory and facts to help an engineer design a robust embedded system. Processors, memory, and the hot topic of interconnects (I/O) are completely covered. Our authors bring a wealth of experience and ideas; thi

  12. Performance monitoring for brain-computer-interface actions.

    Science.gov (United States)

    Schurger, Aaron; Gale, Steven; Gozel, Olivia; Blanke, Olaf

    2017-02-01

    When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task and thus could potentially be made based on motor processes, proprioceptive, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action - moving a cursor on a computer screen - without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training, but not during the experiment where the true final position of the cursor was only revealed after the subject had estimated where s/he thought it had ended up after 6s of BCI-based cursor control. During the first half of the experiment subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment subjects' judgements moved significantly closer to the true end position of the cursor, and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed

  14. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks - the elementary particles - which interact through the four fundamental forces. In the study of the structure of matter at this level, one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and the techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs

  15. A performance evaluation of the IBM 370/XT personal computer

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation of the IBM 370/XT personal computer is given. This evaluation focuses primarily on the use of the 370/XT for scientific and technical applications and applications development. A measurement of the capabilities of the 370/XT was performed by means of test programs which are presented. Also included is a review of facilities provided by the operating system (VM/PC), along with comments on the IBM 370/XT hardware configuration.

  16. Embedding potentials for excited states of embedded species

    International Nuclear Information System (INIS)

    Wesolowski, Tomasz A.

    2014-01-01

    Frozen-Density-Embedding Theory (FDET) is a formalism to obtain the upper bound of the ground-state energy of the total system and the corresponding embedded wavefunction by means of Euler-Lagrange equations [T. A. Wesolowski, Phys. Rev. A 77(1), 012504 (2008)]. FDET provides the expression for the embedding potential as a functional of the electron density of the embedded species, electron density of the environment, and the field generated by other charges in the environment. Under certain conditions, FDET leads to the exact ground-state energy and density of the whole system. Following Perdew-Levy theorem on stationary states of the ground-state energy functional, the other-than-ground-state stationary states of the FDET energy functional correspond to excited states. In the present work, we analyze such use of other-than-ground-state embedded wavefunctions obtained in practical calculations, i.e., when the FDET embedding potential is approximated. Three computational approaches based on FDET, that assure self-consistent excitation energy and embedded wavefunction dealing with the issue of orthogonality of embedded wavefunctions for different states in a different manner, are proposed and discussed

  17. Embedded Sensors and Controls to Improve Component Performance and Reliability - System Dynamics Modeling and Control System Design

    Energy Technology Data Exchange (ETDEWEB)

    Melin, Alexander M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kisner, Roger A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2013-10-01

    This report documents the current status of the modeling, control design, and embedded control research for the magnetic bearing canned rotor pump being used as a demonstration platform for deeply integrating instrumentation and controls (I&C) into nuclear power plant components. This pump is a highly inter-connected thermo/electro/mechanical system that requires an active control system to operate. Magnetic bearings are an inherently unstable system, and without active, moment-by-moment control the rotor would contact fixed surfaces in the pump, causing physical damage. This report details the modeling of the pump rotordynamics, fluid forces, electromagnetic properties of the protective cans, active magnetic bearings, power electronics, and interactions between the different dynamical models. The system stability of the unforced and controlled rotor is investigated analytically. Additionally, controllers are designed using proportional derivative (PD) control, proportional integral derivative (PID) control, voltage control, and linear quadratic regulator (LQR) control. Finally, a design optimization problem that joins the electrical, mechanical, magnetic, and control system design into one problem to balance the opposing needs of various design criteria using the embedded system approach is presented.
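
    The report's actual magnetic-bearing controllers are not reproduced in this record. As a minimal illustration of one of the controller families mentioned (PID), the sketch below implements a discrete-time PID loop driving a toy one-dimensional plant; all gains, the time step, and the plant model are illustrative assumptions.

```python
# Minimal discrete-time PID controller of the kind mentioned above (PD/PID/LQR
# designs). Gains, time step, and the toy first-order plant are illustrative;
# this is not the report's magnetic-bearing controller.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    dt = 0.001
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    position, velocity = 0.0, 0.0
    for _ in range(5000):                          # simulate 5 s of a toy 1-D plant
        force = pid.update(setpoint=1.0, measurement=position)
        velocity += (force - 0.5 * velocity) * dt  # unit mass with light damping
        position += velocity * dt
    print(f"position after 5 s: {position:.4f}")
```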

  18. Neuroanatomical correlates of brain-computer interface performance.

    Science.gov (United States)

    Kasahara, Kazumi; DaSalla, Charles Sayo; Honda, Manabu; Hanakawa, Takashi

    2015-04-15

    Brain-computer interfaces (BCIs) offer a potential means to replace or restore lost motor function. However, BCI performance varies considerably between users, the reasons for which are poorly understood. Here we investigated the relationship between sensorimotor rhythm (SMR)-based BCI performance and brain structure. Participants were instructed to control a computer cursor using right- and left-hand motor imagery, which primarily modulated their left- and right-hemispheric SMR powers, respectively. Although most participants were able to control the BCI with success rates significantly above chance level even at the first encounter, they also showed substantial inter-individual variability in BCI success rate. Participants also underwent T1-weighted three-dimensional structural magnetic resonance imaging (MRI). The MRI data were subjected to voxel-based morphometry using BCI success rate as an independent variable. We found that BCI performance correlated with gray matter volume of the supplementary motor area, supplementary somatosensory area, and dorsal premotor cortex. We suggest that SMR-based BCI performance is associated with development of non-primary somatosensory and motor areas. Advancing our understanding of BCI performance in relation to its neuroanatomical correlates may lead to better customization of BCIs based on individual brain structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) three-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays, it is common to use graphics devices for computationally intensive problems. There are several ways to use this extreme processing performance, but it has never been as easy to program these devices as it is now. The CUDA (Compute Unified Device Architecture) introduced by nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS. It is ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds up our software; the code runs 10 times faster in the critical calculation code segment. Although the GPU is a very powerful tool, it has a strongly parallel structure. This means that we have to create an algorithm that works on several processors without deadlock. Our code currently uses 256 threads and shared and constant on-chip memory instead of global memory, which is about 100 times slower. It is possible to implement the total algorithm on the GPU, therefore we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs with the same instructions

  20. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  1. Scintillator performance considerations for dedicated breast computed tomography

    Science.gov (United States)

    Vedantham, Srinivasan; Shi, Linxi; Karellas, Andrew

    2017-09-01

    Dedicated breast computed tomography (BCT) is an emerging clinical modality that can eliminate tissue superposition and has the potential for improved sensitivity and specificity for breast cancer detection and diagnosis. It is performed without physical compression of the breast. Most of the dedicated BCT systems use large-area detectors operating in cone-beam geometry and are referred to as cone-beam breast CT (CBBCT) systems. The large-area detectors in CBBCT systems are energy-integrating, indirect-type detectors employing a scintillator that converts x-ray photons to light, followed by detection of optical photons. A key consideration that determines the image quality achieved by such CBBCT systems is the choice of scintillator and its performance characteristics. In this work, a framework for analyzing the impact of the scintillator on CBBCT performance and its use for task-specific optimization of CBBCT imaging performance is described.

  2. Development of wireless brain computer interface with embedded multitask scheduling and its application on real-time driver's drowsiness detection and warning.

    Science.gov (United States)

    Lin, Chin-Teng; Chen, Yu-Chieh; Huang, Teng-Yi; Chiu, Tien-Ting; Ko, Li-Wei; Liang, Sheng-Fu; Hsieh, Hung-Yi; Hsu, Shang-Hwa; Duann, Jeng-Ren

    2008-05-01

    Biomedical signal monitoring systems have been rapidly advanced with electronic and information technologies in recent years. However, most of the existing physiological signal monitoring systems can only record the signals without the capability of automatic analysis. In this paper, we proposed a novel brain-computer interface (BCI) system that can acquire and analyze electroencephalogram (EEG) signals in real-time to monitor human physiological as well as cognitive states, and, in turn, provide warning signals to the users when needed. The BCI system consists of a four-channel biosignal acquisition/amplification module, a wireless transmission module, a dual-core signal processing unit, and a host system for display and storage. The embedded dual-core processing system with multitask scheduling capability was proposed to acquire and process the input EEG signals in real time. In addition, the wireless transmission module, which eliminates the inconvenience of wiring, can be switched between radio frequency (RF) and Bluetooth according to the transmission distance. Finally, the real-time EEG-based drowsiness monitoring and warning algorithms were implemented and integrated into the system to close the loop of the BCI system. The practical online testing demonstrates the feasibility of using the proposed system with the ability of real-time processing, automatic analysis, and online warning feedback in real-world operation and living environments.

  3. What is the value of embedding artificial emotional prosody in human computer interactions? Implications for theory and design in psychological science.

    Directory of Open Access Journals (Sweden)

    Rachel L. C. Mitchell

    2015-11-01

    Full Text Available In computerised technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today’s artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognised in human-computer interaction contexts. Current attempts to artificially synthesise emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesised nonverbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioural, socio-cognitive and neural levels.

  4. What Physicists Should Know About High Performance Computing - Circa 2002

    Science.gov (United States)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who either are considering, or who have already started down the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-cpu optimization, compilers, timing, numerical libraries, debugging and profiling tools and the emergence of Computational Grids.

  5. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  6. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, in May 2009 NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  7. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  8. Bootstrap embedding: An internally consistent fragment-based method

    Energy Technology Data Exchange (ETDEWEB)

    Welborn, Matthew; Tsuchimochi, Takashi; Van Voorhis, Troy [Department of Chemistry, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States)

    2016-08-21

    Strong correlation poses a difficult problem for electronic structure theory, with computational cost scaling quickly with system size. Fragment embedding is an attractive approach to this problem. By dividing a large complicated system into smaller manageable fragments “embedded” in an approximate description of the rest of the system, we can hope to ameliorate the steep cost of correlated calculations. While appealing, these methods often converge slowly with fragment size because of small errors at the boundary between fragment and bath. We describe a new electronic embedding method, dubbed “Bootstrap Embedding,” a self-consistent wavefunction-in-wavefunction embedding theory that uses overlapping fragments to improve the description of fragment edges. We apply this method to the one dimensional Hubbard model and a translationally asymmetric variant, and find that it performs very well for energies and populations. We find Bootstrap Embedding converges rapidly with embedded fragment size, overcoming the surface-area-to-volume-ratio error typical of many embedding methods. We anticipate that this method may lead to a low-scaling, high accuracy treatment of electron correlation in large molecular systems.

  9. Performance Analysis and Application of Three Different Computational Methods for Solar Heating System with Seasonal Water Tank Heat Storage

    Directory of Open Access Journals (Sweden)

    Dongliang Sun

    2013-01-01

    Full Text Available We analyze and compare three different computational methods for a solar heating system with seasonal water tank heat storage (SHS-SWTHS). These methods are an accurate numerical method, a temperature stratification method, and a uniform temperature method. The accurate numerical method can accurately predict the performance of the system, but it takes about 4 to 5 weeks, which is too long and cumbersome for the performance analysis of this system. The temperature stratification method obtains relatively accurate computational results and takes a relatively short computation time, about 2 to 3 hours. Therefore, this method is most suitable for the performance analysis of this system. The deviation of the computational results of the uniform temperature method is large, and the time consumed is similar to that of the temperature stratification method. Therefore, this method is not recommended herein. Based on the above analyses, the temperature stratification method is applied to analyze the influence of the embedded depth of the water tank, the thickness of the thermal insulation material, and the collection area on the performance of this system. The results will provide a design basis for the related demonstration projects.

  10. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  11. Performance of an extrapolation chamber in computed tomography standard beams

    International Nuclear Information System (INIS)

    Castro, Maysa C.; Silva, Natália F.; Caldas, Linda V.E.

    2017-01-01

    Among the medical uses of ionizing radiation, computed tomography (CT) diagnostic exams are responsible for the highest dose values to patients. The dosimetry procedure in CT scanner beams makes use of pencil ionization chambers with sensitive volume lengths of 10 cm. The aim of its calibration is to compare the values obtained with the instrument to be calibrated against a standard reference system. However, there is no primary standard system for this kind of radiation beam. Therefore, an extrapolation ionization chamber built at the Calibration Laboratory (LCI) was used to establish a CT primary standard. The objective of this work was to perform some characterization tests (short- and medium-term stabilities, saturation curve, polarity effect and ion collection efficiency) in the standard X-ray beams established for computed tomography at the LCI. (author)

  12. Performance of an extrapolation chamber in computed tomography standard beams

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Maysa C.; Silva, Natália F.; Caldas, Linda V.E., E-mail: mcastro@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-07-01

    Among the medical uses of ionizing radiation, computed tomography (CT) diagnostic exams are responsible for the highest dose values to patients. The dosimetry procedure in CT scanner beams makes use of pencil ionization chambers with sensitive volume lengths of 10 cm. The aim of its calibration is to compare the values obtained with the instrument to be calibrated against a standard reference system. However, there is no primary standard system for this kind of radiation beam. Therefore, an extrapolation ionization chamber built at the Calibration Laboratory (LCI) was used to establish a CT primary standard. The objective of this work was to perform some characterization tests (short- and medium-term stabilities, saturation curve, polarity effect and ion collection efficiency) in the standard X-ray beams established for computed tomography at the LCI. (author)

  13. Evaluating computer program performance on the CRAY-1

    International Nuclear Information System (INIS)

    Rudsinski, L.; Pieper, G.W.

    1979-01-01

    The Advanced Scientific Computers Project of Argonne's Applied Mathematics Division has two objectives: to evaluate supercomputers and to determine their effect on Argonne's computing workload. Initial efforts have focused on the CRAY-1, which is the only advanced computer currently available. Users from seven Argonne divisions executed test programs on the CRAY and made performance comparisons with the IBM 370/195 at Argonne. This report describes these experiences and discusses various techniques for improving run times on the CRAY. Direct translations of code from scalar to vector processor reduced running times as much as two-fold, and this reduction will become more pronounced as the CRAY compiler is developed. Further improvement (two- to ten-fold) was realized by making minor code changes to facilitate compiler recognition of the parallel and vector structure within the programs. Finally, extensive rewriting of the FORTRAN code structure reduced execution times dramatically, in three cases by a factor of more than 20; and even greater reduction should be possible by changing algorithms within a production code. It is concluded that the CRAY-1 would be of great benefit to Argonne researchers. Existing codes could be modified with relative ease to run significantly faster than on the 370/195. More important, the CRAY would permit scientists to investigate complex problems currently deemed infeasible on traditional scalar machines. Finally, an interface between the CRAY-1 and IBM computers such as the 370/195, scheduled by Cray Research for the first quarter of 1979, would considerably facilitate the task of integrating the CRAY into Argonne's Central Computing Facility. 13 tables
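
    The original CRAY-1 Fortran restructuring is not shown in the record. As a present-day analogy to the scalar-to-vector rewrites it describes, the sketch below expresses the same SAXPY computation as an explicit Python loop and as a single vectorized NumPy expression; array sizes and timings are illustrative only.

```python
# Present-day analogy (not the original CRAY-1 Fortran): the same computation
# written as an explicit scalar loop and as a whole-array (vector) expression,
# illustrating the kind of restructuring that lets vector hardware or a
# vectorized library process whole arrays at once.
import time
import numpy as np

def saxpy_scalar(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    out = np.empty_like(x)
    for i in range(len(x)):          # one element per iteration
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vector(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    return a * x + y                 # whole-array operation

if __name__ == "__main__":
    x = np.random.rand(1_000_000)
    y = np.random.rand(1_000_000)
    t0 = time.perf_counter(); s = saxpy_scalar(2.0, x, y); t_scalar = time.perf_counter() - t0
    t0 = time.perf_counter(); v = saxpy_vector(2.0, x, y); t_vector = time.perf_counter() - t0
    assert np.allclose(s, v)
    print(f"scalar loop: {t_scalar:.3f} s, vectorized: {t_vector:.4f} s")
```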

  14. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead leading to improved application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
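
    A minimal sketch of the kind of trade-off the study measures, assuming only the Python standard library (zlib and lzma) and a synthetic buffer standing in for real checkpoint data:

        import os, time, zlib, lzma

        # Synthetic checkpoint-like buffer: partly repetitive state, partly random payload.
        checkpoint = (b"state:" + b"\x00" * 4096) * 256 + os.urandom(256 * 1024)

        for name, compress in (("zlib", zlib.compress), ("lzma", lzma.compress)):
            t0 = time.perf_counter()
            packed = compress(checkpoint)
            dt = time.perf_counter() - t0
            print(f"{name}: ratio {len(checkpoint) / len(packed):.2f}x, {dt * 1e3:.1f} ms")

    Weighing compression ratio against compression time in this way is how one would judge whether a faster but weaker algorithm actually reduces CR overhead.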

  15. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We tested HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  16. Fast Performance Computing Model for Smart Distributed Power Systems

    Directory of Open Access Journals (Sweden)

    Umair Younas

    2017-06-01

    Full Text Available Plug-in Electric Vehicles (PEVs) are becoming a more prominent solution compared to fossil-fuel car technology due to their significant role in Greenhouse Gas (GHG) reduction, flexible storage, and ancillary service provision as a Distributed Generation (DG) resource in Vehicle-to-Grid (V2G) regulation mode. However, large-scale penetration of PEVs and the growing demand of energy-intensive Data Centers (DCs) bring undesirable higher load peaks in electricity demand; hence, they impose supply-demand imbalance and threaten the reliability of the wholesale and retail power market. In order to overcome the aforementioned challenges, the proposed research considers a smart Distributed Power System (DPS) comprising conventional sources, renewable energy, V2G regulation, and flexible energy storage resources. Moreover, price- and incentive-based Demand Response (DR) programs are implemented to sustain the balance between net demand and available generating resources in the DPS. In addition, we adapted a novel strategy to implement the computationally intensive jobs of the proposed DPS model, including incoming load profiles, V2G regulation, battery State of Charge (SOC) indication, and fast computation in a decision-based automated DR algorithm, using Fast Performance Computing resources of DCs. In response, the DPS provides economical and stable power to DCs under strict power quality constraints. Finally, the improved results are verified using a case study of ISO California integrated with hybrid generation.
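
    The following toy sketch is not the paper's model; it only illustrates, with invented numbers, the kind of peak-shaving balance a V2G-capable fleet with a state-of-charge (SOC) limit can provide inside a DR loop:

        import numpy as np

        # Hourly system load (MW) with an evening peak; all values are illustrative.
        demand = np.array([620, 600, 590, 600, 640, 700, 780, 850, 900, 920, 940, 950,
                           960, 950, 940, 950, 980, 1020, 1050, 1030, 980, 900, 780, 680.0])
        generation = 900.0                          # available generation (MW)
        fleet_power, fleet_energy = 120.0, 480.0    # aggregate PEV limits (MW, MWh)
        soc, net = 0.6 * fleet_energy, []

        for load in demand:
            imbalance = load - generation
            if imbalance > 0:                       # peak hour: discharge the fleet (V2G)
                p = min(fleet_power, imbalance, soc)
            else:                                   # off-peak hour: recharge the fleet
                p = -min(fleet_power, -imbalance, fleet_energy - soc)
            soc -= p                                # SOC falls when discharging, rises when charging
            net.append(load - p)
        print("peak load before/after V2G:", demand.max(), max(net))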

  17. Enhanced Performance of Nanowire-Based All-TiO2 Solar Cells using Subnanometer-Thick Atomic Layer Deposited ZnO Embedded Layer

    International Nuclear Information System (INIS)

    Ghobadi, Amir; Yavuz, Halil I.; Ulusoy, T. Gamze; Icli, K. Cagatay; Ozenbas, Macit; Okyay, Ali K.

    2015-01-01

    In this paper, the effect of angstrom-thick atomic layer deposited (ALD) ZnO embedded layer on photovoltaic (PV) performance of Nanowire-Based All-TiO2 solar cells has been systematically investigated. Our results indicate that by varying the thickness of ZnO layer the efficiency of the solar cell can be significantly changed. It is shown that the efficiency has its maximum for optimal thickness of 1 ALD cycle in which this ultrathin ZnO layer improves device performance through passivation of surface traps without hampering injection efficiency of photogenerated electrons. The mechanisms contributing to this unprecedented change in PV performance of the cell have been scrutinized and discussed

  18. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To receive real benefits from these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow water equations for inundation predictions is computationally demanding, it has become executable through the recent developments of high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
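
    As a rough one-dimensional analogue of the staggered-grid scheme described above (not the authors' code, and linear rather than the non-linear equations they solve), the update pattern looks like this:

        import numpy as np

        g, depth = 9.81, 4000.0                  # gravity (m/s^2), uniform depth (m)
        dx, dt, nx, nt = 5000.0, 1.0, 400, 3600  # grid spacing (m), time step (s), sizes
        x = np.arange(nx) * dx
        eta = np.exp(-((x - 200 * dx) / 5e4) ** 2)   # initial sea-surface hump (m)
        u = np.zeros(nx + 1)                          # velocities on cell faces

        for _ in range(nt):
            u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])     # momentum: surface gradient
            eta -= depth * dt / dx * (u[1:] - u[:-1])         # continuity: flow divergence

        print("max elevation after", nt * dt, "s:", eta.max())

    The nested grids in the paper refine this kind of update from 405 m down to 5 m near the coast; here a single uniform grid is used for brevity.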

  19. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  20. SnS{sub 2} nanoplates embedded in 3D interconnected graphene network as anode material with superior lithium storage performance

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Hongli [Hunan Key Laboratory of Micro-Nano Energy Materials and Devices, and School of Physics and Optoelectronics, Xiangtan University, Hunan 411105 (China); Qi, Xiang, E-mail: xqi@xtu.edu.cn [Hunan Key Laboratory of Micro-Nano Energy Materials and Devices, and School of Physics and Optoelectronics, Xiangtan University, Hunan 411105 (China); Han, Weijia; Ren, Long; Liu, Yundan [Hunan Key Laboratory of Micro-Nano Energy Materials and Devices, and School of Physics and Optoelectronics, Xiangtan University, Hunan 411105 (China); Wang, Xingyan, E-mail: xywangxtu@163.com [Department of Environmental Science and Engineering, College of Chemical Engineering, Xiangtan University, Xiangtan 411105 (China); Zhong, Jianxin [Hunan Key Laboratory of Micro-Nano Energy Materials and Devices, and School of Physics and Optoelectronics, Xiangtan University, Hunan 411105 (China)

    2015-11-15

    Graphical abstract: Schematic formation process of 3D interconnected SnS{sub 2}/graphene composite, and its superior lithium storage performance. - Highlights: • 3D graphene network embedded with SnS{sub 2} is synthesized by a facile two-step method. • This structure produces a synergistic effect between graphene and SnS{sub 2} nanoplates. • High capacity, excellent cycle performance and good rate capability are achieved. - Abstract: Three-dimensional (3D) interconnected graphene network embedded with uniformly distributed tin disulfide (SnS{sub 2}) nanoplates was prepared by a facile two-step method. The microstructures and morphologies of the SnS{sub 2}/graphene nanocomposite (SSG) are experimentally confirmed by X-ray diffraction (XRD), Raman spectroscopy, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Using the as-prepared SSG as an anode material for lithium batteries, its electrochemical performances were investigated by cyclic voltammograms (CV), charge/discharge tests, galvanostatic cycling performance and AC impedance spectroscopy. The results demonstrate that the as-prepared SSG exhibits excellent cycling performance with a capacity of 1060 mAh g{sup −1} retained after 200 charge/discharge cycles at a current density of 100 mA g{sup −1}, also a superior rate capability of 670 mAh g{sup −1} even at such a high current density of 2000 mA g{sup −1}. This favorable performance can be attributed to the unique 3D interconnected architecture with great electro-conductivity and its intimate contact with SnS{sub 2}. Our results indicate a potential application of this novel 3D SnS{sub 2}/graphene nanocomposite in lithium-ion battery.

  1. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Full Text Available Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while less calculation-intensive components usually involved in building the user interface are written in Java. The two types of software modules have been glued together using the Java native interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.

  2. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Tryggvason, T.

    1998-01-01

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance simulation program requires a detailed description of the energy flow in the air movement which can be obtained by a CFD program. The paper describes an energy consumption calculation in a large building, where the building energy simulation program is modified by CFD predictions of the flow between three zones connected by open areas with pressure and buoyancy driven air flow. The two programs are interconnected in an iterative procedure. The paper shows also an evaluation of the air quality in the main area of the buildings based on CFD predictions. It is shown that an interconnection between a CFD...

  3. Small private key MQPKS on an embedded microprocessor.

    Science.gov (United States)

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-03-19

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block-cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19%, respectively, compared to the previous results in CHES2012.
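
    A conceptual sketch of the key-size trick described above: store only a short seed and regenerate the bulky private key material on the fly. The paper uses a block-cipher PRNG backed by the hardware AES accelerator; the hash-counter construction below is only a stand-in, and the sizes are illustrative.

        import hashlib

        def expand_key(seed: bytes, n_bytes: int) -> bytes:
            """Deterministically expand a short stored seed into key material."""
            out, counter = bytearray(), 0
            while len(out) < n_bytes:
                out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
                counter += 1
            return bytes(out[:n_bytes])

        seed = b"\x01" * 16                        # 16-byte secret actually stored
        private_key = expand_key(seed, 8192)       # regenerated whenever a signature is computed
        print(len(seed), "bytes stored instead of", len(private_key))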

  4. Small Private Key MQPKS on an Embedded Microprocessor

    Directory of Open Access Journals (Sweden)

    Hwajeong Seo

    2014-03-01

    Full Text Available Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block-cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19%, respectively, compared to the previous results in CHES2012.

  5. Small Private Key PKS on an Embedded Microprocessor

    Science.gov (United States)

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-01-01

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block-cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19%, respectively, compared to the previous results in CHES2012. PMID:24651722

  6. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  7. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  8. Performance of scientific computing platforms with MCNP4B

    International Nuclear Information System (INIS)

    McLaughlin, H.E.; Hendricks, J.S.

    1998-01-01

    Several computing platforms were evaluated with the MCNP4B Monte Carlo radiation transport code. The DEC AlphaStation 500/500 was the fastest to run MCNP4B. Compared to the HP 9000-735, the fastest platform 4 yr ago, the AlphaStation is 335% faster, the HP C180 is 133% faster, the SGI Origin 2000 is 82% faster, the Cray T94/4128 is 1% faster, the IBM RS/6000-590 is 93% as fast, the DEC 3000/600 is 81% as fast, the Sun Sparc20 is 57% as fast, the Cray YMP 8/8128 is 57% as fast, the Sun Sparc5 is 33% as fast, and the Sun Sparc2 is 13% as fast. All results presented are reproducible and allow for comparison to computer platforms not included in this study. Timing studies are seen to be very problem dependent. The performance gains resulting from advances in software were also investigated. Various compilers and operating systems were seen to have a modest impact on performance, whereas hardware improvements have resulted in a factor of 4 improvement. MCNP4B also ran approximately as fast as MCNP4A

  9. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Full Text Available Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  10. Play for Performance: Using Computer Games to Improve Motivation and Test-Taking Performance

    Science.gov (United States)

    Dennis, Alan R.; Bhagwatwar, Akshay; Minas, Randall K.

    2013-01-01

    The importance of testing, especially certification and high-stakes testing, has increased substantially over the past decade. Building on the "serious gaming" literature and the psychology "priming" literature, we developed a computer game designed to improve test-taking performance using psychological priming. The game primed…

  11. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  12. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, {open_quotes}The DOE Program in HPCC{close_quotes}), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  13. STFTP: Secure TFTP Protocol for Embedded Multi-Agent Systems Communication

    Directory of Open Access Journals (Sweden)

    ZAGAR, D.

    2013-05-01

    Full Text Available Today's embedded systems have evolved into multipurpose devices moving towards an embedded multi-agent system (MAS) infrastructure. With the involvement of MAS in embedded systems, one remaining issue is establishing communication between agents in low computational power and low memory embedded systems without a present Embedded Operating System (EOS). One solution is the extension of an outdated Trivial File Transfer Protocol (TFTP). The main advantage of using TFTP in embedded systems is its easy implementation. However, the problem at hand is the overall lack of security mechanisms in TFTP. This paper proposes an extension to the existing TFTP in the form of added security mechanisms: STFTP. The authentication is proposed using the Digest Access Authentication process, whereas the data encryption can be performed by various cryptographic algorithms. The proposal is experimentally tested using two embedded systems based on micro-controller architecture. Communication is analyzed for authentication, data rate and transfer time versus various data encryption ciphers and file sizes. STFTP results in an expected drop in performance, which is in the range of similar encryption algorithms. The system could be improved by using embedded systems of higher computational power or by the use of hardware encryption modules.
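
    For the authentication step, a Digest Access Authentication response is typically derived as in the sketch below (RFC 2617 style, without qop); the peer name, realm and nonce are made-up values and are not taken from the paper.

        import hashlib

        def md5_hex(s: str) -> str:
            return hashlib.md5(s.encode()).hexdigest()

        def digest_response(user, realm, password, method, uri, nonce):
            """Digest response = MD5(MD5(user:realm:password):nonce:MD5(method:uri))."""
            ha1 = md5_hex(f"{user}:{realm}:{password}")
            ha2 = md5_hex(f"{method}:{uri}")
            return md5_hex(f"{ha1}:{nonce}:{ha2}")

        print(digest_response("node-17", "stftp", "s3cret", "WRQ", "/firmware.bin", "a94f"))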

  14. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  15. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed‐language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM’s High‐Performance Compiler for Java (HPCJ) and IceT’s metacomputing environment.

  16. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  17. Performance characteristics of a Kodak computed radiography system.

    Science.gov (United States)

    Bradford, C D; Peppler, W W; Dobbins, J T

    1999-01-01

    The performance characteristics of a photostimulable phosphor based computed radiographic (CR) system were studied. The modulation transfer function (MTF), noise power spectra (NPS), and detective quantum efficiency (DQE) of the Kodak Digital Science computed radiography (CR) system (Eastman Kodak Co.-model 400) were measured and compared to previously published results of a Fuji based CR system (Philips Medical Systems-PCR model 7000). To maximize comparability, the same measurement techniques and analysis methods were used. The DQE at four exposure levels (30, 3, 0.3, 0.03 mR) and two plate types (standard and high resolution) were calculated from the NPS and MTF measurements. The NPS was determined from two-dimensional Fourier analysis of uniformly exposed plates. The presampling MTF was determined from the Fourier transform (FT) of the system's finely sampled line spread function (LSF) as produced by a narrow slit. A comparison of the slit type ("beveled edge" versus "straight edge") and its effect on the resulting MTF measurements was also performed. The results show that both systems are comparable in resolution performance. The noise power studies indicated a higher level of noise for the Kodak images (approximately 20% at the low exposure levels and 40%-70% at higher exposure levels). Within the clinically relevant exposure range (0.3-3 mR), the resulting DQE for the Kodak plates ranged between 20%-50% lower than for the corresponding Fuji plates. Measurements of the presampling MTF with the two slit types have shown that a correction factor can be applied to compensate for transmission through the relief edges.
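
    A brief sketch of the presampling MTF computation described above, using a synthetic Gaussian line spread function in place of measured slit data; the sampling pitch and LSF width are assumptions chosen only for illustration.

        import numpy as np

        dx = 0.01                                    # LSF sampling pitch (mm), assumed
        x = np.arange(-5, 5, dx)
        lsf = np.exp(-0.5 * (x / 0.1) ** 2)          # synthetic LSF, sigma = 0.1 mm
        lsf /= lsf.sum()                             # normalize to unit area

        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                                # MTF(0) = 1
        freq = np.fft.rfftfreq(lsf.size, d=dx)       # spatial frequency (cycles/mm)
        print("MTF at 2.5 cycles/mm:", round(float(np.interp(2.5, freq, mtf)), 3))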

  18. Noncredible cognitive performance at clinical evaluation of adult ADHD : An embedded validity indicator in a visuospatial working memory test

    NARCIS (Netherlands)

    Fuermaier, Anselm B M; Tucha, Oliver; Koerts, Janneke; Lange, Klaus W; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Lara

    2017-01-01

    The assessment of performance validity is an essential part of the neuropsychological evaluation of adults with attention-deficit/hyperactivity disorder (ADHD). Most available tools, however, are inaccurate regarding the identification of noncredible performance. This study describes the development

  19. Parametric embedding for class visualization.

    Science.gov (United States)

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
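
    A compact sketch of the objective described above, under the stated assumption of equal-covariance Gaussians in the embedding space: for each sample, the embedding-space class posteriors q are compared with the given posteriors P through a Kullback-Leibler divergence.

        import numpy as np

        def pe_objective(X, Phi, P):
            """Sum over samples of KL(P(c|n) || q(c|n)), with q from unit-covariance
            Gaussians centred at class coordinates Phi in the embedding space.
            X: (N, d) sample coordinates, Phi: (C, d) class coordinates, P: (N, C)."""
            d2 = ((X[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)
            logq = -0.5 * d2
            logq -= np.log(np.exp(logq).sum(axis=1, keepdims=True))   # normalize per sample
            return float(np.sum(P * (np.log(P + 1e-12) - logq)))

        rng = np.random.default_rng(0)
        X, Phi = rng.normal(size=(100, 2)), rng.normal(size=(3, 2))
        P = rng.dirichlet(np.ones(3), size=100)
        print(pe_objective(X, Phi, P))

    Minimizing this quantity with respect to both X and Phi (for instance by gradient descent) yields the visualization coordinates; the complexity scales with the number of objects times the number of classes because d2 is an N-by-C array.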

  20. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  1. High performance computing network for cloud environment using simulators

    OpenAIRE

    Singh, N. Ajith; Hemalatha, M.

    2012-01-01

    Cloud computing is the next generation computing. Adopting the cloud computing is like signing up new form of a website. The GUI which controls the cloud computing make is directly control the hardware resource and your application. The difficulty part in cloud computing is to deploy in real environment. Its' difficult to know the exact cost and it's requirement until and unless we buy the service not only that whether it will support the existing application which is available on traditional...

  2. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
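
    A schematic of the slab decomposition with nearest-neighbour halo exchange described above; it assumes mpi4py and SciPy are available and uses a random volume with periodic neighbours purely for illustration, rather than the Blue Gene/L torus mapping of the paper.

        import numpy as np
        from mpi4py import MPI
        from scipy.ndimage import median_filter

        comm = MPI.COMM_WORLD
        rank, size = comm.rank, comm.size
        halo = 1                                         # one-voxel halo for a 3x3x3 median

        # Each rank owns a z-slab of the volume plus one halo plane on each side.
        local = np.random.rand(64 + 2 * halo, 128, 128)
        up, down = (rank - 1) % size, (rank + 1) % size
        comm.Sendrecv(local[halo],      dest=up,   recvbuf=local[-1], source=down)
        comm.Sendrecv(local[-1 - halo], dest=down, recvbuf=local[0],  source=up)

        filtered = median_filter(local, size=3)[halo:-halo]   # drop halos after filtering
        print(rank, filtered.shape)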

  3. Improving Software Performance in the Compute Unified Device Architecture

    Directory of Open Access Journals (Sweden)

    Alexandru PIRJAN

    2010-01-01

    Full Text Available This paper analyzes several aspects regarding the improvement of software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit’s (GPU’s) memory management through transpose kernels. We also benchmark and evaluate the performance for progressively optimizing a transposing matrix application in CUDA. One particular interest was to research how well the optimization techniques, applied to software applications written in CUDA, scale to the latest generation of general-purpose graphics processing units (GPGPU), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately, there has been a lot of interest in the literature for this type of optimization analysis, but none of the works so far (to our best knowledge) tried to validate if the optimizations can apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales to these software performance improving techniques.

  4. Trends in high-performance computing for engineering calculations.

    Science.gov (United States)

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. Power/energy use cases for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  6. Topology and computational performance of attractor neural networks

    International Nuclear Information System (INIS)

    McGraw, Patrick N.; Menzinger, Michael

    2003-01-01

    To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural nets with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. However, in the scale-free case retrieval errors are not distributed uniformly among the nodes. The portion of a pattern encoded by the subset of highly connected nodes is more robust and efficiently recognized than the rest of the pattern. The scale-free network thus achieves a very strong partial recognition. The implications of these findings for brain function and social dynamics are suggestive
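
    To make the setup concrete, here is a toy diluted Hopfield network in which the Hebbian weights are kept only on the links of a chosen topology (a random graph below; a lattice or scale-free mask could be substituted); the sizes and noise level are arbitrary and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        N, P, K = 200, 5, 20                             # neurons, patterns, links per node
        patterns = rng.choice([-1, 1], size=(P, N))

        mask = np.zeros((N, N), dtype=bool)              # random topology: K out-links per node
        for i in range(N):
            mask[i, rng.choice(N, K, replace=False)] = True
        np.fill_diagonal(mask, False)
        W = (patterns.T @ patterns) / N * mask           # Hebbian weights kept on existing links only

        state = patterns[0] * rng.choice([1, -1], N, p=[0.9, 0.1])   # cue with 10% flipped bits
        for _ in range(20):                                          # synchronous updates
            state = np.where(W @ state >= 0, 1, -1)
        print("overlap with stored pattern:", (state @ patterns[0]) / N)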

  7. Modeling and optimization of parallel and distributed embedded systems

    CERN Document Server

    Munir, Arslan; Ranka, Sanjay

    2016-01-01

    This book introduces the state-of-the-art in research in parallel and distributed embedded systems, which have been enabled by developments in silicon technology, micro-electro-mechanical systems (MEMS), wireless communications, computer networking, and digital electronics. These systems have diverse applications in domains including military and defense, medical, automotive, and unmanned autonomous vehicles. The emphasis of the book is on the modeling and optimization of emerging parallel and distributed embedded systems in relation to the three key design metrics of performance, power and dependability.

  8. Integrated Optical Interconnect Architectures for Embedded Systems

    CERN Document Server

    Nicolescu, Gabriela

    2013-01-01

    This book provides a broad overview of current research in optical interconnect technologies and architectures. Introductory chapters on high-performance computing and the associated issues in conventional interconnect architectures, and on the fundamental building blocks for integrated optical interconnect, provide the foundations for the bulk of the book which brings together leading experts in the field of optical interconnect architectures for data communication. Particular emphasis is given to the ways in which the photonic components are assembled into architectures to address the needs of data-intensive on-chip communication, and to the performance evaluation of such architectures for specific applications.   Provides state-of-the-art research on the use of optical interconnects in Embedded Systems; Begins with coverage of the basics for high-performance computing and optical interconnect; Includes a variety of on-chip optical communication topologies; Features coverage of system integration and opti...

  9. A collaborative brain-computer interface for improving human performance.

    Directory of Open Access Journals (Sweden)

    Yijun Wang

    Full Text Available Electroencephalogram (EEG) based brain-computer interfaces (BCI) have been studied since the 1970s. Currently, the main focus of BCI research lies on the clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, the BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practices due to technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCI applied to the EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1) Event-related potentials (ERP) averaging, (2) Feature concatenating, and (3) Voting. In a demonstration system using the Voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right) was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the numbers of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision of reaching direction could be made around 100-250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC), which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse brain activities of a group of people to improve the overall performance of natural human behavior.
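
    As a back-of-the-envelope check of the Voting method's scaling (assuming, unlike real EEG, that subjects' errors are independent), majority voting over per-subject classifiers can be simulated directly:

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, p_single = 10000, 0.66          # single-user accuracy from the abstract
        truth = rng.integers(0, 2, n_trials)      # left vs. right reaching targets

        def group_accuracy(n_subjects):
            correct = rng.random((n_subjects, n_trials)) < p_single
            votes = np.where(correct, truth, 1 - truth)
            majority = (votes.mean(axis=0) > 0.5).astype(int)   # ties fall to class 0
            return (majority == truth).mean()

        for n in (1, 5, 10, 15, 20):
            print(n, "subjects:", round(group_accuracy(n), 3))

    Because real multi-subject EEG responses are correlated, the simulated gains overstate what the paper reports, but the trend with group size is the same.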

  10. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  11. The contribution of high-performance computing and modelling for industrial development

    CSIR Research Space (South Africa)

    Sithole, Happy

    2017-10-01

    Full Text Available The contribution of high-performance computing and modelling for industrial development, by Dr Happy Sithole and Dr Onno Ubbink. Strategic context: high-performance computing (HPC) combined with machine learning and artificial intelligence present opportunities to non...

  12. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.

  13. Software for Embedded Control Systems

    NARCIS (Netherlands)

    Broenink, Johannes F.; Hilderink, G.H.; Jovanovic, D.S.

    2001-01-01

    The research of our team deals with the realization of control schemes on digital computers. As such the emphasis is on embedded control software implementation. Applications are in the field of mechatronic devices, using a mechatronic design approach (the integrated and optimal design of a

  14. Design Example of Useful Memory Latency for Developing a Hazard Preventive Pipeline High-Performance Embedded-Microprocessor

    Directory of Open Access Journals (Sweden)

    Ching-Hwa Cheng

    2013-01-01

    The existence of structural, control, and data hazards presents a major challenge in designing an advanced pipeline/superscalar microprocessor. An efficient memory hierarchy (cache-RAM-disk) design greatly enhances the microprocessor's performance. However, there are complex relationships among the memory hierarchy and the functional units in the microprocessor. Most past architectural design simulations focus on the instruction hazard detection/prevention scheme from the viewpoint of the functional units. This paper emphasizes that additional inboard memory can be well utilized to handle hazardous conditions. When an instruction encounters a hazard, the memory latency can be exploited to prevent the performance degradation caused by the hazard prevention mechanism. With the proposed technique, a better architectural design can be rapidly validated on an FPGA at the start of the design stage. The simulation results in this paper show that the proposed methodology achieves better performance and lower power consumption than the conventional hazard prevention technique.

  15. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  16. Embedded Fiber Optic Sensors for Integral Armor

    National Research Council Canada - National Science Library

    Fink, Bruce

    2000-01-01

    This report describes the work performed with Production Products Manufacturing & Sales (PPMS), Inc., under the "Liquid Molded Composite Armor Smart Structures Using Embedded Sensors" Small Business Innovative Research...

  17. Conceptualizing Embedded Configuration

    DEFF Research Database (Denmark)

    Oddsson, Gudmundur Valur; Hvam, Lars; Lysgaard, Ole

    2006-01-01

    … and services. The general idea can be named embedded configuration. In this article we intend to conceptualize embedded configuration: what it is and what it is not. The difference between embedded configuration, sales configuration and embedded software is explained. We will look at what is needed to make embedded configuration systems, including the requirements placed on product modelling techniques. An example with consumer electronics will illuminate the elements of embedded configuration in settings that most readers can relate to. The question of where embedded configuration would be relevant is discussed, and the current...

  18. Reconfigurable Computing

    CERN Document Server

    Cardoso, Joao MP

    2011-01-01

    As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable Computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the postfabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a comp

  19. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  20. Teaching Embedded System Concepts for Technological Literacy

    Science.gov (United States)

    Winzker, M.; Schwandt, A.

    2011-01-01

    A basic understanding of technology is recognized as important knowledge even for students not connected with engineering and computer science. This paper shows that embedded system concepts can be taught in a technological literacy course. An embedded system teaching block that has been used in an electronics module for non-engineers is…

  1. Hardware Architecture of Polyphase Filter Banks Performing Embedded Resampling for Software-Defined Radio Front-Ends

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Le Moullec, Yannick; Koch, Peter

    2012-01-01

    In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front-ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M data-load's time period. We present a load...

  2. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We need to develop capabilities to handle the large volumes of data generated by power system components such as PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to obtain meaningful information in real time and ensure a secure, reliable and stable power grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on future research directions for high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy.

  3. Tablet computer enhanced training improves internal medicine exam performance.

    Science.gov (United States)

    Baumgart, Daniel C; Wende, Ilja; Grittner, Ulrike

    2017-01-01

    Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. In this single-center, prospective, controlled study, final-year medical students and medical residents doing an inpatient service rotation were alternatingly assigned to either the active test group (Tablet PC with a custom multimedia education software package) or the traditional education (control) group. All completed an extensive questionnaire to collect socio-demographic data and to evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Data from 55 participants (tablet n = 24, controls n = 31; 36.4% male; median age 28 years; 65.5% students) were evaluable. The mean MKSAP® score improved in the Tablet PC group (score Δ +8, SD 11) but not in the control group (score Δ −7, SD 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results than the control group, supporting the addition of tablet computer enhanced learning to their respective training programs.

  4. Current configuration and performance of the TFTR computer system

    International Nuclear Information System (INIS)

    Sauthoff, N.R.; Barnes, D.J.; Daniels, R.; Davis, S.; Reid, A.; Snyder, T.; Oliaro, G.; Stark, W.; Thompson, J.R. Jr.

    1986-01-01

    Developments in the TFTR (Tokamak Fusion Test Reactor) computer support system since its startup phases are described. The early emphasis on tokamak process control has been augmented by improved physics data handling, both on-line and off-line. Data acquisition volume and rate have been increased, and data are transmitted automatically to a new VAX-based off-line data reduction system. The number of interface points has increased dramatically, as has the number of man-machine interfaces. Graphics system performance has been accelerated by the introduction of parallelism, and new features such as shadowing and device independence have been added. To support multicycle operation for neutral beam conditioning and independence, the program control system has been generalized. A status and alarm system, including calculated variables, is in the installation phase. System reliability has been enhanced both by the redesign of weaker components and by the installation of a system status monitor. Development productivity has been enhanced by the addition of tools

  5. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  6. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  7. Performance Comparison of GPU, DSP and FPGA implementations of image processing and computer vision algorithms in embedded systems

    OpenAIRE

    Fykse, Egil

    2013-01-01

    The objective of this thesis is to compare the suitability of FPGAs, GPUs and DSPs for digital image processing applications. Normalized cross-correlation is used as a benchmark, because this algorithm includes convolution, a common operation in image processing and elsewhere. Normalized cross-correlation is a template matching algorithm that is used to locate predefined objects in a scene image. Because the throughput of DSPs is low for efficient calculation of normalized cross-correlation, ...

  8. High electrochemical performance of RuO₂–Fe₂O₃ nanoparticles embedded in ordered mesoporous carbon as a supercapacitor electrode material

    International Nuclear Information System (INIS)

    Xiang, Dong; Yin, Longwei; Wang, Chenxiang; Zhang, Luyuan

    2016-01-01

    The electrode materials, RuO₂ or RuO₂–Fe₂O₃ nanoparticles embedded in OMC (ordered mesoporous carbon), are prepared by impregnation and in situ heating. The mesoporous structure optimizes the electron and proton conducting pathways, leading to enhanced capacitive performance of the composite materials. The average nanoparticle size of RuO₂ and RuO₂–Fe₂O₃ is 2.54 and 1.96 nm, respectively. The fine RuO₂–Fe₂O₃ nanoparticles are dispersed evenly in the pore channel walls of the two-dimensional mesoporous carbon without blocking the mesoporous channels, and they provide a higher specific surface area, a larger pore volume, a proper pore size and a small charge transfer impedance value. The specific electrochemical capacitance of RuO₂–Fe₂O₃/OMC tested in acid electrolyte (H₂SO₄) is measured to be as high as 1668 F g⁻¹, which is higher than that of RuO₂/OMC. Meanwhile, the RuO₂–Fe₂O₃/OMC supercapacitor shows good cycling performance of 93% capacitance retention over 3000 cycles, better reversibility, a higher energy density (134 Wh kg⁻¹) and power density (4000 W kg⁻¹). The RuO₂–Fe₂O₃/OMC composite electrode, which combines double-layer capacitance with pseudo-capacitance, is proved to be suitable as an ideal high-performance electrode material for hybrid supercapacitor applications. - Highlights: • The RuO₂–Fe₂O₃/OMC nanocomposites are prepared by impregnation and in situ heating. • The fine RuO₂–Fe₂O₃ nanoparticles are distributed in the pore channel walls of the OMC. • A reversible redox reaction mechanism of RuO₂–Fe₂O₃/OMC in acid solutions is discussed. • RuO₂–Fe₂O₃ nanoparticles embedded in OMC show a higher supercapacitive performance.

  9. Big Data and High-Performance Computing in Global Seismology

    Science.gov (United States)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer-duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving imbalanced ray coverage as a result of the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which are mainly coming from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data

  10. Performance studies of four-dimensional cone beam computed tomography

    International Nuclear Information System (INIS)

    Qi Zhihua; Chen Guanghong

    2011-01-01

    Four-dimensional cone beam computed tomography (4DCBCT) has been proposed to characterize the breathing motion of tumors before radiotherapy treatment. However, when the acquired cone beam projection data are retrospectively gated into several respiratory phases, the available data to reconstruct each phase is under-sampled and thus causes streaking artifacts in the reconstructed images. To solve the under-sampling problem and improve image quality in 4DCBCT, various methods have been developed. This paper presents performance studies of three different 4DCBCT methods based on different reconstruction algorithms. The aims of this paper are to study (1) the relationship between the accuracy of the extracted motion trajectories and the data acquisition time of a 4DCBCT scan and (2) the relationship between the accuracy of the extracted motion trajectories and the number of phase bins used to sort projection data. These aims will be applied to three different 4DCBCT methods: conventional filtered backprojection reconstruction (FBP), FBP with McKinnon-Bates correction (MB) and prior image constrained compressed sensing (PICCS) reconstruction. A hybrid phantom consisting of realistic chest anatomy and a moving elliptical object with known 3D motion trajectories was constructed by superimposing the analytical projection data of the moving object to the simulated projection data from a chest CT volume dataset. CBCT scans with gantry rotation times from 1 to 4 min were simulated, and the generated projection data were sorted into 5, 10 and 20 phase bins before different methods were used to reconstruct 4D images. The motion trajectories of the moving object were extracted using a fast free-form deformable registration algorithm. The root mean square errors (RMSE) of the extracted motion trajectories were evaluated for all simulated cases to quantitatively study the performance. The results demonstrate (1) longer acquisition times result in more accurate motion delineation
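
For reference, the trajectory error metric quoted above is the standard root mean square error; in a generic form (assuming the extracted and ground-truth positions of the moving object are compared at N sample points), it reads

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert \mathbf{r}_i^{\mathrm{extracted}} - \mathbf{r}_i^{\mathrm{true}} \right\rVert^{2}}.$$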

  11. Research on Face Recognition Based on Embedded System

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Because face recognition requires storing a large amount of image feature data and executing complex calculations, the face recognition process has traditionally been realized only on PCs with high performance. In this paper, the OpenCV Haar-like facial features were used to identify the face region; Principal Component Analysis (PCA) was employed for quick extraction of face features, and the Euclidean distance was adopted for face recognition. In this way, the data volume and computational complexity are reduced effectively, and face recognition can be carried out on an embedded platform. Finally, an embedded face recognition system was constructed on the Tiny6410 embedded platform. The test results showed that the system operates stably with a high recognition rate and can be used for portable and mobile identification and authentication.
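
As a rough illustration of the PCA-plus-Euclidean-distance pipeline summarized above (not the authors' exact implementation), the sketch below projects flattened face crops onto a few principal components and classifies a probe by nearest neighbour. Haar-cascade face detection is assumed to have already produced aligned, equally sized crops; all function and variable names are hypothetical.

```python
import numpy as np

def train_pca(faces, n_components=20):
    """faces: (n_samples, h*w) array of flattened, detected face crops."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Economy-size SVD: rows of vt are the principal components ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    features = centered @ components.T          # training-set projections
    return mean, components, features

def recognize(probe, mean, components, features, labels):
    """Return the label of the training face closest to the probe in PCA space."""
    probe_feat = (probe - mean) @ components.T
    dists = np.linalg.norm(features - probe_feat, axis=1)   # Euclidean distance
    return labels[int(np.argmin(dists))]
```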

  12. Unsteady Flame Embedding

    KAUST Repository

    El-Asrag, Hossam A.

    2011-01-01

    Direct simulation of all the length and time scales relevant to practical combustion processes is computationally prohibitive. When combustion processes are driven by reaction and transport phenomena occurring at the unresolved scales of a numerical simulation, one must introduce a dynamic subgrid model that accounts for the multiscale nature of the problem using information available on a resolvable grid. Here, we discuss a model that captures unsteady flow-flame interactions (including extinction, re-ignition, and history effects) via embedded simulations at the subgrid level. The model efficiently accounts for subgrid flame structure and incorporates detailed chemistry and transport, allowing more accurate prediction of the stretch effect and the heat release. In this chapter we first review the work done in the past thirty years to develop the flame embedding concept. Next we present a formulation for the same concept that is compatible with Large Eddy Simulation in the flamelet regimes. The unsteady flame embedding approach (UFE) treats the flame as an ensemble of locally one-dimensional flames, similar to the flamelet approach. However, a set of elemental one-dimensional flames is used to describe the turbulent flame structure directly at the subgrid level. The calculations employ a one-dimensional unsteady flame model that incorporates unsteady strain rate, curvature, and mixture boundary conditions imposed by the resolved scales. The model is used for closure of the subgrid terms in the context of large eddy simulation. Direct numerical simulation (DNS) data from a flame-vortex interaction problem is used for comparison. © Springer Science+Business Media B.V. 2011.

  13. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  14. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programing model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels are beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
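
The abstract does not reproduce the model itself; as background, communication models of this kind are typically built from a per-message latency-bandwidth (alpha-beta) cost to which topology and multicore penalty terms are added. A generic sketch (not the specific model validated in the paper) for communication phases p exchanging n_p messages of sizes m_{p,i} is

$$T_{\mathrm{msg}}(m) = \alpha + \beta m, \qquad T_{\mathrm{comm}} \approx \sum_{p}\Big(n_p\,\alpha + \beta \sum_{i} m_{p,i}\Big),$$

where α is the per-message latency and β the inverse bandwidth.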

  15. Synthesis of Fault-Tolerant Schedules with Transparency/Performance Trade-offs for Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2006-01-01

    … such that the operation of other processes is not affected, we call it transparent recovery. Although transparent recovery has the advantages of fault containment, improved debuggability and less memory needed to store the fault-tolerant schedules, it will introduce delays that can violate the timing constraints of the application. We propose a novel algorithm for the synthesis of fault-tolerant schedules that can handle the transparency/performance trade-offs imposed by the designer, and that makes use of the fault-occurrence information to reduce the overhead due to fault tolerance. We model the application as a conditional process graph, where the fault-occurrence information is represented as conditional edges and the transparent recovery is captured using synchronization nodes.

  16. The European computer model for optronic system performance prediction (ECOMOS)

    NARCIS (Netherlands)

    Kessler, S.; Bijl, P.; Labarre, L.; Repasi, E.; Wittenstein, W.; Bürsing, H.

    2017-01-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The

  17. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Sven Fleck

    2006-12-01

    Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results, which are on a higher level of abstraction, are transmitted. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. Also a georeferenced live visualization embedded in Google Earth is presented.

  19. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  20. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
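
The local-versus-cloud decision mentioned above can be pictured as a simple break-even comparison between wall time and cost. The sketch below is an illustrative assumption (independent, perfectly parallelizable jobs and a flat hourly node price), not the cost/benefit formulae derived in the paper; all parameter names and values are invented.

```python
def cloud_is_worthwhile(n_jobs, t_job_hours, local_cores, n_nodes,
                        cores_per_node, price_per_node_hour, max_cost):
    """Compare local wall time against cloud wall time and cost."""
    local_wall = n_jobs * t_job_hours / local_cores             # hours on the lab machine
    cloud_cores = n_nodes * cores_per_node
    cloud_wall = n_jobs * t_job_hours / cloud_cores              # hours on EC2-style nodes
    cloud_cost = cloud_wall * n_nodes * price_per_node_hour      # pay-per-use cost
    worthwhile = cloud_wall < local_wall and cloud_cost <= max_cost
    return worthwhile, cloud_wall, cloud_cost

ok, wall, cost = cloud_is_worthwhile(n_jobs=500, t_job_hours=0.5, local_cores=8,
                                     n_nodes=10, cores_per_node=16,
                                     price_per_node_hour=0.8, max_cost=50.0)
print(ok, round(wall, 2), round(cost, 2))   # True 1.56 12.5
```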

  1. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  2. Ultra-fine CuO Nanoparticles Embedded in Three-dimensional Graphene Network Nano-structure for High-performance Flexible Supercapacitors

    International Nuclear Information System (INIS)

    Li, Yanrong; Wang, Xue; Yang, Qi; Javed, Muhammad Sufyan; Liu, Qipeng; Xu, Weina; Hu, Chenguo; Wei, Dapeng

    2017-01-01

    High conductivity, large specific surface area and redox materials with excellent performance are urgently desired for improving electrochemical energy storage; however, it is hard to achieve all of these properties with a single redox material. Herein, we develop ultra-fine CuO nanoparticles embedded in a three-dimensional graphene network grown on carbon cloth (CuO/3DGN/CC) to construct a novel electrode material combining high conductivity, large specific area and excellent redox activity for supercapacitor applications. CuO/3DGN/CC electrodes with different CuO mass ratios are used to fabricate supercapacitors, and the optimized mass loading achieves a high areal capacitance of 2787 mF cm⁻² and a specific capacitance of 1539.8 F g⁻¹ at a current density of 6 mA cm⁻² with good stability. In addition, a highly flexible solid-state symmetric supercapacitor is also fabricated using this CuO/3DGN/CC composite. The device shows excellent electrochemical performance even at various bending angles, indicating a promising application for wearable electronic devices, and two devices with an area of 2 × 4 cm² in series can light nine light-emitting diodes for more than 3 minutes.
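
For context, areal and gravimetric capacitances of the kind quoted above are commonly extracted from galvanostatic charge/discharge curves. In generic form (standard textbook relations, not equations taken from this paper),

$$C_{\mathrm{areal}} = \frac{I\,\Delta t}{A\,\Delta V}, \qquad C_{\mathrm{sp}} = \frac{I\,\Delta t}{m\,\Delta V},$$

where I is the discharge current, Δt the discharge time, ΔV the potential window, A the electrode area and m the active mass.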

  3. Alumina-coated and manganese monoxide embedded 3D carbon derived from avocado as high-performance anode for lithium-ion batteries

    Science.gov (United States)

    rehman, Wasif ur; Xu, Youlong; Du, Xianfeng; Sun, Xiaofei; Ullah, Inam; Zhang, Yuan; Jin, Yanling; Zhang, Baofeng; Li, Xifei

    2018-07-01

    Derived from avocado fruit, a three-dimensional (3D) carbon is prepared via a hydrothermal/pyrolysis process, followed by embedding of MnO nanoparticles by a wet chemical method and coating with Al2O3 through an atomic layer deposition technique. The obtained material presents a hierarchical structure in which MnO nanocrystals are wrapped in the 3D carbon and then encapsulated in a uniform Al2O3 layer with a thickness of about 5 nm. In this structure, the 3D carbon offers numerous electronic pathways that enhance conductivity, and the Al2O3 nanolayer shelters the material from dissolution of Mn4+ and from volume changes during the charge/discharge process. The material (marked as C/MnO@Al2O3) exhibits high rate performance and excellent cyclability as an anode for lithium-ion batteries. A high specific capacity of about 600 mA h g⁻¹ is achieved at a current density of 1000 mA g⁻¹, and the electrode can still deliver a high specific capacity of about 1165 mA h g⁻¹ at 150 mA g⁻¹ after 100 cycles. These results point to a green anode material with high potential for advanced lithium-ion batteries.

  4. The Effects of Embedded Question Type and Locus of Control on Processing Depth, Knowledge Gain, and Attitude Change in a Computer-Based Interactive Video Environment

    OpenAIRE

    Mitchell, Michael W.

    1997-01-01

    The differential effectiveness of two types of adjunct embedded questions in facilitating deep processing, increased knowledge gain, and increased positive attitude change was examined in this two-session laboratory study. In session one, subjects completed a measure of locus of control (LOC) orientation, as well as measures of pretest knowledge and attitudes regarding drinking. Two weeks later, stratified assignment was used to place 33 subjects (ages 12 to 15...

  5. The ongoing investigation of high performance parallel computing in HEP

    CERN Document Server

    Peach, Kenneth J; Böck, R K; Dobinson, Robert W; Hansroul, M; Norton, Alan Robert; Willers, Ian Malcolm; Baud, J P; Carminati, F; Gagliardi, F; McIntosh, E; Metcalf, M; Robertson, L; CERN. Geneva. Detector Research and Development Committee

    1993-01-01

    Past and current exploitation of parallel computing in High Energy Physics is summarized and a list of R & D projects in this area is presented. The applicability of new parallel hardware and software to physics problems is investigated, in the light of the requirements for computing power of LHC experiments and the current trends in the computer industry. Four main themes are discussed (possibilities for a finer grain of parallelism; fine-grain communication mechanism; usable parallel programming environment; different programming models and architectures, using standard commercial products). Parallel computing technology is potentially of interest for offline and vital for real time applications in LHC. A substantial investment in applications development and evaluation of state of the art hardware and software products is needed. A solid development environment is required at an early stage, before mainline LHC program development begins.

  6. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
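
One simple way to picture the "determine which burst buffer stores the requested metadata" step is deterministic, hash-based placement of keys across burst buffers. The sketch below illustrates that general idea only; it is not the patented mechanism, and all class and method names are invented.

```python
import hashlib

class MetadataStore:
    """Toy key-value metadata store spread over several burst buffers."""

    def __init__(self, n_burst_buffers):
        self.buffers = [dict() for _ in range(n_burst_buffers)]

    def _owner(self, key):
        # Hash the key and map it deterministically to one burst buffer.
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % len(self.buffers)

    def put(self, key, metadata):
        self.buffers[self._owner(key)][key] = metadata

    def get(self, key):
        # Locate the owning burst buffer, then look up the key-value there.
        return self.buffers[self._owner(key)].get(key)

store = MetadataStore(n_burst_buffers=4)
store.put("/checkpoint/block-0017", {"size": 4096, "offset": 69632})
print(store.get("/checkpoint/block-0017"))
```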

  7. Software Applications on the Peregrine System | High-Performance Computing

    Science.gov (United States)

    Software applications available on the Peregrine high-performance computing system include, among others: the General Algebraic Modeling System (GAMS), a high-level modeling system for mathematical programming; the Gurobi Optimizer, a solver for mathematical programming; LAMMPS, for chemistry and materials science; and the R statistical computing environment for statistics and analysis.

  8. Spying on real-time computers to improve performance

    International Nuclear Information System (INIS)

    Taff, L.M.

    1975-01-01

    The sampled program-counter histogram, an established technique for shortening the execution times of programs, is described for a real-time computer. The use of a real-time clock allows particularly easy implementation. (Auth.)
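
The same idea, periodically sampling where a program is executing and accumulating a histogram of the hottest locations, can be sketched on a present-day POSIX system with a profiling timer. This is a minimal illustration of the technique, not the original real-time implementation described in the record.

```python
import collections
import math
import signal

samples = collections.Counter()

def _sample(signum, frame):
    # Record the function and line currently executing (the sampled "program counter").
    samples[(frame.f_code.co_name, frame.f_lineno)] += 1

signal.signal(signal.SIGPROF, _sample)
signal.setitimer(signal.ITIMER_PROF, 0.005, 0.005)   # sample every ~5 ms of CPU time

def workload():
    total = 0.0
    for i in range(1, 2_000_000):
        total += math.sqrt(i)       # the hot loop should dominate the histogram
    return total

workload()
signal.setitimer(signal.ITIMER_PROF, 0)               # stop sampling

for (func, line), hits in samples.most_common(5):
    print(f"{func}:{line}  {hits} samples")
```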

  9. 76 FR 60939 - Metal Fatigue Analysis Performed by Computer Software

    Science.gov (United States)

    2011-09-30

    ... Software AGENCY: Nuclear Regulatory Commission. ACTION: Regulatory issue summary; request for comment... computer software package, WESTEMS™, to demonstrate compliance with Section III, "Rules for... Software Addressees: All holders of, and applicants for, a power reactor operating license or construction...

  10. Benchmark Numerical Toolkits for High Performance Computing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  11. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-12-01

    This work developed and simulated a mathematical model for a mobile wireless computational Grid ... which mobile nodes will process the tasks ... evaluation are analytical modelling, simulation ... MATLAB 7.10.0.

  12. Field tests on partial embedment effects (embedment effect tests on soil-structure interaction)

    International Nuclear Information System (INIS)

    Kurimoto, O.; Tsunoda, T.; Inoue, T.; Izumi, M.; Kusakabe, K.; Akino, K.

    1993-01-01

    A series of Model Tests of Embedment Effect on Reactor Buildings has been carried out by the Nuclear Power Engineering Corporation (NUPEC), under the sponsorship of the Ministry of International Trade and Industry (MITI) of Japan. Nuclear reactor buildings in Japan are partially embedded due to construction conditions or building arrangement. It is necessary to verify the partial embedment effects by experiments and analytical studies in order to incorporate these effects in the seismic design. Forced vibration tests, therefore, were performed using a model with several types of embedment. Correlated simulation analyses were also performed, and the characteristics of partial embedment effects on soil-structure interaction were evaluated. (author)

  13. Introducing remarks upon the analysis of computer systems performance

    International Nuclear Information System (INIS)

    Baum, D.

    1980-05-01

    Some of the basic ideas of analytical techniques for studying the behaviour of computer systems are presented. Single systems as well as networks of computers are viewed as stochastic dynamical systems which may be modelled by queueing networks. This report therefore primarily serves as an introduction to probabilistic methods for qualitative analysis of systems. It is supplemented by an application example of Chandy's collapsing method. (orig.) [de]
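
As a concrete instance of the queueing view sketched above, the simplest single-server model (M/M/1), often the starting point of such analyses, gives a mean response time of

$$E[T] = \frac{1}{\mu - \lambda}, \qquad \rho = \frac{\lambda}{\mu} < 1,$$

where λ is the arrival rate and μ the service rate; queueing-network techniques such as the collapsing method mentioned in the record extend this style of analysis to interconnected resources.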

  14. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  15. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  16. A study of kinematic cues and anticipatory performance in tennis using computational manipulation and computer graphics.

    Science.gov (United States)

    Ida, Hirofumi; Fukuhara, Kazunobu; Kusubori, Seiji; Ishii, Motonobu

    2011-09-01

    Computer graphics of digital human models can be used to display human motions as visual stimuli. This study presents our technique for manipulating human motion with a forward kinematics calculation without violating anatomical constraints. A motion modulation of the upper extremity was conducted by proportionally modulating the anatomical joint angular velocity calculated by motion analysis. The effect of this manipulation was examined in a tennis situation--that is, the receiver's performance of predicting ball direction when viewing a digital model of the server's motion derived by modulating the angular velocities of the forearm or that of the elbow during the forward swing. The results showed that the faster the server's forearm pronated, the more the receiver's anticipation of the ball direction tended to the left side of the serve box. In contrast, the faster the server's elbow extended, the more the receiver's anticipation of the ball direction tended to the right. This suggests that tennis players are sensitive to the motion modulation of their opponent's racket-arm.
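
The angular-velocity modulation described above can be illustrated numerically: scale a joint's angular-velocity profile by a factor and re-integrate it to obtain the manipulated joint-angle trajectory. The velocity profile, duration, and scale factors below are invented for illustration; the study's own forward-kinematics model and anatomical constraints are not reproduced here.

```python
import numpy as np

def modulate_joint_angle(t, omega, scale):
    """Scale a joint angular-velocity profile and integrate it to a joint angle.

    t     : sample times (s)
    omega : joint angular velocity at each sample (rad/s)
    scale : proportional modulation factor (e.g. 1.2 = 20% faster pronation)
    Returns the modulated joint-angle trajectory (rad), starting from 0.
    """
    omega_mod = scale * np.asarray(omega)
    dt = np.diff(t, prepend=t[0])
    return np.cumsum(omega_mod * dt)          # simple forward integration

# Hypothetical forearm-pronation velocity profile over a 100 ms forward swing
t = np.linspace(0.0, 0.1, 101)
omega = 40.0 * np.sin(np.pi * t / 0.1)        # rad/s, bell-shaped profile
theta_fast = modulate_joint_angle(t, omega, scale=1.2)
theta_slow = modulate_joint_angle(t, omega, scale=0.8)
print(theta_fast[-1], theta_slow[-1])         # final angles scale with the factor
```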

  17. Bimetallic CoNiSx nanocrystallites embedded in nitrogen-doped carbon anchored on reduced graphene oxide for high-performance supercapacitors.

    Science.gov (United States)

    Chen, Qidi; Miao, Jinkang; Quan, Liang; Cai, Daoping; Zhan, Hongbing

    2018-02-22

    Exploring high-performance and low-priced electrode materials for supercapacitors is important but remains challenging. In this work, a unique sandwich-like nanocomposite of reduced graphene oxide (rGO)-supported N-doped carbon embedded with ultrasmall CoNiSx nanocrystallites (rGO/CoNiSx/N-C nanocomposite) has been successfully designed and synthesized by a simple one-step carbonization/sulfurization treatment of the rGO/Co-Ni precursor. The intriguing structural/compositional/morphological advantages endow the as-synthesized rGO/CoNiSx/N-C nanocomposite with excellent electrochemical performance as an advanced electrode material for supercapacitors. Compared with the other two rGO/CoNiOx and rGO/CoNiSx nanocomposites, the rGO/CoNiSx/N-C nanocomposite exhibits much enhanced performance, including a high specific capacitance (1028.2 F g⁻¹ at 1 A g⁻¹), excellent rate capability (89.3% capacitance retention at 10 A g⁻¹) and good cycling stability (93.6% capacitance retention over 2000 cycles). In addition, an asymmetric supercapacitor (ASC) device based on the rGO/CoNiSx/N-C nanocomposite as the cathode and activated carbon (AC) as the anode is also fabricated, which can deliver a high energy density of 32.9 Wh kg⁻¹ at a power density of 229.2 W kg⁻¹ with desirable cycling stability. These electrochemical results evidently indicate the great potential of the sandwich-like rGO/CoNiSx/N-C nanocomposite for applications in high-performance supercapacitors.

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  19. On the impact of quantum computing technology on future developments in high-performance scientific computing

    OpenAIRE

    Möller, Matthias; Vuik, Cornelis

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to researchers and vendors of future computing technologies, national authorities are showing strong interest in maturing this technology due to its known potential to break many of today’s encryption technique...

  20. Reducing power consumption while performing collective operations on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
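
    A minimal sketch of the selection step described above: each compute node picks, from the available implementations of the requested collective type, the variant whose power-consumption characteristics best fit a latency budget. The variant names, power figures and latency figures below are illustrative assumptions, not values from the patent.

        # Power-aware selection of a collective-operation variant (illustrative sketch).
        COLLECTIVE_VARIANTS = {
            "allreduce": [
                # (variant name, estimated power in watts, estimated latency in microseconds)
                ("recursive-doubling", 42.0, 120.0),
                ("ring",               35.5, 180.0),
                ("binomial-tree",      38.0, 150.0),
            ],
        }

        def select_collective(kind: str, latency_budget_us: float):
            """Return the lowest-power variant of `kind` that still meets the latency budget."""
            candidates = [v for v in COLLECTIVE_VARIANTS[kind] if v[2] <= latency_budget_us]
            if not candidates:                      # nothing fits: fall back to the fastest variant
                return min(COLLECTIVE_VARIANTS[kind], key=lambda v: v[2])
            return min(candidates, key=lambda v: v[1])

        if __name__ == "__main__":
            name, power, latency = select_collective("allreduce", latency_budget_us=160.0)
            print(f"selected {name}: ~{power} W, ~{latency} us")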

  1. A Real-Time Embedded Control System for Electro-Fused Magnesia Furnace

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-01-01

    Full Text Available The smelting process of an electro-fused magnesia furnace is complicated, with complex operating conditions, strong nonlinearities, and strong couplings, so a traditional linear controller cannot control it well. An advanced intelligent control strategy is a good solution for this kind of industrial process, but it typically involves a huge programming task and difficult debugging and maintenance. In this paper, a real-time embedded control system is proposed for the process control of an electro-fused magnesia furnace based on an intelligent control strategy and model-based design technology. On the hardware side, an embedded controller based on an industrial Single Board Computer (SBC) is developed to meet the demands of the industrial field environment. On the software side, a Linux kernel extended with the Real-Time Application Interface (RTAI) is used as the real-time kernel of the controller to improve its real-time performance. The embedded software platform is also modified to support generating embedded code automatically from Simulink/Stateflow models. Based on the proposed embedded control system, the intelligent embedded control software of the electro-fused magnesia furnace can be generated directly from Simulink/Stateflow models. To validate the effectiveness of the proposed embedded control system, hardware-in-the-loop (HIL) and industrial field experiments are both implemented. Experimental results show that the embedded control system works well in both laboratory and industrial environments.

  2. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2014-09-01

    Full Text Available Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example it is the minimum-time ski line, represented as a piecewise-linear function (a method of eliminating infeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity). Possible extensions of the basic algorithm are also described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy and cost efficient, which is crucial in embedded systems, e.g., trajectory planners of autonomous robots. The presented algorithms can be applied to many trajectory optimization problems, including those with a black-box performance measure.
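
    The core idea above (grid discretization plus dynamic programming over a layered graph) can be illustrated with a short, self-contained sketch. The grid, the stage-cost function and all numbers below are placeholder assumptions standing in for the paper's simulation-based ski-racer model.

        import math

        # Layered-graph dynamic programming over a grid discretization (illustrative sketch).
        # Each layer is a set of lateral positions at a fixed downhill station; stage_cost()
        # is a placeholder for the simulation-based traversal-time estimate used in the paper.

        def stage_cost(y_from, y_to, dx):
            return math.hypot(dx, y_to - y_from)          # placeholder "time" for one segment

        def min_time_line(layers, dx):
            """layers: list of lists of lateral grid positions; returns (total cost, path)."""
            best = {y: 0.0 for y in layers[0]}            # cost-to-reach for the first layer
            back = [{} for _ in layers]
            for k in range(1, len(layers)):
                new_best = {}
                for y in layers[k]:
                    prev, cost = min(
                        ((p, best[p] + stage_cost(p, y, dx)) for p in layers[k - 1]),
                        key=lambda t: t[1],
                    )
                    new_best[y], back[k][y] = cost, prev
                best = new_best
            y_end = min(best, key=best.get)               # cheapest node in the final layer
            path = [y_end]
            for k in range(len(layers) - 1, 0, -1):       # backtrack the piecewise-linear line
                path.append(back[k][path[-1]])
            return best[y_end], list(reversed(path))

        if __name__ == "__main__":
            grid = [[-2.0, -1.0, 0.0, 1.0, 2.0]] * 6      # 6 stations, 5 lateral positions each
            cost, path = min_time_line(grid, dx=10.0)
            print(round(cost, 2), path)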

  3. Performance analysis of cloud computing services for many-tasks scientific computing

    NARCIS (Netherlands)

    Iosup, A.; Ostermann, S.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.

    2011-01-01

    Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a

  4. A performance analysis of EC2 cloud computing services for scientific computing

    NARCIS (Netherlands)

    Ostermann, S.; Iosup, A.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.; Avresky, D.; Diaz, M.; Bode, A.; Bruno, C.; Dekel, E.

    2010-01-01

    Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds

  5. Computer Access and Computer Use for Science Performance of Racial and Linguistic Minority Students

    Science.gov (United States)

    Chang, Mido; Kim, Sunha

    2009-01-01

    This study examined the effects of computer access and computer use on the science achievement of elementary school students, with focused attention on the effects for racial and linguistic minority students. The study used the Early Childhood Longitudinal Study (ECLS-K) database and conducted statistical analyses with proper weights and…

  6. On the impact of quantum computing technology on future developments in high-performance scientific computing

    NARCIS (Netherlands)

    Möller, M.; Vuik, C.

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to

  7. Honeycomb-inspired design of ultrafine SnO2@C nanospheres embedded in carbon film as anode materials for high performance lithium- and sodium-ion battery

    Science.gov (United States)

    Ao, Xiang; Jiang, Jianjun; Ruan, Yunjun; Li, Zhishan; Zhang, Yi; Sun, Jianwu; Wang, Chundong

    2017-08-01

    Tin oxide (SnO2) has been considered one of the most promising anodes for advanced rechargeable batteries due to its high energy density, earth abundance and environmental friendliness. However, its large volume change during the Li-Sn/Na-Sn alloying and de-alloying processes results in fast capacity degradation over long-term cycling. To solve this issue, in this work we design and synthesize a novel honeycomb-like composite composed of carbon-encapsulated SnO2 nanospheres embedded in a carbon film, using dual templates of SiO2 and NaCl. Using these composites as anodes in both lithium-ion and sodium-ion batteries, no discernible capacity degradation is observed over hundreds of cycles at both low current density (100 mA g-1) and high current density (500 mA g-1). Such good cyclic stability and high delivered capacity are attributed to the high conductivity of the supporting carbon film and the hollow encapsulating carbon shells, which not only provide enough space to accommodate the volume expansion but also prevent further aggregation of SnO2 nanoparticles upon cycling. By engineering electrodes that accommodate large volume expansion, we demonstrate a prototype for high-performance batteries, especially high-power batteries.

  8. Co3O4 nanoparticles embedded in ordered mesoporous carbon with enhanced performance as an anode material for Li-ion batteries

    Energy Technology Data Exchange (ETDEWEB)

    Park, Junsu; Kim, Gil-Pyo [Seoul National University (SNU), World Class University (WCU) Program of Chemical Convergence for Energy and Environment C2E2, School of Chemical and Biological Engineering, College of Engineering, Institute of Chemical Processes (Korea, Republic of); Umh, Ha Nee [Kwangwoon University, Department of Chemical Engineering (Korea, Republic of); Nam, Inho; Park, Soomin [Seoul National University (SNU), World Class University (WCU) Program of Chemical Convergence for Energy and Environment C2E2, School of Chemical and Biological Engineering, College of Engineering, Institute of Chemical Processes (Korea, Republic of); Kim, Younghun [Kwangwoon University, Department of Chemical Engineering (Korea, Republic of); Yi, Jongheop, E-mail: jyi@snu.ac.kr [Seoul National University (SNU), World Class University (WCU) Program of Chemical Convergence for Energy and Environment C2E2, School of Chemical and Biological Engineering, College of Engineering, Institute of Chemical Processes (Korea, Republic of)

    2013-09-15

    A Co3O4/ordered mesoporous carbon (OMC) nanocomposite, in which Co3O4 nanoparticles (NPs) with an average size of about 10 nm are homogeneously embedded in the OMC framework, is prepared for use as an anode material in Li-ion batteries. The composite is prepared by a one-pot synthesis based on the solvent evaporation-induced co-self-assembly of a phenolic resol, the triblock copolymer F127, and Co(NO3)2·6H2O, followed by carbonization and oxidation. The resulting material has a high reversible capacity of approximately 1,025 mA h g-1 after 100 cycles at a current density of 0.1 A g-1. The enhanced cycling stability and rate capability of the composite can be attributed to the combined mesoporous nanostructure, which provides efficient pathways for Li-ion transport, and the homogeneous distribution of the Co3O4 NPs in the pore walls of the OMC, which prevents aggregation. These findings suggest that the OMC has promise for use as a carbon matrix for metals and metal oxides as an anode material in high-performance Li-ion batteries.

  9. Automated procedure for performing computer security risk analysis

    International Nuclear Information System (INIS)

    Smith, S.T.; Lim, J.J.

    1984-05-01

    Computers, the invisible backbone of nuclear safeguards, monitor and control plant operations and support many materials accounting systems. Our automated procedure to assess computer security effectiveness differs from traditional risk analysis methods. The system is modeled as an interactive questionnaire, fully automated on a portable microcomputer. A set of modular event trees links the questionnaire to the risk assessment. Qualitative scores are obtained for target vulnerability, and qualitative impact measures are evaluated for a spectrum of threat-target pairs. These are then combined by a linguistic algebra to provide an accurate and meaningful risk measure. 12 references, 7 figures
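
    As a rough illustration of the qualitative combination step mentioned above, the sketch below combines vulnerability and impact levels for each threat-target pair through a small lookup table; the levels, the table and the example pairs are assumptions standing in for the linguistic algebra named in the abstract.

        # Qualitative risk combination for threat-target pairs (illustrative stand-in).
        RISK_TABLE = {            # RISK_TABLE[vulnerability][impact] -> qualitative risk
            "low":    {"low": "low",    "medium": "low",    "high": "medium"},
            "medium": {"low": "low",    "medium": "medium", "high": "high"},
            "high":   {"low": "medium", "medium": "high",   "high": "high"},
        }

        def combine_risk(vulnerability, impact):
            return RISK_TABLE[vulnerability][impact]

        if __name__ == "__main__":
            # hypothetical threat-target pairs with qualitative scores
            pairs = [("insider", "accounting system", "medium", "high"),
                     ("outsider", "plant network", "low", "high")]
            for threat, target, vul, imp in pairs:
                print(f"{threat} -> {target}: risk = {combine_risk(vul, imp)}")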

  10. Validation and computing and performance studies for the ATLAS simulation

    CERN Document Server

    Marshall, Z; The ATLAS collaboration

    2009-01-01

    We present the validation of the ATLAS simulation software project. Software development is controlled by nightly builds and several levels of automatic tests to ensure stability. Computing validation, including CPU time, memory, and disk space required per event, is benchmarked for all software releases. Several different physics processes and event types are checked to thoroughly test all aspects of the detector simulation. The robustness of the simulation software is demonstrated by the production of 500 million events on the World-wide LHC Computing Grid in the last year.

  11. Performance predictions for solar-chemical convertors by computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Luttmer, J.D.; Trachtenberg, I.

    1985-08-01

    A computer model which simulates the operation of the Texas Instruments solar-chemical convertor (SCC) was developed. The model allows optimization of SCC processes, materials, and configuration by facilitating decisions on tradeoffs among ease of manufacturing, power conversion efficiency, and cost effectiveness. The model includes various algorithms which define the electrical, electrochemical, and resistance parameters and which describe the operation of the discrete components of the SCC. Results of the model which depict the effect of material and geometric changes on various parameters are presented. The computer-calculated operation is compared with experimentally observed hydrobromic acid electrolysis rates.

  12. An embedded formula of the Chebyshev collocation method for stiff problems

    Science.gov (United States)

    Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu

    2017-12-01

    In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error, as performed in traditional embedded Runge-Kutta schemes, is proposed. The method is performed in such a way that not only the stability region of the embedded formula can be widened, but by allowing the usage of larger time step sizes, the total computational costs can also be reduced. In terms of concrete convergence and stability analysis, the constructed algorithm turns out to have an 8th order convergence and it exhibits A-stability. Through several numerical experimental results, we have demonstrated that the proposed method is numerically more efficient, compared to several existing implicit methods.
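
    The error-estimation idea described above, estimating the local truncation error from a pair of methods, can be sketched with a much simpler embedded pair; the Heun(2)/Euler(1) pair, the tolerance and the test problem below are placeholders, not the authors' 8th-order Chebyshev collocation formula.

        import math

        # Generic embedded-pair step-size control (illustrative sketch, not the paper's scheme).
        def embedded_step(f, t, y, h):
            k1 = f(t, y)
            k2 = f(t + h, y + h * k1)
            y_high = y + 0.5 * h * (k1 + k2)      # 2nd-order (Heun) solution
            y_low = y + h * k1                    # 1st-order (Euler) solution
            return y_high, abs(y_high - y_low)    # error estimate = difference of the pair

        def integrate(f, t0, y0, t_end, tol=1e-6, h=0.1):
            t, y = t0, y0
            while t < t_end:
                h = min(h, t_end - t)
                y_new, err = embedded_step(f, t, y, h)
                if err <= tol:                    # accept the step
                    t, y = t + h, y_new
                # adjust the step size from the error estimate (safety factor 0.9)
                h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
            return y

        if __name__ == "__main__":
            y = integrate(lambda t, y: -50.0 * y, 0.0, 1.0, 1.0)   # mildly stiff decay test
            print(y, math.exp(-50.0))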

  13. Design and Implementation of an Embedded NIOS II System for JPEG2000 Tier II Encoding

    Directory of Open Access Journals (Sweden)

    John M. McNichols

    2013-01-01

    Full Text Available This paper presents a novel implementation of the JPEG2000 standard as a system on a chip (SoC). While most of the research in this field centers on acceleration of the EBCOT Tier I encoder, this work focuses on an embedded solution for EBCOT Tier II. Specifically, this paper proposes using an embedded softcore processor to perform Tier II processing as the back end of an encoding pipeline. The Altera NIOS II processor is chosen for the implementation and is coupled with existing embedded processing modules to realize a fully embedded JPEG2000 encoder. The design is synthesized on a Stratix IV FPGA and is shown to outperform other comparable SoC implementations by 39% in computation time.

  14. Polymorphic Embedding of DSLs

    DEFF Research Database (Denmark)

    Hofer, Christian; Ostermann, Klaus; Rendel, Tillmann

    2008-01-01

    propose polymorphic embedding of DSLs, where many different interpretations of a DSL can be provided as reusable components, and show how polymorphic embedding can be realized in the programming language Scala. With polymorphic embedding, the static type-safety, modularity, composability and rapid...

  15. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chips in high-performance workstations. In terms of computational capability, GPUs deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in general-purpose GPU a...

  16. Performance comparison between Java and JNI for optimal implementation of computational micro-kernels

    OpenAIRE

    Halli , Nassim; Charles , Henri-Pierre; Méhaut , Jean-François

    2015-01-01

    General purpose CPUs used in high performance computing (HPC) support a vector instruction set and an out-of-order engine dedicated to increasing instruction-level parallelism. Hence, related optimizations are currently critical to improve the performance of applications requiring numerical computation. Moreover, the use of a Java run-time environment such as the HotSpot Java Virtual Machine (JVM) in high performance computing is a promising alternative. It benefits ...

  17. Performance of Cloud Computing Centers with Multiple Priority Classes

    NARCIS (Netherlands)

    Ellens, W.; Zivkovic, Miroslav; Akkerboom, J.; Litjens, R.; van den Berg, Hans Leo

    In this paper we consider the general problem of resource provisioning within cloud computing. We analyze the problem of how to allocate resources to different clients such that the service level agreements (SLAs) for all of these clients are met. A model with multiple service request classes

  18. Running Batch Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    and run your application. Users typically create or edit job scripts using a text editor such as vi. Peregrine has several types of compute nodes, which differ in the amount of memory and number of processor cores. The majority of the nodes have 24

  19. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput

  20. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  1. Computer-Supported Instruction in Enhancing the Performance of Dyscalculics

    Science.gov (United States)

    Kumar, S. Praveen; Raja, B. William Dharma

    2010-01-01

    The use of instructional media is an essential component of teaching-learning process which contributes to the efficiency as well as effectiveness of the teaching-learning process. Computer-supported instruction has a very important role to play as an advanced technological instruction as it employs different instructional techniques like…

  2. Cross functional organisational embedded system development

    OpenAIRE

    Lennon, Sophie

    2015-01-01

    Embedded system development is continuing to grow. Medical, automotive and Internet of Things are just some of the market segments. There is a tight coupling between hardware and software when developing an embedded system, often needing to meet strict performance targets, standards requirements and aggressive schedules. Embedded software developers need to consider hardware requirements in far greater detail as they can have a significant impact on the quality and value of t...

  3. Compact Acoustic Models for Embedded Speech Recognition

    Directory of Open Access Journals (Sweden)

    Lévy Christophe

    2009-01-01

    Full Text Available Speech recognition applications are known to require a significant amount of resources. However, embedded speech recognition allows only a few KB of memory, a few MIPS, and a small amount of training data. In order to fit the resource constraints of embedded applications, an approach based on a semicontinuous HMM system using state-independent acoustic modelling is proposed. A transformation is computed and applied to the global model in order to obtain each HMM state-dependent probability density function, so that only the transformation parameters need to be stored. This approach is evaluated on two tasks: digit and voice-command recognition. A fast adaptation technique for the acoustic models is also proposed. In order to significantly reduce computational costs, the adaptation is performed only on the global model (using related speaker recognition adaptation techniques), with no need for state-dependent data. The whole approach results in a relative gain of more than 20% compared to a basic HMM-based system fitting the same constraints.

  4. An Embedded Reconfigurable Logic Module

    Science.gov (United States)

    Tucker, Jerry H.; Klenke, Robert H.; Shams, Qamar A. (Technical Monitor)

    2002-01-01

    A Miniature Embedded Reconfigurable Computer and Logic (MERCAL) module has been developed and verified. MERCAL was designed to be a general-purpose, universal module that can provide significant hardware and software resources to meet the requirements of many of today's complex embedded applications. This is accomplished in the MERCAL module by combining a sub-credit-card-size PC in a DIMM form factor with a Xilinx Spartan-II FPGA. The PC has the ability to download program files to the FPGA to configure it for different hardware functions and to transfer data to and from the FPGA via the PC's ISA bus during run time. The MERCAL module combines, in a compact package, the computational power of a 133 MHz PC with up to 150,000 gate equivalents of digital logic that can be reconfigured by software. The general architecture and functionality of the MERCAL hardware and system software are described.

  5. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to predict the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and current trends in protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  7. Performing a local reduction operation on a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
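
    A minimal sketch of the staging and reduction steps listed above, with plain Python lists standing in for the cores' input buffers and the shared interleaved buffer; the chunk size, buffer contents and element-wise sum are illustrative assumptions, not the patented implementation.

        # Interleaved-chunk staging and local reduction (illustrative sketch).
        CHUNK = 4

        def interleave(buf_a, buf_b, chunk=CHUNK):
            """Copy two equal-length buffers into one shared buffer in alternating chunks."""
            out = []
            for i in range(0, len(buf_a), chunk):
                out.extend(buf_a[i:i + chunk])
                out.extend(buf_b[i:i + chunk])
            return out

        def every_other_chunk(buf, which, chunk=CHUNK):
            """Recover one core's chunks ("every other interleaved chunk") from the shared buffer."""
            out = []
            for i in range(which * chunk, len(buf), 2 * chunk):
                out.extend(buf[i:i + chunk])
            return out

        def local_reduce(red0, red1, net_write, net_read):
            shared = interleave(red0, red1)                          # staged interleaved buffer
            parts = [every_other_chunk(shared, 0), every_other_chunk(shared, 1),
                     list(net_write), list(net_read)]                # copies of the other buffers
            return [sum(vals) for vals in zip(*parts)]               # element-wise local reduction

        if __name__ == "__main__":
            n = 8
            print(local_reduce([1] * n, [2] * n, [3] * n, [4] * n))  # -> [10, 10, ..., 10]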

  8. The European computer model for optronic system performance prediction (ECOMOS)

    Science.gov (United States)

    Keßler, Stefan; Bijl, Piet; Labarre, Luc; Repasi, Endre; Wittenstein, Wolfgang; Bürsing, Helge

    2017-10-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is exposed. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given as well as a short discussion of validation tests and an outlook on the future potential of simulation for sensor assessment.

  9. Embedded multiprocessors scheduling and synchronization

    CERN Document Server

    Sriram, Sundararajan

    2009-01-01

    Techniques for Optimizing Multiprocessor Implementations of Signal Processing ApplicationsAn indispensable component of the information age, signal processing is embedded in a variety of consumer devices, including cell phones and digital television, as well as in communication infrastructure, such as media servers and cellular base stations. Multiple programmable processors, along with custom hardware running in parallel, are needed to achieve the computation throughput required of such applications. Reviews important research in key areas related to the multiprocessor implementation of multi

  10. Corrosion Monitors for Embedded Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, Alex L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pfeifer, Kent B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Casias, Adrian L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Howell, Stephen W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sorensen, Neil R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Missert, Nancy A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    We have developed and characterized novel in-situ corrosion sensors to monitor and quantify the corrosive potential and history of localized environments. Embedded corrosion sensors can provide information to aid health assessments of internal electrical components including connectors, microelectronics, wires, and other susceptible parts. When combined with other data (e.g. temperature and humidity), theory, and computational simulation, the reliability of monitored systems can be predicted with higher fidelity.

  11. A high level language for a high performance computer

    Science.gov (United States)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  12. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov (United States)

    ), login node (WinHPC02) and worker/compute nodes. The head node acts as the file, DNS, and license server. The login node is where users connect to access the cluster. Node 03 has dual Intel Xeon E5530 2008 R2 HPC Edition. The login node, WinHPC02, is where users log in to access the system. This is where

  13. Architecture and Programming Models for High Performance Intensive Computation

    Science.gov (United States)

    2016-06-29

    commands from the data processing center to the sensors is needed. It has been noted that the ubiquity of mobile communication devices offers the...commands from a Processing Facility by way of mobile Relay Stations. The activity of each component of this model other than the Merge module can be...evaluation of the initial system implementation. Gao also was in charge of the development of Fresh Breeze architecture backend on new many-core computers

  14. Effects of Task Performance and Task Complexity on the Validity of Computational Models of Attention

    NARCIS (Netherlands)

    Koning, L. de; Maanen, P.P. van; Dongen, K. van

    2008-01-01

    Computational models of attention can be used as a component of decision support systems. For accurate support, a computational model of attention has to be valid and robust. The effects of task performance and task complexity on the validity of three different computational models of attention were

  15. Gender Differences in Attitudes toward Computers and Performance in the Accounting Information Systems Class

    Science.gov (United States)

    Lenard, Mary Jane; Wessels, Susan; Khanlarian, Cindi

    2010-01-01

    Using a model developed by Young (2000), this paper explores the relationship between performance in the Accounting Information Systems course, self-assessed computer skills, and attitudes toward computers. Results show that after taking the AIS course, students experience a change in perception about their use of computers. Females'…

  16. An urban energy performance evaluation system and its computer implementation.

    Science.gov (United States)

    Wang, Lei; Yuan, Guan; Long, Ruyin; Chen, Hong

    2017-12-15

    To improve the urban environment and effectively reflect and promote urban energy performance, an urban energy performance evaluation system was constructed, thereby strengthening urban environmental management capabilities. From the perspectives of internalization and externalization, a framework of evaluation indicators and key factors that determine urban energy performance and explore the reasons for differences in performance was proposed according to established theory and previous studies. Using the improved stochastic frontier analysis method, an urban energy performance evaluation and factor analysis model was built that brings performance evaluation and factor analysis into the same stage for study. According to data obtained for the Chinese provincial capitals from 2004 to 2013, the coefficients of the evaluation indicators and key factors were calculated by the urban energy performance evaluation and factor analysis model. These coefficients were then used to compile the program file. The urban energy performance evaluation system developed in this study was designed in three parts: a database, a distributed component server, and a human-machine interface. Its functions were designed as login, addition, edit, input, calculation, analysis, comparison, inquiry, and export. On the basis of these contents, an urban energy performance evaluation system was developed using Microsoft Visual Studio .NET 2015. The system can effectively reflect the status of and any changes in urban energy performance. Beijing was considered as an example to conduct an empirical study, which further verified the applicability and convenience of this evaluation system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Optimizing the stimulus presentation paradigm design for the P300-based brain-computer interface using performance prediction

    Science.gov (United States)

    Mainsah, B. O.; Reeves, G.; Collins, L. M.; Throckmorton, C. S.

    2017-08-01

    Objective. The role of a brain-computer interface (BCI) is to discern a user’s intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. Approach. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. Main results. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional

  18. Optimizing the stimulus presentation paradigm design for the P300-based brain-computer interface using performance prediction.

    Science.gov (United States)

    Mainsah, B O; Reeves, G; Collins, L M; Throckmorton, C S

    2017-08-01

    The role of a brain-computer interface (BCI) is to discern a user's intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. By accounting for

  19. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    Science.gov (United States)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI, the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, by our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.

  20. High Performance Computing (HPC) Challenge (HPCC) Benchmark Suite Development

    National Research Council Canada - National Science Library

    Dongarra, J. J

    2005-01-01

    .... The applications of performance modeling are numerous, including evaluation of algorithms, optimization of code implementation, parallel library development, and comparison of system architectures...

  1. Constraint Embedding for Vehicle Suspension Dynamics

    Directory of Open Access Journals (Sweden)

    Jain Abhinandan

    2016-06-01

    Full Text Available The goal of this research is to achieve close-to-real-time dynamics performance to allow auto-pilot in-the-loop testing of unmanned ground vehicles (UGVs) for urban as well as off-road scenarios. The overall vehicle dynamics performance is governed by the multibody dynamics model for the vehicle, the wheel/terrain interaction dynamics and the onboard control system. The topic of this paper is the development of a computationally efficient and accurate dynamics model for ground vehicles with complex suspension dynamics. A challenge is that typical vehicle suspensions involve closed-chain loops which require expensive DAE integration techniques. In this paper, we illustrate the use of the alternative constraint embedding technique to reduce the cost and improve the accuracy of the dynamics model for the vehicle.

  2. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...
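
    The signature-and-mask idea can be illustrated roughly as follows; the sketch uses random ±1 masks and one sign bit per masked projection as a stand-in for the Hadamard-code masks named in the record, so it shows the flavour of the method rather than the patented construction.

        import random

        # Mask-based signatures for a query vector and a data vector (illustrative sketch).
        def make_masks(num_masks, dim, seed=0):
            rng = random.Random(seed)
            return [[rng.choice((-1, 1)) for _ in range(dim)] for _ in range(num_masks)]

        def signatures(vector, masks):
            # one bit per mask: the sign of the masked projection
            return [1 if sum(m * x for m, x in zip(mask, vector)) >= 0 else 0 for mask in masks]

        def matching_signatures(query, data, masks):
            sq, sd = signatures(query, masks), signatures(data, masks)
            return [i for i, (bq, bd) in enumerate(zip(sq, sd)) if bq == bd]

        if __name__ == "__main__":
            masks = make_masks(num_masks=16, dim=8)
            q = [0.9, -0.1, 0.3, 0.0, 0.5, -0.4, 0.2, 0.1]
            d = [1.0, -0.2, 0.4, 0.1, 0.4, -0.5, 0.3, 0.0]   # similar to q
            matches = matching_signatures(q, d, masks)
            print(f"{len(matches)} of {len(masks)} signature bits match")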

  3. Computational performance of a projection and rescaling algorithm

    OpenAIRE

    Pena, Javier; Soheili, Negar

    2018-01-01

    This paper documents a computational implementation of a projection and rescaling algorithm for finding most interior solutions to the pair of feasibility problems: find x in L ∩ R^n_+ and find x̂ in L^⊥ ∩ R^n_+, where L denotes a linear subspace in R^n and L^⊥ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a ...

  4. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models over standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  5. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern High Energy Physics. To perform precision measurements of the Higgs boson properties, fast and efficient tools for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One possibility to address this shortfall of computing resources is the use of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of Higgs boson physics and of the Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and the Kurchatov Institute Data Processing Center, including Tier...

  6. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
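
    The detection idea can be sketched briefly: the same chaotic map trajectory is computed redundantly (here on two simulated nodes) and any divergence flags a faulty component, since chaotic maps amplify even tiny arithmetic errors. The logistic map, the injected fault and the tolerance below are illustrative assumptions, not the patented implementation.

        # Redundant chaotic-map trajectories compared to detect a faulty component (sketch).
        def logistic_trajectory(x0, steps, r=3.9, fault_at=None):
            x, traj = x0, []
            for i in range(steps):
                x = r * x * (1.0 - x)
                if fault_at is not None and i == fault_at:
                    x += 1e-12                      # simulated tiny arithmetic error
                traj.append(x)
            return traj

        def diverged(traj_a, traj_b, tol=1e-6):
            return any(abs(a - b) > tol for a, b in zip(traj_a, traj_b))

        if __name__ == "__main__":
            healthy = logistic_trajectory(0.123456, steps=200)
            replica = logistic_trajectory(0.123456, steps=200)
            faulty = logistic_trajectory(0.123456, steps=200, fault_at=50)
            print("healthy vs replica diverged:", diverged(healthy, replica))   # False
            print("healthy vs faulty diverged: ", diverged(healthy, faulty))    # True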

  7. Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) processing speed scores as measures of noncredible responding: The third generation of embedded performance validity indicators.

    Science.gov (United States)

    Erdodi, Laszlo A; Abeare, Christopher A; Lichtenstein, Jonathan D; Tyson, Bradley T; Kucharski, Brittany; Zuccato, Brandon G; Roth, Robert M

    2017-02-01

    Research suggests that select processing speed measures can also serve as embedded validity indicators (EVIs). The present study examined the diagnostic utility of Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests as EVIs in a mixed clinical sample of 205 patients medically referred for neuropsychological assessment (53.3% female, mean age = 45.1). Classification accuracy was calculated against 3 composite measures of performance validity as criterion variables. A PSI ≤79 produced a good combination of sensitivity (.23-.56) and specificity (.92-.98). A Coding scaled score ≤5 resulted in good specificity (.94-1.00), but low and variable sensitivity (.04-.28). A Symbol Search scaled score ≤6 achieved a good balance between sensitivity (.38-.64) and specificity (.88-.93). A Coding-Symbol Search scaled score difference ≥5 produced adequate specificity (.89-.91) but consistently low sensitivity (.08-.12). A 2-tailed cutoff on the Coding/Symbol Search raw score ratio (≤1.41 or ≥3.57) produced acceptable specificity (.87-.93), but low sensitivity (.15-.24). Failing ≥2 of these EVIs produced variable specificity (.81-.93) and sensitivity (.31-.59). Failing ≥3 of these EVIs stabilized specificity (.89-.94) at a small cost to sensitivity (.23-.53). Results suggest that processing speed based EVIs have the potential to provide a cost-effective and expedient method for evaluating the validity of cognitive data. Given their generally low and variable sensitivity, however, they should not be used in isolation to determine the credibility of a given response set. They also produced unacceptably high rates of false positive errors in patients with moderate-to-severe head injury. Combining evidence from multiple EVIs has the potential to improve overall classification accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Design of massively parallel hardware multi-processors for highly-demanding embedded applications

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2013-01-01

    Many new embedded applications require complex computations to be performed to tight schedules, while at the same time demanding low energy consumption and low cost. For implementation of these highly-demanding applications, highly-optimized application-specific multi-processor system-on-a-chip

  9. Computational Modeling of Human Multiple-Task Performance

    National Research Council Canada - National Science Library

    Kieras, David E; Meyer, David

    2005-01-01

    This is the final report for a project that was a continuation of an earlier, long-term project on the development and validation of the EPIC cognitive architecture for modeling human cognition and performance...

  10. Performance Evaluation of the Myrias SPS-2 Computer

    National Research Council Canada - National Science Library

    McBryan, Oliver A; Pozo, Roldan

    1990-01-01

    .... The highlight of the software environment is the virtual shared memory environment. We have analyzed the performance of the shared memory environment by studying system efficiency as a function of both number of processors in use and of paging activity...

  11. Multithreading for Embedded Reconfigurable Multicore Systems

    NARCIS (Netherlands)

    Zaykov, P.G.

    2014-01-01

    In this dissertation, we address the problem of performance efficient multithreading execution on heterogeneous multicore embedded systems. By heterogeneous multicore embedded systems we refer to those, which have real-time requirements and consist of processor tiles with General Purpose Processor

  12. Multithreading for embedded reconfigurable multicore systems

    NARCIS (Netherlands)

    Zaykov, P.G.

    2014-01-01

    In this dissertation, we address the problem of performance efficient multithreading execution on heterogeneous multicore embedded systems. By heterogeneous multicore embedded systems we refer to those, which have real-time requirements and consist of processor tiles with General Purpose Processor

  13. Embedded sensor systems

    CERN Document Server

    Agrawal, Dharma Prakash

    2017-01-01

    This inspiring textbook provides an introduction to wireless technologies for sensors, explores the potential use of sensors for numerous applications, and utilizes probability theory and mathematical methods as a means of embedding sensors in system design. It discusses the need for synchronization and its underlying limitations, the inter-relation between given coverage and connectivity and the number of sensors needed, the use of geometrical distance to determine the location of the base station for data collection, and the use of anchor nodes for relative position determination of sensors. The book explores energy conservation, communication using TCP, the need for clustering and data aggregation, and residual energy determination and energy harvesting. It covers key topics of sensor communication like mobile base stations and relay nodes, delay-tolerant sensor networks, and remote sensing and possible applications. The book defines routing methods and does performance evaluation for random and regular sensor topology an...

  14. Computer analysis of sodium cold trap design and performance

    International Nuclear Information System (INIS)

    McPheeters, C.C.; Raue, D.J.

    1983-11-01

    Normal steam-side corrosion of steam-generator tubes in Liquid Metal Fast Breeder Reactors (LMFBRs) results in liberation of hydrogen, and most of this hydrogen diffuses through the tubes into the heat-transfer sodium and must be removed by the purification system. Cold traps are normally used to purify sodium, and they operate by cooling the sodium to temperatures near the melting point, where soluble impurities including hydrogen and oxygen precipitate as NaH and Na 2 O, respectively. A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions

  15. Computational study of performance characteristics for truncated conical aerospike nozzles

    Science.gov (United States)

    Nair, Prasanth P.; Suryan, Abhilash; Kim, Heuy Dong

    2017-12-01

    Aerospike nozzles are advanced rocket nozzles that can maintain their aerodynamic efficiency over a wide range of altitudes. They belong to the class of altitude-compensating nozzles. A vehicle with an aerospike nozzle uses less fuel at low altitudes, where most missions have the greatest need for thrust, due to its altitude adaptability. Aerospike nozzles are better suited to Single Stage to Orbit (SSTO) missions than conventional nozzles. In the current study, the flow through 20% and 40% aerospike nozzles is analyzed in detail using computational fluid dynamics techniques. A steady-state analysis with an implicit formulation is carried out. The Reynolds-averaged Navier-Stokes equations are solved with the Spalart-Allmaras turbulence model. The results are compared with experimental results from previous work. The transition from open wake to closed wake happens at a lower Nozzle Pressure Ratio for the 20% aerospike nozzle than for the 40% one.

  16. Electromagnetic Modeling of Human Body Using High Performance Computing

    Science.gov (United States)

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of the phenomenon of harvesting implanted devices using wireless powering coupled from external sources. The parallel electromagnetics code suite ACE3P developed at SLAC National Accelerator Laboratory is based on the finite element method for high fidelity accelerator simulation, which can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom has been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  17. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Full Text Available Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing the complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of experimental data is estimated by kernel density estimation. We then reported on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for a LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
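
    A small illustration of the dependency analysis mentioned above: the sketch estimates the mutual information between one performance factor and the measured run time with a coarse two-dimensional histogram (the paper uses kernel density estimation; the synthetic data and bin count here are assumptions).

        import math
        import random

        # Histogram-based mutual information between a performance factor and run time (sketch).
        def mutual_information(xs, ys, bins=8):
            def bin_of(v, lo, hi):
                return min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins))
            lo_x, hi_x, lo_y, hi_y = min(xs), max(xs), min(ys), max(ys)
            joint = [[0] * bins for _ in range(bins)]
            for x, y in zip(xs, ys):
                joint[bin_of(x, lo_x, hi_x)][bin_of(y, lo_y, hi_y)] += 1
            n = len(xs)
            px = [sum(row) / n for row in joint]
            py = [sum(joint[i][j] for i in range(bins)) / n for j in range(bins)]
            mi = 0.0
            for i in range(bins):
                for j in range(bins):
                    pxy = joint[i][j] / n
                    if pxy > 0:
                        mi += pxy * math.log(pxy / (px[i] * py[j]))
            return mi                                                    # in nats

        if __name__ == "__main__":
            rng = random.Random(1)
            size = [rng.uniform(1, 100) for _ in range(2000)]            # problem size (factor)
            runtime = [0.5 * s + rng.gauss(0, 3) for s in size]          # run time depends on size
            noise = [rng.uniform(0, 1) for _ in range(2000)]             # irrelevant factor
            print("MI(size, runtime):", round(mutual_information(size, runtime), 3))
            print("MI(noise, runtime):", round(mutual_information(noise, runtime), 3))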

  18. A Resource-Aware Component Model for Embedded Systems

    OpenAIRE

    Vulgarakis, Aneta

    2009-01-01

    Embedded systems are microprocessor-based systems that cover a large range of computer systems from ultra small computer-based devices to large systems monitoring and controlling complex processes. The particular constraints that must be met by embedded systems, such as timeliness, resource-use efficiency, short time-to-market and low cost, coupled with the increasing complexity of embedded system software, demand technologies and processes that will tackle these issues. An attractive approac...

  19. Replica-Based High-Performance Tuple Space Computing

    DEFF Research Database (Denmark)

    Andric, Marina; De Nicola, Rocco; Lluch Lafuente, Alberto

    2015-01-01

    of concurrency and data access. We investigate issues related to replica consistency, provide an operational semantics that guides the implementation of the language, and discuss the main synchronization mechanisms of our prototypical run-time framework. Finally, we provide a performance analysis, which includes...

  20. PAPIRUS - a computer code for FBR fuel performance analysis

    International Nuclear Information System (INIS)

    Kobayashi, Y.; Tsuboi, Y.; Sogame, M.

    1991-01-01

    The FBR fuel performance analysis code PAPIRUS has been developed to design fuels for demonstration and future commercial reactors. A pellet structural model was developed to describe the generation, depletion and transport of vacancies and atomic elements in a unified fashion. Comparison of PAPIRUS results with the power-to-melt test data from HEDL showed the validity of the code at initial reactor startup. (author)