WorldWideScience

Sample records for advanced computer architectures

  1. Power-efficient computer architectures: recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp
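
    The voltage and frequency management techniques surveyed here rest on the classic dynamic-power relation P = aCV^2f: scaling supply voltage and clock frequency together gives a roughly cubic power reduction for a linear performance loss. A minimal illustrative sketch (the constants are invented, not from the book):

      # Dynamic CMOS power: P = alpha * C * V^2 * f
      # (activity factor, switched capacitance, supply voltage, clock frequency).
      # Illustrative numbers only; the book's argument is qualitative.
      def dynamic_power(alpha, c_farads, v_volts, f_hertz):
          return alpha * c_farads * v_volts ** 2 * f_hertz

      base = dynamic_power(0.2, 1e-9, 1.0, 2.0e9)      # nominal operating point
      scaled = dynamic_power(0.2, 1e-9, 0.8, 1.6e9)    # V and f both scaled by 0.8
      print(f"power drops to {scaled / base:.2f}x")    # ~0.51x for a 0.8x slowdown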

  2. Advanced Computing Architectures for Cognitive Processing

    Science.gov (United States)

    2009-07-01

    customized datapath elements, encryption circuits optimized for specific keys, string matching circuits for publish/subscribe computations or... parallel datapaths, RC implementations can concurrently search various paths for determining likely meanings or predictions for text strings. This... signal processing applications, with the ability to relatively easily build a pipelined datapath optimized for the specific application needs. For this

  3. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    Science.gov (United States)

    Kazakov, Artem; Furukawa, Kazuro

    2010-11-01

    Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability of control system components. Recently the telecom industry produced an open hardware specification, the Advanced Telecom Computing Architecture (ATCA), aimed at better reliability, availability and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth, has proved stable, and is well represented by a number of vendors. ATCA is an industry standard for highly available systems. Complementing it, the Service Availability Forum (SAF), a consortium of leading communications and computing companies, describes the interaction between hardware and software. SAF defines a set of specifications, such as the Hardware Platform Interface and the Application Interface Specification, that provide an extensive description of highly available systems, services and their interfaces. Although originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adoption in accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to utilize the benefits of the ATCA platform.
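
    The Hardware Platform Interface itself is a C API; as a language-neutral illustration of the redundancy idea behind a redundant IOC pair (not the SAF API), a standby node can promote itself when the active node's heartbeat goes stale. A hypothetical sketch:

      import time

      # Hypothetical failover sketch: a standby controller takes over when the
      # active node's heartbeat goes stale. This is NOT the SAF HPI API.
      HEARTBEAT_TIMEOUT = 3.0   # seconds without a heartbeat before failover

      class StandbyNode:
          def __init__(self):
              self.last_heartbeat = time.monotonic()
              self.active = False

          def on_heartbeat(self):                # called when a heartbeat arrives
              self.last_heartbeat = time.monotonic()

          def poll(self):                        # called periodically
              stale = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT
              if not self.active and stale:
                  self.active = True             # assume control-system duties
                  print("standby promoted to active")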

  4. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    Science.gov (United States)

    1985-01-01

    Slides are reproduced that describe the importance of having high-performance number-crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and, in the long term, that Ames knows the best possible solutions for number-crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using real-time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  5. Advanced Architectures for Astrophysical Supercomputing

    CERN Document Server

    Barsdell, Benjamin R; Fluke, Christopher J

    2010-01-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of $O(100\times)$ in general-purpose computation -- performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.
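
    The point about understanding algorithms before porting them can be made quantitative with Amdahl's law: if only a fraction p of the run time is amenable to a speed-up of s, the overall gain is 1/((1-p)+p/s). A small sketch (the fractions are illustrative, not from the paper):

      # Amdahl's law: overall speedup when a fraction p of the work
      # is accelerated by a factor s (e.g. a GPU kernel at ~100x).
      def amdahl(p, s):
          return 1.0 / ((1.0 - p) + p / s)

      for p in (0.5, 0.9, 0.99):
          print(f"parallel fraction {p:.2f}: overall speedup {amdahl(p, 100):.1f}x")
      # 0.50 -> ~2.0x, 0.90 -> ~9.2x, 0.99 -> ~50.2x: the serial part dominates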

  6. Computer architecture technology trends

    CERN Document Server

    1991-01-01

    Please note this is a Short Discount publication. This year's edition of Computer Architecture Technology Trends analyses the trends which are taking place in the architecture of computing systems today. Due to the sheer number of different applications to which computers are being applied, there seems no end to the different adaptations which proliferate. There are, however, some underlying trends which appear. Decision makers should be aware of these trends when specifying architectures, particularly for future applications. This report is fully revised and updated and provides insight in

  7. Advanced router architectures

    CERN Document Server

    Kloth, Axel K

    2005-01-01

    Routers, switches, and transmission equipment form the backbone of the Internet, yet many users and service technicians do not understand how these nodes really work. Advanced Router Architectures addresses how components of advanced routers work together and how they are integrated with each other. This book provides the background behind why these building blocks perform certain functions, and how the function is implemented in general use. It offers an introduction to the subject matter that is intended to trigger deeper interest from the reader. The book explains, for example, why traffic m

  8. The Spin Torque Lego - from spin torque nano-devices to advanced computing architectures

    Science.gov (United States)

    Grollier, Julie

    2013-03-01

    Spin transfer torque (STT), predicted in 1996 and first observed around 2000, brought spintronic devices into the realm of active elements. A whole class of new devices has emerged, based on the combined effects of STT for writing and Giant Magneto-Resistance or Tunnel Magneto-Resistance for reading. The second generation of MRAMs, based on spin torque writing (the STT-RAM), is under industrial development and should reach the market within three years. But spin torque devices are not limited to binary memories. We will briefly present how the spin torque effect also makes it possible to implement non-linear nano-oscillators, spin-wave emitters, controlled stochastic devices and microwave nano-detectors. What is extremely interesting is that all these functionalities can be obtained using the same materials, the exact same stack, simply by changing the device geometry and its bias conditions. So these different devices can be seen as Lego bricks, each brick with its own functionality. During this talk, I will show how spin torque can be engineered to build new bricks, such as the Spintronic Memristor, an artificial magnetic nano-synapse. I will then give hints on how to assemble these bricks in order to build novel types of computing architectures, with a special focus on neuromorphic circuits. Financial support by the European Research Council Starting Grant NanoBrain (ERC 2010 Stg 259068) is acknowledged.
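
    As a rough illustration of what a memristive nano-synapse offers a neuromorphic circuit, the device behaves as a bounded, incrementally programmable conductance. The following toy model is a sketch of that behaviour only, not a physical model of the spintronic memristor:

      # Toy synapse: the weight is a conductance bounded in [g_min, g_max];
      # programming pulses nudge it up or down. Illustrative only.
      class MemristiveSynapse:
          def __init__(self, g_min=0.1, g_max=1.0, step=0.05):
              self.g_min, self.g_max, self.step = g_min, g_max, step
              self.g = 0.5 * (g_min + g_max)    # start mid-range

          def pulse(self, potentiate):
              delta = self.step if potentiate else -self.step
              self.g = min(self.g_max, max(self.g_min, self.g + delta))

          def read_current(self, voltage):
              return self.g * voltage           # Ohmic read at small bias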

  9. Programming methodology and performance issues for advanced computer architectures. [Linear algebra

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sorensen, D.C.; Connolly, K.; Patterson, J.

    1987-01-01

    This paper will describe some recent attempts to construct transportable numerical software for high-performance computers. Restructuring algorithms in terms of simple linear algebra modules is reviewed. This technique has proved very successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The use of modules to encapsulate parallelism and reduce the ratio of data movement to floating point operations has been demonstrably effective for regular problems such as those found in dense linear algebra. In other situations it may be necessary to express explicitly parallel algorithms. We also present a programming methodology that is useful for constructing new parallel algorithms which require sophisticated synchronization at a large-grain level. We describe the SCHEDULE package, which provides an environment for developing and analyzing explicitly parallel programs in Fortran which are portable. This package now includes a preprocessor to achieve complete portability of user-level code and also a graphics post-processor for performance analysis and debugging. We discuss details of porting both the SCHEDULE package and user code. Examples from linear algebra and partial differential equations are used to illustrate the utility of this approach.
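
    The module-based restructuring can be sketched in a few lines: organizing a computation around a small dense kernel concentrates floating point operations per unit of data movement. A minimal illustration, with numpy's matmul standing in for a tuned vendor kernel (this is not the SCHEDULE package itself):

      import numpy as np

      # Blocked matrix multiply built on one simple module (dense matmul).
      # Each block update does O(blk^3) flops on O(blk^2) data.
      def blocked_matmul(a, b, blk=64):
          n = a.shape[0]                        # assumes square, n % blk == 0
          c = np.zeros((n, n))
          for i in range(0, n, blk):
              for j in range(0, n, blk):
                  for k in range(0, n, blk):
                      c[i:i+blk, j:j+blk] += (a[i:i+blk, k:k+blk]
                                              @ b[k:k+blk, j:j+blk])
          return c

      a, b = np.random.rand(256, 256), np.random.rand(256, 256)
      assert np.allclose(blocked_matmul(a, b), a @ b)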

  10. CITAstudio: Computation in Architecture 2015

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ayres, Phil

    2016-01-01

    CITAstudio yearbook. CITAstudio: Computation in Architecture is a two year International Master's Programme at The Royal Danish Academy of Fine Arts, School of Architecture. With a focus on digital design and material fabrication the programme questions how computation is changing our spatial, representational and material cultures. Through hands-on experimentation and production the programme emphasises learning-through-doing as a principal method for exploring computation as a means to pursue speculative design, experimental fabrication, material actuation and complex modelling.

  11. Digital design and computer architecture

    CERN Document Server

    Harris, David

    2010-01-01

    Digital Design and Computer Architecture is designed for courses that combine digital logic design with computer organization/architecture or that teach these subjects as a two-course sequence. Digital Design and Computer Architecture begins with a modern approach by rigorously covering the fundamentals of digital logic design and then introducing Hardware Description Languages (HDLs). Featuring examples of the two most widely-used HDLs, VHDL and Verilog, the first half of the text prepares the reader for what follows in the second: the design of a MIPS Processor. By the end of D

  12. Advanced image memory architecture

    Science.gov (United States)

    Vercillo, Richard; McNeill, Kevin M.

    1994-05-01

    A workstation for radiographic images, known as the Arizona Viewing Console (AVC), was developed at the University of Arizona Health Sciences Center in the Department of Radiology. This workstation has been in use as a research tool to aid us in investigating how a radiologist interacts with a workstation, to determine which image processing features are required to aid the radiologist, to develop user interfaces and to support psychophysical and clinical studies. Results from these studies have shown a need to increase the current image memory's available storage in order to accommodate high-resolution images. The current triple-ported image memory can be allocated to store any number of images up to a combined total of 4 million pixels. Over the past couple of years, higher-resolution images have become easier to generate with the advent of laser digitizers and computed radiography systems. As part of our research, a larger 32-million-pixel image memory for AVC has been designed to replace the existing image memory.
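
    The capacity argument is simple pixel arithmetic; a quick check under assumed image sizes (illustrative, not the AVC specification):

      # How many images fit in a pixel-addressed image memory?
      mem_pixels = 32_000_000            # upgraded memory, in pixels
      for w, h in ((1024, 1024), (2048, 2048), (4096, 4096)):
          print(f"{w}x{h}: {mem_pixels // (w * h)} images")
      # 1024x1024 -> 30, 2048x2048 -> 7, 4096x4096 -> 1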

  13. Computing architecture for autonomous microgrids

    Science.gov (United States)

    Goldsmith, Steven Y.

    2015-09-29

    A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.
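
    A hypothetical sketch of the agent-per-entity idea (names and protocol invented for illustration; the record publishes no code): each agent reports the power balance of its entity, and a coordination step steers storage to absorb the mismatch.

      # One agent per microgrid entity; positive power = source, negative = load.
      class Agent:
          def __init__(self, name, power):
              self.name, self.power = name, power

      def balance_step(agents, storage):
          net = sum(a.power for a in agents)    # surplus (+) or deficit (-)
          storage.power -= net                  # storage absorbs the mismatch
          return net

      agents = [Agent("pv", 4.0), Agent("load1", -2.5), Agent("load2", -1.0)]
      battery = Agent("battery", 0.0)
      print("net:", balance_step(agents, battery), "battery:", battery.power)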

  14. Computer organization, design, and architecture

    CERN Document Server

    Shiva, Sajjan G

    2007-01-01

    Suitable for a one- or two-semester undergraduate or beginning graduate course in computer science and computer engineering, Computer Organization, Design, and Architecture, Fourth Edition presents the operating principles, capabilities, and limitations of digital computers to enable development of complex yet efficient systems. With 40% updated material and four new chapters, this edition takes students through a solid, up-to-date exploration of single- and multiple-processor systems, embedded architectures, and performance evaluation. New to the Fourth Edition Additional material that cove

  15. Cloud computing for enterprise architectures

    CERN Document Server

    Mahmood, Zaigham

    2011-01-01

    This important text provides a single point of reference for state-of-the-art cloud computing design and implementation techniques. The book examines cloud computing from the perspective of enterprise architecture, asking the question: how do we realize new business potential with our existing enterprises? Its topics and features are: with a Foreword by Thomas Erl; contains contributions from an international selection of preeminent experts; presents the state-of-the-art in enterprise architecture approaches with respect to cloud computing models, frameworks, technologies, and applications; di

  16. Programmable architecture for quantum computing

    NARCIS (Netherlands)

    Chen, J.; Wang, L.; Charbon, E.; Wang, B.

    2013-01-01

    A programmable architecture called “quantum FPGA (field-programmable gate array)” (QFPGA) is presented for quantum computing, which is a hybrid model combining the advantages of the qubus system and the measurement-based quantum computation. There are two kinds of buses in QFPGA, the local bus and t

  17. Savannah River Site computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    1991-03-29

    A computing architecture is a framework for making decisions about the implementation of computer technology and the supporting infrastructure. Because of the size, diversity, and amount of resources dedicated to computing at the Savannah River Site (SRS), there must be an overall strategic plan that can be followed by the thousands of site personnel who make decisions daily that directly affect the SRS computing environment and impact the site's production and business systems. This plan must address the following requirements: There must be SRS-wide standards for procurement or development of computing systems (hardware and software). The site computing organizations must develop systems that end users find easy to use. Systems must be put in place to support the primary function of site information workers. The developers of computer systems must be given tools that automate and speed up the development of information systems and applications based on computer technology. This document describes a proposal for a site-wide computing architecture that addresses the above requirements. In summary, this architecture is standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship.

  18. Computer Architecture: A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2011-01-01

    The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im

  1. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries, such as Los Alamos, CERN, and the Rutherford Laboratory, but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  2. Advances in unconventional computing

    CERN Document Server

    2017-01-01

    Unconventional computing is a niche for interdisciplinary science, a cross-breeding of computer science, physics, mathematics, chemistry, electronic engineering, biology, material science and nanotechnology. The aims of this book are to uncover and exploit principles and mechanisms of information processing in, and functional properties of, physical, chemical and living systems, in order to develop efficient algorithms, design optimal architectures and manufacture working prototypes of future and emergent computing devices. This first volume presents theoretical foundations of the future and emergent computing paradigms and architectures. The topics covered are computability, (non-)universality and complexity of computation; physics of computation, analog and quantum computing; reversible and asynchronous devices; cellular automata and other mathematical machines; P-systems and cellular computing; infinity and spatial computation; chemical and reservoir computing. The book is the encyclopedia, the first ever complete autho...

  3. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.
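
    The idea can be illustrated with a small number-theoretic transform over the Fermat prime F4 = 2^16 + 1, where all arithmetic is exact and integer-only (in hardware, multiplications by powers of two become shifts). A sketch of the O(N^2) form, for illustration rather than a VLSI design:

      # Length-8 number-theoretic transform modulo the Fermat prime F4.
      P = 2**16 + 1                        # 65537; 3 is a primitive root mod P
      N = 8
      OMEGA = pow(3, (P - 1) // N, P)      # primitive N-th root of unity mod P

      def ntt(x, w=OMEGA):
          return [sum(xj * pow(w, i * j, P) for j, xj in enumerate(x)) % P
                  for i in range(N)]

      def intt(y):
          w_inv = pow(OMEGA, -1, P)        # modular inverse (Python 3.8+)
          n_inv = pow(N, -1, P)
          return [(n_inv * v) % P for v in ntt(y, w_inv)]

      x = [1, 2, 3, 4, 0, 0, 0, 0]
      assert intt(ntt(x)) == x             # exact round trip, no rounding error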

  4. Fault Tolerant Computer Architecture

    CERN Document Server

    Sorin, Daniel

    2009-01-01

    For many years, most computer architects have pursued one primary goal: performance. Architects have translated the ever-increasing abundance of ever-faster transistors provided by Moore's law into remarkable increases in performance. Recently, however, the bounty provided by Moore's law has been accompanied by several challenges that have arisen as devices have become smaller, including a decrease in dependability due to physical faults. In this book, we focus on the dependability challenge and the fault tolerance solutions that architects are developing to overcome it. The two main purposes

  5. Security Architecture of Cloud Computing

    Directory of Open Access Journals (Sweden)

    V.KRISHNA REDDY

    2011-09-01

    Full Text Available Cloud computing offers services over the internet with dynamically scalable resources. Cloud computing services provide benefits to users in terms of cost and ease of use. Cloud computing services need to address security during the transmission of sensitive data and critical applications to shared and public cloud environments. Cloud environments are scaling up to meet data processing and storage needs. Cloud computing environments have various advantages as well as disadvantages for the data security of service consumers. This paper aims to emphasize the main security issues existing in cloud computing environments. The security issues at various levels of the cloud computing environment are identified and categorized based on the cloud computing architecture. The paper focuses on the usage of cloud services and the security issues involved in building cross-domain, Internet-connected collaborations.

  6. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  7. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with the implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.
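
    A toy example shows why predictability matters for the analysis: a static WCET bound sums worst-case latencies along the longest path, so every architectural feature with a wide best/worst gap inflates the bound. The latencies below are invented for illustration:

      # Static WCET bound for a toy loop: sum worst-case instruction latencies.
      worst_case = {"load": 10, "add": 1, "branch": 3}   # cycles, cache miss assumed
      best_case  = {"load": 1,  "add": 1, "branch": 1}

      loop_body = ["load", "add", "add", "branch"]
      iterations = 100
      wcet = iterations * sum(worst_case[op] for op in loop_body)
      bcet = iterations * sum(best_case[op] for op in loop_body)
      print(f"WCET bound: {wcet} cycles; best case: {bcet} cycles")  # 1500 vs 400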

  8. Computer Architecture Performance Evaluation Methods

    CERN Document Server

    Eeckhout, Lieven

    2010-01-01

    Performance evaluation is at the foundation of computer architecture research and development. Contemporary microprocessors are so complex that architects cannot design systems based on intuition and simple models only. Adequate performance evaluation methods are absolutely crucial to steer the research and development process in the right direction. However, rigorous performance evaluation is non-trivial as there are multiple aspects to performance evaluation, such as picking workloads, selecting an appropriate modeling or simulation approach, running the model and interpreting the results usi

  9. Computer architecture fundamentals and principles of computer design

    CERN Document Server

    Dumas II, Joseph D

    2005-01-01

    Introduction to Computer Architecture / What is Computer Architecture? / Architecture vs. Implementation / Brief History of Computer Systems / The First Generation / The Second Generation / The Third Generation / The Fourth Generation / Modern Computers - The Fifth Generation / Types of Computer Systems / Single Processor Systems / Parallel Processing Systems / Special Architectures / Quality of Computer Systems / Generality and Applicability / Ease of Use / Expandability / Compatibility / Reliability / Success and Failure of Computer Architectures and Implementations / Quality and the Perception of Quality / Cost Issues / Architectural Openness, Market Timi

  10. Computer organization, design, and architecture

    CERN Document Server

    Shiva, Sajjan G

    2013-01-01

    Introduction / Computer System Organization / Computer Evolution / Organization versus Design versus Architecture / Summary / Problems / Bibliography / Number Systems and Codes / Number Systems / Conversion / Arithmetic / Sign-Magnitude System / Complement Number System / Floating-Point Numbers / Binary Codes / Data Storage and Register Transfer / Representation of Numbers, Arrays, and Records / Summary / Problems / Bibliography / Combinational Logic / Basic Operations and Terminology / Boolean Algebra (Switching Algebra) / Minimization of Boolean Functions / Primitive Hardware Blocks / Functional Analysis of Combinational Circuits / Synthesis of Combinational Circuits / S

  11. Geometric Computing for Freeform Architecture

    KAUST Repository

    Wallner, J.

    2011-06-03

    Geometric computing has recently found a new field of applications, namely the various geometric problems which lie at the heart of rationalization and construction-aware design processes of freeform architecture. We report on our work in this area, dealing with meshes with planar faces and meshes which allow multilayer constructions (which is related to discrete surfaces and their curvatures), triangle meshes with circle-packing properties (which is related to conformal uniformization), and with the paneling problem. We emphasize the combination of numerical optimization and geometric knowledge.
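
    One of the simplest such computations is the planarity measure that planar-quad-mesh optimizers drive to zero: a quad is planar exactly when its two diagonals intersect, i.e. when the distance between the diagonal lines vanishes. A sketch of that measure (not the authors' code):

      import numpy as np

      # Distance between the two diagonals of a quad p0-p1-p2-p3;
      # zero iff the four vertices are coplanar.
      def quad_planarity(p0, p1, p2, p3):
          d1, d2 = p2 - p0, p3 - p1              # diagonal directions
          n = np.cross(d1, d2)
          norm = np.linalg.norm(n)
          if norm < 1e-12:                       # parallel diagonals: degenerate
              return 0.0
          return abs(np.dot(p1 - p0, n)) / norm

      quad = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (1, 1, 0.2), (0, 1, 0)]]
      print(f"diagonal distance: {quad_planarity(*quad):.4f}")   # ~0.099, not planar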

  12. Application of advanced electronics to a future spacecraft computer design

    Science.gov (United States)

    Carney, P. C.

    1980-01-01

    Advancements in hardware and software technology are summarized with specific emphasis on spacecraft computer capabilities. Available state of the art technology is reviewed and candidate architectures are defined.

  13. Advanced Computer Typography.

    Science.gov (United States)

    1981-12-01

    Advanced Computer Typography, by A. V. Hershey. Naval Postgraduate School, Monterey, California. Final report, Dec 1979 - Dec 1981 (report number NPS012-81-005).

  14. 4th Conference on Advances in architectural geometry 2014

    CERN Document Server

    Knippers, Jan; Mitra, Niloy; Wang, Wenping

    2015-01-01

    This book contains 24 technical papers presented at the fourth edition of the Advances in Architectural Geometry conference, AAG 2014, held in London, England, September 2014. It offers engineers, mathematicians, designers, and contractors insight into the efficient design, analysis, and manufacture of complex shapes, which will help open up new horizons for architecture. The book examines geometric aspects involved in architectural design, ranging from initial conception to final fabrication. It focuses on four key topics: applied geometry, architecture, computational design, and also practice in the form of case studies. In addition, the book also features algorithms, proposed implementation, experimental results, and illustrations. Overall, the book presents both theoretical and practical work linked to new geometrical developments in architecture. It gathers the diverse components of the contemporary architectural tendencies that push the building envelope towards free form in order to respond to multiple...

  15. Advanced customization in architectural design and construction

    CERN Document Server

    Naboni, Roberto

    2015-01-01

    This book presents the state of the art in advanced customization within the sector of architectural design and construction, explaining important new technologies that are boosting design, product and process innovation and identifying the challenges to be confronted as we move toward a mass customization construction industry. Advanced machinery and software integration are discussed, as well as an overview of the manufacturing techniques offered through digital methods that are acquiring particular significance within the field of digital architecture. CNC machining, Robotic Fabrication, and Additive Manufacturing processes are all clearly explained, highlighting their ability to produce personalized architectural forms and unique construction components. Cutting-edge case studies in digitally fabricated architectural realizations are described and, looking towards the future, a new model of 100% customized architecture for design and construction is presented. The book is an excellent guide to the profoun...

  16. LUCA: Lightweight Ubiquitous Computing Architecture

    Institute of Scientific and Technical Information of China (English)

    SUN Dao-qing; CAO Qi-ying

    2009-01-01

    A lightweight ubiquitous computing security architecture is presented. Many of our recent research results have been integrated into this architecture, and the main current research in related areas has also been absorbed. The main aim of this paper is to provide a compact and realizable method for applying ubiquitous computing to our daily lives under sufficient security guarantees. Finally, a personal intelligent assistant system is presented to show that this architecture is a suitable and realizable security mechanism for solving ubiquitous computing problems.

  17. Advances in Computer Entertainment.

    NARCIS (Netherlands)

    Nijholt, Antinus; Romão, T.; Reidsma, Dennis; Unknown, [Unknown

    2012-01-01

    These are the proceedings of the 9th International Conference on Advances in Computer Entertainment (ACE 2012). ACE has become the leading scientific forum for dissemination of cutting-edge research results in the area of entertainment computing. Interactive entertainment is one of the most vibrant

  18. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2014-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities in topological optimization: interactive control and continuous visualization; embedding flexible voids within the design space; consideration of distinct tension / compression properties; and optimization of dual material systems. In extension, optimization procedures for skeletal structures such as trusses and frames are implemented. The developed procedures allow for the exploration of new territories in optimization of architectural structures, and offer new methodological strategies for bridging conceptual gaps between optimization and architectural practice.

  19. Brain architecture: A design for natural computation

    CERN Document Server

    Kaiser, Marcus

    2008-01-01

    Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented, which are still in use today. In those days, the organisation of computers was based on concepts of brain organisation. Here, we give an update on current results on the global organisation of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.

  1. Architectural Advancements in RELAP5-3D

    Energy Technology Data Exchange (ETDEWEB)

    Dr. George L. Mesina

    2005-11-01

    As both the computer industry and the field of nuclear science and engineering move forward, there is a need to improve the computing tools used in the nuclear industry to keep pace with these changes. By increasing the capability of the codes, the growing modeling needs of nuclear plant analysis will be met and advantage can be taken of more powerful computer languages and architectures. In the past eighteen months, improvements have been made to RELAP5-3D [1] for these reasons. These architectural advances include code restructuring, conversion to Fortran 90, high performance computing upgrades, and rewriting of the RELAP5 Graphical User Interface (RGUI) [2] and XMGR5 [3] in Java. These architectural changes will extend the lifetime of RELAP5-3D, reduce the costs of development and maintenance, and improve its speed and reliability.

  2. Computer programming and architecture the VAX

    CERN Document Server

    Levy, Henry

    2014-01-01

    Takes a unique systems approach to programming and architecture of the VAX. Using the VAX as a detailed example, the first half of this book offers a complete course in assembly language programming. The second describes higher-level systems issues in computer architecture. Highlights include the VAX assembler and debugger, other modern architectures such as RISCs, multiprocessing and parallel computing, microprogramming, caches and translation buffers, and an appendix on the Berkeley UNIX assembler.

  3. Memristor-based nanoelectronic computing circuits and architectures

    CERN Document Server

    Vourkas, Ioannis

    2016-01-01

    This book considers the design and development of nanoelectronic computing circuits, systems and architectures focusing particularly on memristors, which represent one of today’s latest technology breakthroughs in nanoelectronics. The book studies, explores, and addresses the related challenges and proposes solutions for the smooth transition from conventional circuit technologies to emerging computing memristive nanotechnologies. Its content spans from fundamental device modeling to emerging storage system architectures and novel circuit design methodologies, targeting advanced non-conventional analog/digital massively parallel computational structures. Several new results on memristor modeling, memristive interconnections, logic circuit design, memory circuit architectures, computer arithmetic systems, simulation software tools, and applications of memristors in computing are presented. High-density memristive data storage combined with memristive circuit-design paradigms and computational tools applied t...

  4. Advances in computers

    CERN Document Server

    Memon, Atif

    2012-01-01

    Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field. In-depth surveys and tutorials on new computer technology. Well-known authors and researchers in the field. Extensive bibliographies with m

  5. Advances in Computers

    CERN Document Server

    Zelkowitz, Marvin

    2010-01-01

    This is volume 79 of Advances in Computers. This series, which began publication in 1960, is the oldest continuously published anthology that chronicles the ever-changing information technology field. In these volumes we publish from 5 to 7 chapters, three times per year, that cover the latest changes to the design, development, use and implications of computer technology on society today. Covers the full breadth of innovations in hardware, software, theory, design, and applications. Many of the in-depth reviews have become standard references that co

  6. Advanced control architecture for autonomous vehicles

    Science.gov (United States)

    Maurer, Markus; Dickmanns, Ernst D.

    1997-06-01

    An advanced control architecture for autonomous vehicles is presented. The hierarchical architecture consists of four levels: a vehicle level, a control level, a rule-based level and a knowledge-based level. A special focus is on forms of internal representation, which have to be chosen adequately for each level. The control scheme is applied to VaMP, a Mercedes passenger car which autonomously performs missions on German freeways. VaMP perceives the environment with its sense of vision and conventional sensors. It controls its actuators for locomotion and attention focusing. Modules for perception, cognition and action are discussed.

  7. Computational Architecture For Control Of Remote Manipulator

    Science.gov (United States)

    Szakaly, Zoltan F.

    1989-01-01

    Synchronization is done by hardware to reduce software overhead. Computing resources are located at both the master-arm node and the slave-arm node. This architecture provides for effective control while reducing the computational burden on the host computer and reducing and balancing the load on the communication channel.

  8. Lightweight Service Oriented Architecture for Pervasive Computing

    CERN Document Server

    Tigli, Jean-Yves; Rey, Gaetan; Hourdin, Vincent; Riveill, Michel

    2011-01-01

    Pervasive computing appears as a new computing era based on networks of objects and devices evolving in the real world, radically different from distributed computing, which is based on networks of computers and data stores. Contrary to most context-aware approaches, we work on the assumption that pervasive software must be able to deal with a dynamic software environment before processing contextual data. After demonstrating that SOA (Service Oriented Architecture) and its numerous principles are well adapted to pervasive computing, we present our extended SOA model for pervasive computing, called Service Lightweight Component Architecture (SLCA). SLCA adds various principles to meet pervasive software constraints completely: a software infrastructure based on services for devices, local orchestrations based on a lightweight component architecture, and encapsulation of those orchestrations into composite services to address distributed composition of services. We present a sample application of t...

  9. Advanced connection systems for architectural glazing

    CERN Document Server

    Afghani Khoraskani, Roham

    2015-01-01

    This book presents the findings of a detailed study to explore the behavior of architectural glazing systems during and after an earthquake and to develop design proposals that will mitigate or even eliminate the damage inflicted on these systems. The seismic behavior of common types of architectural glazing systems are investigated and causes of damage to each system, identified. Furthermore, depending on the geometrical and structural characteristics, the ultimate horizontal load capacity of glass curtain wall systems is defined based on the stability of the glass components. Detailed attention is devoted to the incorporation of advanced connection devices between the structure of the building and the building envelope system in order to minimize the damage to glazed components. An innovative new connection device is introduced that results in a delicate and functional system easily incorporated into different architectural glazing systems, including those demanding maximum transparency.

  10. Advanced pixel architectures for scientific image sensors

    CERN Document Server

    Coath, R; Godbeer, A; Wilson, M; Turchetta, R

    2009-01-01

    We present recent developments from two projects targeting advanced pixel architectures for scientific applications. Results are reported from FORTIS, a sensor demonstrating variants on a 4T pixel architecture. The variants include differences in pixel and diode size, the in-pixel source follower transistor size and the capacitance of the readout node to optimise for low noise and sensitivity to small amounts of charge. Results are also reported from TPAC, a complex pixel architecture with ~160 transistors per pixel. Both sensors were manufactured in the 0.18μm INMAPS process, which includes a special deep p-well layer and fabrication on a high resistivity epitaxial layer for improved charge collection efficiency.

  11. Field-programmable custom computing technology architectures, tools, and applications

    CERN Document Server

    Luk, Wayne; Pocek, Ken

    2000-01-01

    Field-Programmable Custom Computing Technology: Architectures, Tools, and Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. In seven selected chapters, the book describes the latest advances in architectures, design methods, and applications of field-programmable devices for high-performance reconfigurable systems. The contributors to this work were selected from the leading researchers and practitioners in the field. It will be valuable to anyone working or researching in the field of custom computing technology. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.

  12. Fundamentals of computer architecture and design

    CERN Document Server

    Bindal, Ahmet

    2017-01-01

    This textbook provides semester-length coverage of computer architecture and design, providing a strong foundation for students to understand modern computer system architecture and to apply these insights and principles to future computer designs. It is based on the author's decades of industrial experience with computer architecture and design, as well as with teaching students focused on pursuing careers in computer engineering. Unlike a number of existing textbooks for this course, this one focuses not only on CPU architecture, but also covers in great detail system buses, peripherals and memories. This book teaches every element in a computing system in two steps. First, it introduces the functionality of each topic (and subtopics) and then goes into "from-scratch design" of a particular digital block from its architectural specifications using timing diagrams. The author describes how the data-path of a certain digital block is generated using timing diagrams, a method which most textbo...

  13. PRCA: A highly efficient computing architecture

    Institute of Scientific and Technical Information of China (English)

    Luo Xingguo

    2014-01-01

    Applications can only reach 8%~15% utilization on modern computer systems. There are many obstacles to improving system efficiency. The root cause is the conflict between the fixed general computer architecture and the variable requirements of applications. Proactive reconfigurable computing architecture (PRCA) is proposed to improve computing efficiency. PRCA dynamically constructs an efficient computing architecture for a specific application via reconfigurable technology by perceiving the requirements, workload and utilization of computing resources. Proactive decision support system (PDSS), hybrid reconfigurable computing array (HRCA) and reconfigurable interconnect (RIC) are intensively researched as the key technologies. The principles of PRCA have been verified with four applications on a test bed. It is shown that PRCA is feasible and highly efficient.

  14. Formal Protection Architecture for Cloud Computing System

    Institute of Scientific and Technical Information of China (English)

    Yasha Chen; Jianpeng Zhao; Junmao Zhu; Fei Yan

    2014-01-01

    Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, the paper avoids heavy formalism and adopts the process algebra Communicating Sequential Processes.

  15. Quantum computation architecture using optical tweezers

    DEFF Research Database (Denmark)

    Weitenberg, Christof; Kuhr, Stefan; Mølmer, Klaus;

    2011-01-01

    We present a complete architecture for scalable quantum computation with ultracold atoms in optical lattices using optical tweezers focused to the size of a lattice spacing. We discuss three different two-qubit gates based on local collisional interactions. The gates between arbitrary qubits ... quantum computing.

  16. Monte Carlo simulations on SIMD computer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Burmester, C.P.; Gronsky, R. [Lawrence Berkeley Lab., CA (United States); Wille, L.T. [Florida Atlantic Univ., Boca Raton, FL (United States). Dept. of Physics

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
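
    The geometric (lattice-partitioning) approach can be sketched with a checkerboard Metropolis update: sites of one colour share no bonds, so a SIMD machine may update them all simultaneously. Below, numpy arrays stand in for the processor array; the parameters are illustrative, not those of the paper:

      import numpy as np

      rng = np.random.default_rng(1)

      # One checkerboard Metropolis sweep of the 2D nearest-neighbour Ising model.
      def sweep(spins, beta):
          ii, jj = np.indices(spins.shape)
          for colour in (0, 1):
              mask = (ii + jj) % 2 == colour
              nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                     np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
              dE = 2.0 * spins * nbr             # energy cost of flipping each site
              flip = mask & (rng.random(spins.shape) < np.exp(-beta * dE))
              spins[flip] *= -1

      spins = rng.choice([-1, 1], size=(64, 64))
      for _ in range(200):
          sweep(spins, beta=0.6)                 # below T_c: domains order
      print("magnetisation:", abs(spins.mean()))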

  17. New computer architectures as tools for ecological thought.

    Science.gov (United States)

    Villa, F

    1992-06-01

    Recent achievements of computer science provide unrivaled power for the advancement of ecology. This power is not merely computational: parallel computers, having hierarchical organization as their architectural principle, also provide metaphors for understanding complex systems. In this sense they might play, for a science of ecological complexity, a role like the one equilibrium-based metaphors played in the development of dynamic systems ecology. Parallel computers provide this opportunity through an informational view of ecological reality and multilevel modelling paradigms. Spatial and individual-oriented models allow application and full understanding of the new metaphors in the ecological context.

  18. Layered Architecture for Quantum Computing

    National Research Council Canada - National Science Library

    Jones, N. Cody; Van Meter, Rodney; Fowler, Austin G; McMahon, Peter L; Kim, Jungsang; Ladd, Thaddeus D; Yamamoto, Yoshihisa

    2012-01-01

    ... We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction...

  19. Architecture, systems research and computational sciences

    CERN Document Server

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics”, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  1. A New System Architecture for Pervasive Computing

    CERN Document Server

    Ismail, Anis; Ismail, Ziad

    2011-01-01

    We present a new system architecture, a distributed framework designed to support pervasive computing applications. We propose an architecture consisting of a search engine and peripheral clients that addresses issues in scalability, data sharing, data transformation and inherent platform heterogeneity. Key features of our application are a type-aware data transport that is capable of extracting data and presenting it through handheld devices (PDAs (personal digital assistants), mobiles, etc.). Pervasive computing uses web technology, portable devices, wireless communications and nomadic or ubiquitous computing systems. The web, and the simple standard HTTP protocol that it is based on, facilitate this kind of ubiquitous access. This can be implemented on a variety of devices - PDAs, laptops, information appliances such as digital cameras and printers. Mobile users get transparent access to resources outside their current environment. We discuss our system's architecture and its implementation. Through experimental...

  2. Architecture Design & Network Application of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Mehzabul Hoque Nahid

    2015-08-01

    Full Text Available “Cloud” computing, a comparatively recent term, builds on decades of research and analysis in virtualization, distributed computing, utility computing and, more recently, computer networking, web technology and software services. Cloud computing represents a shift away from computing as a product that is purchased, to computing as a service that is delivered to consumers over the internet from large-scale data centers – or “clouds”. Whilst cloud computing is gaining popularity in the IT industry, academia appears to be lagging behind the developments in this field. Cloud computing also implies a service-oriented architecture, reduced information technology overhead for the end-user, good flexibility, reduced total cost of private ownership, on-demand services and many other things. This paper discusses the concept of “cloud” computing, some of the issues it tries to address, related research topics, and a “cloud” implementation available today.

  3. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
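
    The vector idea can be caricatured in a few lines: apply one physics model to a whole array of tracks at once so SIMD lanes stay full. The energy-loss model below is schematic (a constant dE/dx, invented for illustration), not GeantV's physics; numpy stands in for explicit vector code:

      import numpy as np

      # Propagate a "basket" of tracks through one step with a uniform dE/dx.
      def step_energy_loss(energies_mev, step_cm, dedx_mev_per_cm=2.0):
          loss = dedx_mev_per_cm * step_cm       # same model for every track
          return np.maximum(energies_mev - loss, 0.0)

      tracks = np.random.default_rng(0).uniform(1.0, 100.0, size=1024)
      tracks = step_energy_loss(tracks, step_cm=0.5)   # 1024 tracks, one call
      print(tracks[:4])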

  4. A NEW SYSTEM ARCHITECTURE FOR PERVASIVE COMPUTING

    Directory of Open Access Journals (Sweden)

    Anis ISMAIL

    2011-08-01

    Full Text Available We present a new system architecture, a distributed framework designed to support pervasive computing applications. We propose an architecture consisting of a search engine and peripheral clients that addresses issues in scalability, data sharing, data transformation and inherent platform heterogeneity. Key features of our application are a type-aware data transport that is capable of extracting data and presenting it through handheld devices (PDAs (personal digital assistants), mobiles, etc.). Pervasive computing uses web technology, portable devices, wireless communications and nomadic or ubiquitous computing systems. The web, and the simple standard HTTP protocol that it is based on, facilitate this kind of ubiquitous access. This can be implemented on a variety of devices - PDAs, laptops, information appliances such as digital cameras and printers. Mobile users get transparent access to resources outside their current environment. We discuss our system's architecture and its implementation. Through experimental study, we show reasonable performance and adaptation of our system's implementation for mobile devices.

  5. CAAD as Computer-Activated Architectural Design

    DEFF Research Database (Denmark)

    Galle, Per

    1998-01-01

    In a brief sketch, drawing on a general philosophical conception of human interaction with the world, the architectural design process is analysed in terms of two kinds of human action: interpretation and production. Both of these are seen as establishing a link between mental and material entities. On this background two alternative roles of computers in computer-aided architectural design (CAAD) are distinguished: a passive and a more active role, where in the latter case the computer’s capacity for symbol manipulation is utilized to influence design thinking actively. The analysis offered in this paper may serve at least two purposes: to provide a conceptual machinery for research and reflection on CAAD, and to clarify the notion of ‘artificial intelligence’ in the light of architectural design.

  6. Digital architecture, wearable computers and providing affinity

    DEFF Research Database (Denmark)

    Guglielmi, Michel; Johannesen, Hanne Louise

    2005-01-01

    The project will, through research, a workshop and participation in a Cumulus competition, focus on the exploration of boundaries between digital architecture, performative space and wearable computers. Our design method in general focuses on the interplay between the performing body and the environment – between…

  7. Portable computer system architecture for the Space Station Freedom program

    Science.gov (United States)

    Alena, Richard; Liu, Yuan-Kwei; Fernquist, Alan R.

    1993-01-01

    This paper outlines various mission requirements and technical approaches that support the potential use of portable computers in several defined activities within the Space Station Freedom (SSF) program. Specifically, the use of portable computers as consoles for both spacecraft control and payload applications is presented. Various issues and proposed solutions regarding the incorporation of portable computers within the program are presented. The primary issues presented regard architecture (standard interface for expansion, advanced processors and displays), integration (methods of high-speed data communication, peripheral interfaces, and interconnectivity within various support networks), and evolution (wireless communications and multimedia data interface methods).

  8. A Dualistic Model To Describe Computer Architectures

    Science.gov (United States)

    Nitezki, Peter; Engel, Michael

    1985-07-01

    The Dualistic Model for Computer Architecture Description uses a hierarchy of abstraction levels to describe a computer in arbitrary steps of refinement from the top of the user interface to the bottom of the gate level. In our Dualistic Model the description of an architecture may be divided into two major parts called "Concept" and "Realization". The Concept of an architecture on each level of the hierarchy is an Abstract Data Type that describes the functionality of the computer and an implementation of that data type relative to the data type of the next lower level of abstraction. The Realization on each level comprises a language describing the means of user interaction with the machine, and a processor interpreting this language in terms of the language of the lower level. The surface of each hierarchical level, the data type and the language, expresses the behaviour of a machine at this level, whereas the implementation and the processor describe the structure of the algorithms and the system. In this model the Principle of Operation maps the object and computational structure of the Concept onto the structures of the Realization. Describing a system in terms of the Dualistic Model is therefore a process of refinement starting at a mere description of behaviour and ending at a description of structure. This model has proven to be a very valuable tool in exploiting the parallelism in a problem and it is very transparent in discovering the points where parallelism is lost in a particular architecture. It has successfully been used in a project on a survey of Computer Architecture for Image Processing and Pattern Analysis in Germany.

  9. Efficient Architectural Framework for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Souvik Pal

    2012-06-01

    Full Text Available Cloud computing is a model that enables adaptive, convenient and on-demand network access to a collective pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible supervision effort or service provider interaction. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have brought virtualization technology into the era of high performance computing. However, clouds are an Internet-based concept and try to disguise complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.

  10. Smart SOA platforms in cloud computing architectures

    CERN Document Server

    Exposito, Ernesto

    2014-01-01

    This book is intended to introduce the principles of the Event-Driven and Service-Oriented Architecture (SOA 2.0) and its role in the new interconnected world based on the cloud computing architecture paradigm. In this new context, the concept of "service" is widely applied to the hardware and software resources available in the new generation of the Internet. The authors focus on how current and future SOA technologies provide the basis for the smart management of the service model provided by the Platform as a Service (PaaS) layer.

  11. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming (MILP), which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms …
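
    As a flavor of what such a formulation looks like, the sketch below poses a toy data-center placement problem as a MILP in Python using the open-source PuLP modeling library; the jobs, servers, capacities and profits are invented for illustration and are not taken from the book.

      # Toy data-center placement as a MILP with the PuLP library
      # (hypothetical jobs, servers, capacities and profits).
      from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

      jobs = {"j1": {"cpu": 4, "profit": 10},
              "j2": {"cpu": 2, "profit": 6},
              "j3": {"cpu": 3, "profit": 7}}
      servers = {"s1": 5, "s2": 4}      # CPU capacity per server

      # Binary decision variable: x[j, s] == 1 iff job j is placed on server s.
      x = LpVariable.dicts("x", [(j, s) for j in jobs for s in servers], cat=LpBinary)

      prob = LpProblem("placement", LpMaximize)
      prob += lpSum(jobs[j]["profit"] * x[j, s] for j in jobs for s in servers)
      for j in jobs:                    # each job runs on at most one server
          prob += lpSum(x[j, s] for s in servers) <= 1
      for s in servers:                 # respect each server's CPU capacity
          prob += lpSum(jobs[j]["cpu"] * x[j, s] for j in jobs) <= servers[s]

      prob.solve()
      print("total profit:", value(prob.objective))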

  12. The Warp computer: Architecture, implementation, and performance

    Energy Technology Data Exchange (ETDEWEB)

    Annaratone, M.; Arnould, E.; Gross, T.; Kung, H.T.; Lam, M.; Menzilcioglu, O.; Webb, J.A.

    1987-12-01

    The Warp machine is a systolic array computer of linearly connected cells, each of which is a programmable processor capable of performing 10 million floating-point operations per second (10 MFLOPS). A typical Warp array includes ten cells, thus having a peak computation rate of 100 MFLOPS. The Warp array can be extended to include more cells to accommodate applications capable of using the increased computational bandwidth. Warp is integrated as an attached processor into a Unix host system. Programs for Warp are written in a high-level language supported by an optimizing compiler. This paper describes the architecture, implementation, and performance of the Warp machine. Each major architectural decision is discussed and evaluated with system, software, and application considerations. The programming model and tools developed for the machine are also described. The paper concludes with performance data for a large number of applications.
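
    To make the systolic dataflow concrete, the following Python sketch (not Warp's actual programming model, which used a compiled high-level language) chains ten cell functions so that each datum streams through every cell in turn, the way operands flow through Warp's linearly connected array.

      # A toy model (illustrative only) of data streaming through a linear
      # systolic array: each cell applies its stage and passes results on.
      def make_cell(stage_fn):
          def cell(stream):
              for item in stream:       # receive from the left neighbour
                  yield stage_fn(item)  # compute, then pass to the right
          return cell

      # Ten identical cells, matching a typical ten-cell Warp array.
      cells = [make_cell(lambda v, k=k: v + k) for k in range(10)]

      def run_array(inputs, cells):
          stream = iter(inputs)
          for cell in cells:            # compose the pipeline cell by cell
              stream = cell(stream)
          return list(stream)

      print(run_array([0, 1, 2], cells))  # each datum gains 0+1+...+9 = 45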

  13. Computational electromagnetics recent advances and engineering applications

    CERN Document Server

    2014-01-01

    Emerging Topics in Computational Electromagnetics presents advances in computational electromagnetics (CEM). This book is designed to fill the gap in current CEM literature, which covers only the conventional numerical techniques for solving traditional EM problems. The book examines new algorithms, and applications of these algorithms for solving problems of current interest that are not readily amenable to efficient treatment using existing techniques. The authors discuss solution techniques for problems arising in nanotechnology, bioEM and metamaterials, as well as multiscale problems. They present techniques that utilize recent advances in computer technology, such as parallel architectures, and address the increasing need to solve large and complex problems in a time-efficient manner by using highly scalable algorithms.

  14. Computational Strategies for the Architectural Design of Bending Active Structures

    DEFF Research Database (Denmark)

    Tamke, Martin; Nicholas, Paul

    2013-01-01

    Active bending introduces a new level of integration into the design of architectural structures, and opens up new complexities for the architectural design process. In particular, the introduction of material variation reconfigures the design space. Through the precise specification of their stiffness, it is possible to control and pre-calibrate the bending behaviour of a composite element. This material capacity challenges architecture’s existing methods for design, specification and prediction. In this paper, we demonstrate how architects might connect the designed nature of composites with the design of bending-active structures, through computational strategies. We report three built structures that develop architecturally oriented design methods for bending-active systems using composite materials. These projects demonstrate the application and limits of the introduction of advanced…

  16. Roadmap to the SRS computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A.

    1994-07-05

    This document outlines the major steps that must be taken by the Savannah River Site (SRS) to migrate the SRS information technology (IT) environment to the new architecture described in the Savannah River Site Computing Architecture. This document proposes an IT environment that is “...standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship.” Achieving this vision will require many substantial changes in the computing applications, systems, and supporting infrastructure at the site. This document consists of a set of roadmaps which provide explanations of the necessary changes for IT at the site and describes the milestones that must be completed to finish the migration.

  17. Algorithms versus architectures for computational chemistry

    Science.gov (United States)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies for these algorithms are examined for both the Cyber 205 and the Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static dataflow machine proposed by Dennis.

  18. Advances in physiological computing

    CERN Document Server

    Fairclough, Stephen H

    2014-01-01

    This edited collection will provide an overview of the field of physiological computing, i.e. the use of physiological signals as input for computer control. It will cover a breadth of current research, from brain-computer interfaces to telemedicine.

  19. Quantum computer of wire circuit architecture

    CERN Document Server

    Moiseev, S A; Andrianov, S N

    2010-01-01

    The first solid-state quantum computer was built using transmons (Cooper-pair boxes). The operation of that computer is limited by its use of a number of rigid Cooper-pair boxes working at fixed frequency at the temperatures of superconducting materials. Here, we propose a novel architecture of quantum computer based on a flexible wire circuit of many coupled quantum nodes containing controlled atomic (molecular) ensembles. We demonstrate the wide opportunities of the proposed computer. Firstly, we reveal a perfect storage of external photon qubits in a multi-mode quantum memory node and demonstrate a reversible exchange of the qubits between arbitrary nodes. We found optimal parameters of the atoms in the circuit and of the circuit's own quantum modes for quantum processing. The predicted perfect storage has been observed experimentally for microwave radiation on a lithium phthalocyaninate molecule ensemble. Then, for the first time, we show a realization of an efficient basic two-qubit gate with direct coupling of two arbitrary…

  20. Fast semivariogram computation using FPGA architectures

    Science.gov (United States)

    Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang

    2015-02-01

    The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations separated by a lag distance h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates, measured in a few hundred megahertz, but they can perform tens of thousands of calculations per clock cycle while operating in the low range of power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. An anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T Development Kit, which utilizes the Virtex5 FPGA. Medical image data from MRI scans are utilized for the experiments.
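
    For reference, a software analogue of the semivariance estimate described above can be written in a few lines of NumPy; this is a sketch, not the paper's FPGA design, and it restricts the O(n²) pairwise computation to axis-aligned pixel pairs, a common shortcut for the isotropic case.

      # Software analogue (NumPy sketch) of the isotropic semivariance
      # estimate gamma(h), restricted to axis-aligned pixel pairs.
      import numpy as np

      def semivariogram(img, max_lag):
          """gamma(h) = 0.5 * E[(z(p) - z(q))^2] over pixel pairs at lag h."""
          img = np.asarray(img, dtype=float)
          gammas = []
          for h in range(1, max_lag + 1):
              dx = img[:, h:] - img[:, :-h]     # horizontal pairs at lag h
              dy = img[h:, :] - img[:-h, :]     # vertical pairs at lag h
              sq = np.concatenate([dx.ravel(), dy.ravel()]) ** 2
              gammas.append(0.5 * sq.mean())
          return gammas

      rng = np.random.default_rng(0)
      print(semivariogram(rng.random((64, 64)), max_lag=5))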

  1. Bilevel Architecture for High-Throughput Computing

    Institute of Scientific and Technical Information of China (English)

    Pavel Newski; Alexandre Vaniachine; et al.

    2001-01-01

    We have prototyped and analyzed the design of a novel approach to high-throughput computing, a core element of the emerging HENP computational grid. Independent event processing in HENP is well suited for computing in parallel. The prototype facilitates the use of inexpensive mass-market components by providing fault-tolerant resilience (instead of expensive total system reliability) via highly scalable management components. The ability to handle both hardware and software failures on a large dedicated HENP facility limits the need for user intervention. Robust data management is especially important in HENP computing since large data flows occur before and/or after each processing task. The architecture of our active object coordination schema implements a multi-level hierarchical agent model. It provides fault tolerance by splitting a large overall task into independent atomic processes, performed by lower-level agents synchronizing with each other via a local database. Higher-level agents perform the necessary control functions and interact with the same database, thus managing distributed data production. The system has been tested in a production environment for simulations in the STAR experiment at RHIC. Our architectural prototype controlled processes on more than a hundred processors at a time and has run for extended periods of time. Twenty terabytes of simulated data have been produced. The generic nature of our two-level architectural solution for fault tolerance in a distributed environment has been demonstrated by its successful test of grid file replication services between BNL and LBNL.

  2. Developing a Distributed Computing Architecture at Arizona State University.

    Science.gov (United States)

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  3. Software development strategies for parallel computer architectures

    Science.gov (United States)

    Gruber, Ralf; Cooper, W. Anthony; Beniston, Martin; Gengler, Marc; Merazzi, Silvio

    1991-09-01

    As pragmatic users of high performance supercomputers, we believe that parallel computer architectures with distributed memories are not yet mature enough to be used by a wide range of application engineers. A big effort should be made to bring these very promising computers closer to the users. One major flaw of massively parallel machines is that the programmer has to take care of the data flow himself, which is often different on different parallel computers. To overcome this problem, we propose that data structures be standardized. The database can then become an integrated part of the system and the data flow for a given algorithm can be easily prescribed. Fixing data structures forces the computer manufacturer to adapt his machine to users' demands and not, as happens now, the other way around, where the user has to adapt to the innovative computer science approach of the computer manufacturer. In this paper, we present the data standards chosen for our ASTRID programming platform for research scientists and engineers, as well as a plasma physics application which won the Cray Gigaflop Performance Awards in 1989 and 1990 and which was successfully ported to an INTEL iPSC/2 hypercube.

  4. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    Science.gov (United States)

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  6. Comparing the architecture of Grid Computing and Cloud Computing systems

    Directory of Open Access Journals (Sweden)

    Abdollah Doavi

    2015-09-01

    Full Text Available Grid computing, or computationally connected networks, is a new network model that allows massive computational operations using connected resources; in fact, it is a new generation of distributed networks. Grid architecture is recommended because the widespread nature of the Internet makes an exciting environment, called the 'Grid', for creating a scalable, high-performance, generalized and secure system. A central component serving this goal is a firmware layer named GridOS. The term 'cloud computing' means the development and deployment of Internet-based computing technology. This is a style of computing in which IT-related capabilities are offered as a service, allowing users to access technology-based services over the Internet without specific knowledge of, or control over, the IT infrastructure that supports them. In this paper, general explanations are given of Grid and Cloud systems. The components and services provided by these systems, and their security, are then examined.

  7. Computing Architecture for the ngVLA

    Science.gov (United States)

    Kern, Jeffrey S.; Glendenning, Brian; Hiriart, R.

    2017-01-01

    Computing challenges for the Next Generation Very Large Array (ngVLA) are not always the ones that first come to mind. Current design concepts have visibility data rates which allow the permanent storage of the raw visibility data, and although challenging, the calibration and imaging processing for the ngVLA is not beyond the capabilities of existing systems (let alone those that will exist when ngVLA construction is completed). Design goals include a system that supports a wide range of PI-driven projects, end-to-end data management, and the production of science-ready data products. This should be accomplished while minimizing the operating costs of an array consisting of hundreds of elements distributed over an area of nearly 100,000 km². We discuss a proposed architecture of the computing system, design constraints for a detailed design, and some possible design choices and their implications.

  8. Recent Advances in Evolutionary Computation

    Institute of Scientific and Technical Information of China (English)

    Xin Yao; Yong Xu

    2006-01-01

    Evolutionary computation has experienced a tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of "biological evolution" toward a wide variety of nature inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economical computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily "evolutionary". This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results in the computational time complexity of evolutionary algorithms. Some issues related to future development of evolutionary computation are also discussed.

  9. A new data architecture for advancing life cycle assessment

    Science.gov (United States)

    Life cycle assessment (LCA) has a technical architecture that limits data interoperability, transparency, and automated integration of external data. More advanced information technologies offer promise for increasing the ease with which information can be synthesized...

  11. Advanced Ground Systems Maintenance Enterprise Architecture Project

    Science.gov (United States)

    Perotti, Jose M. (Compiler)

    2015-01-01

    The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. The delivered capabilities include anomaly detection, fault isolation, prognostics and physics based diagnostics.

  12. ANIMAC: a multiprocessor architecture for real time computer animation

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, D.S.

    1985-01-01

    Advances in integrated circuit technology have been largely responsible for the growth of the computer graphic industry. This technology promises additional growth through the remainder of the century. This dissertation addresses how this future technology can be harnessed and used to construct very high performance real-time computer graphics systems. A new architecture is proposed for real-time animation engines. The ANIMAC architecture achieves high performance by utilizing a two-dimensional array of processors that determine visible surfaces in parallel. An array of sixteen processors with only nearest neighbor interprocessor communications can produce real-time shadowed images of scenes containing 100,000 triangles. The ANIMAC architecture is based upon analysis and simulations of various parallelization techniques. These simulations suggest that the viewing space be spatially subdivided and that each processor produce a visible surface image for several viewing space subvolumes. Simple assignments of viewing space subvolumes to processors are shown to offer high parallel efficiencies. Simulations of parallel algorithms were driven with data derived from real scenes since analysis of scene composition suggested that using simplistic models of scene composition might lead to incorrect results.

  13. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    Science.gov (United States)

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  14. Teaching Computer Organization and Architecture Using Simulation and FPGA Applications

    OpenAIRE

    2007-01-01

    This paper presents the design concepts and realization of incorporating micro-operation simulation and FPGA implementation into a teaching tool for computer organization and architecture. This teaching tool helps computer engineering and computer science students to become practically familiar with computer organization and architecture through the development of their own instruction sets, computer programming and interfacing experiments. A two-pass assembler has been designed and implemented…

  15. Advances in the Application of Compute Unified Device Architecture Technology (统一计算设备架构技术的应用研究进展)

    Institute of Scientific and Technical Information of China (English)

    许建平

    2011-01-01

    Compute Unified Device Architecture (CUDA) is NVIDIA's recently released parallel computing framework for Graphics Processing Units (GPUs). By virtue of its C-language compatibility and the powerful parallel computing ability of GPUs, CUDA has achieved superior acceleration performance in areas such as image processing and high performance computing. After reviewing and summarizing applications of CUDA, this paper introduces the acceleration principles employed when CUDA technology is applied, and discusses future directions for CUDA's development.
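
    As a minimal illustration of the acceleration model CUDA exposes (and not an example from the article), the sketch below launches a SIMT vector-add kernel from Python via Numba's CUDA bindings; it assumes the numba package and a CUDA-capable GPU are available.

      # Illustrative SIMT offload with Numba's CUDA bindings (assumes a
      # CUDA-capable GPU): every GPU thread handles one vector element.
      import numpy as np
      from numba import cuda

      @cuda.jit
      def vec_add(a, b, out):
          i = cuda.grid(1)              # this thread's global index
          if i < out.size:              # guard against the ragged last block
              out[i] = a[i] + b[i]

      n = 1 << 20
      a = np.ones(n, dtype=np.float32)
      b = 2 * np.ones(n, dtype=np.float32)
      out = np.zeros(n, dtype=np.float32)

      threads_per_block = 256
      blocks = (n + threads_per_block - 1) // threads_per_block
      vec_add[blocks, threads_per_block](a, b, out)  # implicit host<->device copies
      assert (out == 3.0).all()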

  16. On Architectural Acoustics Design using Computer Simulation

    DEFF Research Database (Denmark)

    Schmidt, Anne Marie Due; Kirkegaard, Poul Henning

    2004-01-01

    The acoustical quality of a given building, or space within the building, is highly dependent on the architectural design. Architectural acoustics design has in the past been based on simple design rules. However, with a growing complexity in architectural acoustics and the emergence of potent room acoustic simulation programs…

  17. Computation, architectural design and fabrication logic

    DEFF Research Database (Denmark)

    Larsen, Niels Martin

    2016-01-01

    Digital fabrication and digital form generation can change the way different professions interact in relation to the development and construction of architecture. The technologies can provide a more integrated design process and expand the architectural vocabulary. At Aarhus School of Architecture…

  18. Recent advances in computational optimization

    CERN Document Server

    2013-01-01

    Optimization is part of our everyday life. We try to organize our work in a better way, and optimization occurs in minimizing time and cost or maximizing profit, quality and efficiency. Many real-world problems arising in engineering, economics, medicine and other domains can also be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization, presenting recent advances in computational optimization. The volume includes important real-world problems such as parameter settings for controlling processes in a bioreactor, robot skin wiring, strip packing, project scheduling and tuning of PID controllers. Some of them can be solved by applying traditional numerical methods, but others need a huge amount of computational resources. For those, it is shown to be appropriate to develop algorithms based on metaheuristic methods such as evolutionary computation, ant colony optimization and constraint programming.

  19. International Conference on Advanced Computing

    CERN Document Server

    Patnaik, Srikanta

    2014-01-01

    This book is composed of the Proceedings of the International Conference on Advanced Computing, Networking, and Informatics (ICACNI 2013), held at Central Institute of Technology, Raipur, Chhattisgarh, India during June 14–16, 2013. The book records current research articles in the domain of computing, networking, and informatics. The book presents original research articles, case-studies, as well as review articles in the said field of study with emphasis on their implementation and practical application. Researchers, academicians, practitioners, and industry policy makers around the globe have contributed towards formation of this book with their valuable research submissions.

  20. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
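
    The following sketch suggests how such a two-part model can be expressed with the open-source SimPy discrete event simulation library; it is not the authors' model, and the arrival rates, service times and server capacity are invented placeholders.

      # Two-part model sketch: stochastic demand per request type plus a
      # constrained resource pool, using the SimPy discrete event library.
      import random
      import simpy

      SERVICE_TYPES = {"web": 1.0, "batch": 4.0}   # hypothetical mean service times (s)

      def handle_request(env, servers, mean_service, waits):
          arrival = env.now
          with servers.request() as req:           # queue for a virtual server
              yield req
              waits.append(env.now - arrival)      # time spent waiting
              yield env.timeout(random.expovariate(1.0 / mean_service))

      def workload(env, servers, waits):
          while True:
              yield env.timeout(random.expovariate(2.0))   # Poisson arrivals, rate 2/s
              kind = random.choice(list(SERVICE_TYPES))
              env.process(handle_request(env, servers, SERVICE_TYPES[kind], waits))

      waits = []
      env = simpy.Environment()
      servers = simpy.Resource(env, capacity=8)    # the provisioning knob under study
      env.process(workload(env, servers, waits))
      env.run(until=1000)
      print(f"mean wait {sum(waits) / len(waits):.3f}s over {len(waits)} requests")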

  1. On Architectural Acoustics Design using Computer Simulation

    DEFF Research Database (Denmark)

    Schmidt, Anne Marie Due; Kirkegaard, Poul Henning

    2004-01-01

    The acoustical quality of a given building, or space within the building, is highly dependent on the architectural design. Architectural acoustics design has in the past been based on simple design rules. However, with a growing complexity in architectural acoustics and the emergence of potent room acoustic simulation programs, it is now possible to subjectively analyze and evaluate acoustic properties prior to the actual construction of a facility. With the right tools applied, the acoustic design can become an integrated part of the architectural design process. The aim of the present paper is to investigate the field of application an acoustic simulation program can have during an architectural acoustics design process. A case study is carried out in order to represent the iterative working process of an architect. The working process is divided into five phases and represented by typical results.

  2. Advanced LVDC Electrical Power Architectures and Microgrids

    DEFF Research Database (Denmark)

    Dragicevic, Tomislav; Vasquez, Juan Carlos; Guerrero, Josep M.

    2014-01-01

    Current trends indicate that worldwide electricity distribution networks are experiencing a transformation towards direct current (DC) at both the generation and consumption level. This tendency is powered by the outburst of various electronic loads and, at the same time, by the struggle to meet the high set goals for the share of renewable energy sources (RESs) in satisfying total demand. RESs operate either natively at DC or have a DC link at the heart of their power electronic interface, whereas the end-point connection of electronic loads, batteries and fuel cells is exclusively DC. Therefore, merging these devices into dedicated DC distribution architectures through corresponding DC-DC converters arises as an attractive option, not only in terms of enhancing efficiency due to the reduction of conversion steps, but also for having power quality independence from the utility mains. These kinds…

  3. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    Science.gov (United States)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  4. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog…

  5. On architectural acoustic design using computer simulation

    DEFF Research Database (Denmark)

    Schmidt, Anne Marie Due; Kirkegaard, Poul Henning

    2004-01-01

    Architectural acoustics design has in the past been based on simple design rules. However, with a growing complexity in architectural acoustics and the emergence of room acoustic simulation programmes with considerable potential, it is now possible to subjectively analyse and evaluate acoustic properties prior to the actual construction of a building. With the right tools applied, acoustic design can become an integral part of the architectural design process. The aim of this paper is to investigate the field of application that an acoustic simulation programme can have during an architectural acoustic design process. The emphasis is put on the first three out of five phases in the working process of the architect, and a case study is carried out in which each phase is represented by typical results, as exemplified with reference to the design of Bagsværd Church by Jørn Utzon. The paper…

  7. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design.

    Science.gov (United States)

    Menges, Achim

    2012-03-01

    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies.

  8. Recent advances in Z-technology architecture

    Science.gov (United States)

    Ludwig, David E.; Smetana, Daryl; Shanken, Stuart

    1989-09-01

    Z-technology utilizes the process of stacking integrated circuits (ICs) to achieve a high degree of packaging density. This technique has been most commonly applied to packaging read-out electronics for infrared (IR) focal plane arrays to achieve more signal processing at the detector interface. Irvine Sensor Corporation's (ISC's) standard packaging technology, called HYMOSS (Hybrid Mosaic On Stacked Silicon), has been tailored for stacking 0.004-inch thick silicon integrated circuits of custom designed read-out electronics. New advances have been made which allow for stacking non-silicon ICs, commercial (non-custom) circuits, and/or ICs which have been thinned to 0.002 inches.

  9. A heterogeneous hierarchical architecture for real-time computing

    Energy Technology Data Exchange (ETDEWEB)

    Skroch, D.A.; Fornaro, R.J.

    1988-12-01

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.

  10. Baseline Requirements and Architecture for Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Abdur Rahim Choudhary

    2012-12-01

    Full Text Available Government initiatives such as the “Cloud First” policy are bringing cloud computing services into Federal Agencies. Further, many sectors of the nation's Critical Infrastructure already use cloud computing. Although cloud computing services are slowly coming of age, many issues remain. This paper therefore takes a closer look at cloud computing services. First it establishes a baseline by specifying high-level requirements for cloud computing services. Next it improves upon the current architecture for cloud computing services by adding new modules, gleaned from an analysis of the telecommunications cloud and of security in distributed systems. The new modules include a management and control network, a set of trust domains, and a set of proxies. The improved architecture is more ready for primetime use and supports a richer operational model.

  11. Architecture Design of Computing Intensive SoCs

    Institute of Scientific and Technical Information of China (English)

    YUE Yao; ZHANG Chunming; WANG Haixin; BAI Guoqiang; CHEN Hongyi

    2009-01-01

    Most existing system-on-chip (SoC) architectures are for microprocessor-centric designs. They are not suitable for computing intensive SoCs, which have their own configurability, extendibility, performance, and data exchange characteristics. This paper analyzes these characteristics and gives design principles for computing intensive SoCs. Three architectures suitable for different situations are compared, with selection criteria given. The architectural design of a high performance network security accelerator (HPNSA) is used to elaborate on the design techniques to fully exploit the performance potential of the architectures. A behavior-level simulation system is implemented with the C++ programming language to evaluate the HPNSA performance and to obtain the optimum system design parameters. Simulations show that this architecture provides high performance data transfer.

  12. Triangular Dynamic Architecture for Distributed Computing in a LAN Environment

    CERN Document Server

    Hossain, M Shahriar; Fuad, M Muztaba; Deb, Debzani

    2011-01-01

    A computationally intensive large job, granularized into concurrent pieces and operating in a dynamic environment, should take less total processing time. However, distributing jobs across a networked environment is a tedious and difficult task. Job distribution in a Local Area Network based on Triangular Dynamic Architecture (TDA) is a mechanism that establishes a dynamic environment for job distribution, load balancing and distributed processing with minimum interaction from the user. This paper introduces TDA, discusses its architecture and shows the benefits gained by utilizing such an architecture in a distributed computing environment.

  13. Supervisory Control System Architecture for Advanced Small Modular Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Sacit M [ORNL; Cole, Daniel L [University of Pittsburgh; Fugate, David L [ORNL; Kisner, Roger A [ORNL; Melin, Alexander M [ORNL; Muhlheim, Michael David [ORNL; Rao, Nageswara S [ORNL; Wood, Richard Thomas [ORNL

    2013-08-01

    This technical report was generated as a product of the Supervisory Control for Multi-Modular SMR Plants project within the Instrumentation, Control and Human-Machine Interface technology area under the Advanced Small Modular Reactor (SMR) Research and Development Program of the U.S. Department of Energy. The report documents the definition of strategies, functional elements, and the structural architecture of a supervisory control system for multi-modular advanced SMR (AdvSMR) plants. This research activity advances the state of the art by incorporating decision making into the supervisory control system architectural layers through the introduction of a tiered-plant system approach. The report provides a brief history of hierarchical functional architectures and the current state of the art, describes a reference AdvSMR to show the dependencies between systems, presents a hierarchical structure for supervisory control, indicates the importance of understanding trip setpoints, applies a new theoretic approach for comparing architectures, identifies cyber security controls that should be addressed early in system design, and describes ongoing work to develop system requirements and hardware/software configurations.

  14. CAAD: Computer Architecture for Autonomous Driving

    OpenAIRE

    Liu, Shaoshan; Tang, Jie; Zhang, Zhe; Gaudiot, Jean-Luc

    2017-01-01

    We describe the computing tasks involved in autonomous driving and examine existing autonomous driving computing platform implementations. To enable autonomous driving, the computing stack needs to simultaneously provide high performance, low power consumption, and low thermal dissipation, at low cost. We discuss possible approaches to design computing platforms that will meet these needs.

  15. Integrated computer control system architectural overview

    Energy Technology Data Exchange (ETDEWEB)

    Van Arsdall, P.

    1997-06-18

    This overview introduces the NIF Integrated Control System (ICCS) architecture. The design is abstract to allow the construction of many similar applications from a common framework. This summary lays the essential foundation for understanding the model-based engineering approach used to execute the design.

  16. A new massively parallel version of CRYSTAL for large systems on high performance computing architectures.

    Science.gov (United States)

    Orlando, Roberto; Delle Piane, Massimo; Bush, Ian J; Ugliengo, Piero; Ferrabone, Matteo; Dovesi, Roberto

    2012-10-30

    Fully ab initio treatment of complex solid systems needs computational software which is able to efficiently take advantage of the growing power of high performance computing (HPC) architectures. Recent improvements in CRYSTAL, a periodic ab initio code that uses a Gaussian basis set, allow treatment of very large unit cells for crystalline systems on HPC architectures with high parallel efficiency in terms of running time and memory requirements. The latter is a crucial point, due to the trend toward architectures relying on a very high number of cores with associated relatively low memory availability. An exhaustive performance analysis shows that density functional calculations, based on a hybrid functional, of low-symmetry systems containing up to 100,000 atomic orbitals and 8000 atoms are feasible on the most advanced HPC architectures available to European researchers today, using thousands of processors.

  17. A Security Kernel Architecture Based Trusted Computing Platform

    Institute of Scientific and Technical Information of China (English)

    CHEN You-lei; SHEN Chang-xiang

    2005-01-01

    A security kernel architecture built on a trusted computing platform, in light of trusted computing thinking, is presented. According to this architecture, a new security module, the TCB (Trusted Computing Base), is added to the operating system kernel, and two operation interface modes are provided for the sake of self-protection. The security kernel is divided into two parts, and the trusted mechanism is separated from security functionality. The TCB module implements trusted mechanisms such as measurement and attestation, while the other components of the security kernel provide security functionality based on these mechanisms. This architecture takes full advantage of the functions provided by the trusted platform and clearly defines the security perimeter of the TCB so as to assure self-security from an architectural standpoint. We also present a functional description of the TCB and discuss the strengths and limitations compared with other related research.

  18. Advanced Hybrid Computer Systems. Software Technology.

    Science.gov (United States)

    This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what automatic patching software is available, as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer compiler software. The problem of how software would interface with the hybrid system is also presented.

  19. A computational architecture for social agents

    Energy Technology Data Exchange (ETDEWEB)

    Bond, A.H. [California Institute of Technology, Pasadena, CA (United States)

    1996-12-31

    This article describes a new class of information-processing models for social agents. They are derived from primate brain architecture, the processing in brain regions, the interactions among brain regions, and the social behavior of primates. In another paper, we reviewed the neuroanatomical connections and functional involvements of cortical regions, and the evidence for a hierarchical architecture in the primate brain. By examining neuroanatomical evidence for connections among neural areas, we were able to establish anatomical regions and connections. We then examined evidence for specific functional involvements of the different neural areas and found some support for hierarchical functioning, not only for the perception hierarchies but also for the planning and action hierarchy in the frontal lobes.

  20. A Novel Computer Architecture to Prevent Destruction by Viruses

    Institute of Scientific and Technical Information of China (English)

    高庆狮; 王月; 李磊; 陈绪; 刘宏岚

    2002-01-01

    In today's Internet computing world, illegal activities by crackers pose a serious threat to computer security. It is well known that computer viruses, Trojan horses and other intrusive programs may cause severe and often catastrophic consequences. This paper proposes a novel secure computer architecture based on security-codes. Every instruction/data word is extended with a security-code denoting its security level. External programs and data are automatically tagged with a security-code by hardware when entering the computer system. An instruction with a lower security-code cannot run on, or process, instructions/data with a higher security level, and security-codes cannot be modified by normal instructions. With minor hardware overhead, the new architecture can effectively protect the main computer system from destruction or theft by intrusive programs such as computer viruses. For most PC systems, it requires an increase of word length by 1 bit in the registers, the memory and the hard disk.
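
    A hypothetical software model of the hardware check (the paper describes a hardware mechanism; this Python sketch merely illustrates the tagging rule) might look as follows.

      # Every word carries a security-code; the "hardware" refuses to let an
      # instruction touch operands tagged above its own level (illustrative model).
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Word:
          value: int
          seccode: int                  # higher number = more trusted level

      class SecurityViolation(Exception):
          pass

      def execute(instr, operands):
          for op in operands:           # the check a tagged datapath would make
              if op.seccode > instr.seccode:
                  raise SecurityViolation("operand outranks instruction")
          return sum(op.value for op in operands)   # stand-in for the real op

      trusted_add = Word(value=0, seccode=2)
      print(execute(trusted_add, [Word(3, 1), Word(4, 0)]))   # allowed -> 7
      try:
          execute(Word(0, seccode=0), [Word(42, seccode=2)])  # virus-like code
      except SecurityViolation as err:
          print("blocked:", err)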

  1. The Contribution of Visualization to Learning Computer Architecture

    Science.gov (United States)

    Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy

    2007-01-01

    This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…

  2. An Integrated and Layered Architecture for Location-Aware Computing

    Institute of Scientific and Technical Information of China (English)

    MA Linbing; ZHANG Xinchang; TAO Haiyan

    2005-01-01

    This paper gives an overall introduction to the basic concept of LAC (location-aware computing) and its development status, and puts forward an integrated location-aware computing architecture which is useful for designing a reasonable logical model of LBS (location-based services). Finally, a LAC experimental prototype, which acts as a mobile urban tourism assistant, is briefly introduced.

  3. Towards Service Architectures in Service-oriented Computing

    Science.gov (United States)

    Mäki, Matti; Pakkala, Daniel

    Service-oriented architectures (SOA) are nowadays a widely promoted field of study in service-oriented computing (SOC), but unfortunately often discussed only in the light of enterprise IT solutions and Web services technologies. By diving into the technical fundamentals of SOA, we found a more general concept of service architectures, a concept that may have many more application possibilities than its near relative, SOA. This paper presents a simple but feasible model for service architectures, based on the existing state-of-the-art research in SOC. The feasibility of some existing service platforms as service architecture realizations is evaluated against the model. The simple model provides a good starting point for researching and developing more sophisticated service architectures, and a set of criteria for evaluating service platforms.

  4. Architecture independent environment for developing engineering software on MIMD computers

    Science.gov (United States)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  5. Fault tolerant hypercube computer system architecture

    Science.gov (United States)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type, comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary, is disclosed. Communication between the working nodes is via one communications network, while communication between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises a plurality of first computing nodes and a first network of message conducting paths for interconnecting the first computing nodes as a hypercube; the first network provides a path for message transfer between the first computing nodes. There is also a first watch dog node and a second network of message conducting paths for connecting the first computing nodes to the first watch dog node independent from the first network; the second network provides an independent path for test message and reconfiguration-affecting transfers between the first computing nodes and the first watch dog node. There is additionally a plurality of second computing nodes and a third network of message conducting paths for interconnecting the second computing nodes as a hypercube; the third network provides a path for message transfer between the second computing nodes. A fourth network of message conducting paths connects the second computing nodes to the first watch dog node independent from the third network, providing an independent path for test message and reconfiguration-affecting transfers between the second computing nodes and the first watch dog node. There is further a first multiplexer disposed between the first watch dog node and the second and fourth networks, allowing the first watch dog node to selectively communicate with individual computing nodes through those networks; as well as a second watch dog node
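
    To make the hypercube topology concrete, here is a minimal Python sketch (illustrative only, not taken from the patent) of how node addresses encode connectivity: neighbors differ in exactly one address bit, and dimension-ordered ("e-cube") routing corrects differing bits one at a time.

        def neighbors(node: int, dim: int) -> list[int]:
            """Hypercube neighbors of `node`: flip each of the dim address bits."""
            return [node ^ (1 << i) for i in range(dim)]

        def e_cube_route(src: int, dst: int) -> list[int]:
            """Dimension-ordered routing: fix differing address bits lowest-first."""
            path, cur = [src], src
            for i in range((src ^ dst).bit_length()):
                if (src ^ dst) & (1 << i):
                    cur ^= 1 << i
                    path.append(cur)
            return path

        print(neighbors(0b000, 3))         # [1, 2, 4]
        print(e_cube_route(0b000, 0b101))  # [0, 1, 5]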

  6. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has been developed. Hybris is a prototype rendering architecture which can be tailored to many specific 3D graphics applications and implemented in various ways. Parallel software implementations for both single- and multi-processor Windows 2000 systems have been demonstrated. Working hardware … as a case study and an application of the Hybris graphics architecture.

  7. Design of Carborane Molecular Architectures via Electronic Structure Computations

    Directory of Open Access Journals (Sweden)

    Josep M. Oliva

    2009-01-01

    Full Text Available Quantum-mechanical electronic structure computations were employed to explore initial steps towards a comprehensive design of polycarborane architectures through assembly of molecular units. Aspects considered were (i) the striking modification of geometrical parameters through substitution, (ii) endohedral carboranes and proposed ejection mechanisms for atom/ion/energy storage and transport, (iii) the excited-state character in single and dimeric molecular units, and (iv) higher architectural constructs. A goal of this work is to find optimal architectures where atom/ion/energy/spin transport within carborane superclusters is feasible in order to modernize and improve future photoenergy processes.

  8. Heavy Lift Vehicle (HLV) Avionics Flight Computing Architecture Study

    Science.gov (United States)

    Hodson, Robert F.; Chen, Yuan; Morgan, Dwayne R.; Butler, A. Marc; Sdhuh, Joseph M.; Petelle, Jennifer K.; Gwaltney, David A.; Coe, Lisa D.; Koelbl, Terry G.; Nguyen, Hai D.

    2011-01-01

    A NASA multi-Center study team was assembled from LaRC, MSFC, KSC, JSC and WFF to examine potential flight computing architectures for a Heavy Lift Vehicle (HLV) to better understand avionics drivers. The study examined Design Reference Missions (DRMs) and vehicle requirements that could impact the vehicle's avionics. The study considered multiple self-checking and voting architectural variants and examined reliability, fault-tolerance, mass, power, and redundancy management impacts. Furthermore, a goal of the study was to develop the skills and tools needed to rapidly assess additional architectures should requirements or assumptions change.

  9. Investigating Architectural Issues in Neuromorphic Computing

    Science.gov (United States)

    2012-05-01

    processors. Existing High Performance Computer (HPC) platforms, like Blue Gene/L, can be configured with more than 130K processor cores. The challenge … cluster is composed of one Intel Xeon hexa-core processor as the head node and 22 Sony PlayStation3 (PS3) computers based on the IBM Cell Broadband Engine

  10. Advances in Computer Science and Engineering

    CERN Document Server

    Second International Conference on Advances in Computer Science and Engineering (CES 2012)

    2012-01-01

    This book includes the proceedings of the Second International Conference on Advances in Computer Science and Engineering (CES 2012), which was held during January 13-14, 2012 in Sanya, China. The papers in these proceedings present the contributors' advanced work in Computer Science and Engineering, organized into four topics: (1) Software Engineering, (2) Intelligent Computing, (3) Computer Networks, and (4) Artificial Intelligence Software.

  11. Addressing fundamental architectural challenges of an activity-based intelligence and advanced analytics (ABIAA) system

    Science.gov (United States)

    Yager, Kevin; Albert, Thomas; Brower, Bernard V.; Pellechia, Matthew F.

    2015-06-01

    The domain of Geospatial Intelligence Analysis is rapidly shifting toward a new paradigm of Activity Based Intelligence (ABI) and information-based Tipping and Cueing. General requirements for an advanced ABIAA system present significant challenges in architectural design, computing resources, data volumes, workflow efficiency, data mining and analysis algorithms, and database structures. These sophisticated ABI software systems must include advanced algorithms that automatically flag activities of interest in less time and within larger data volumes than can be processed by human analysts. In doing this, they must also maintain the geospatial accuracy necessary for cross-correlation of multi-intelligence data sources. Historically, serial architectural workflows have been employed in ABIAA system design for tasking, collection, processing, exploitation, and dissemination. These simpler architectures may produce implementations that satisfy short-term requirements; however, they have serious limitations that preclude them from being used effectively in an automated ABIAA system with multiple data sources. This paper discusses modern ABIAA architectural considerations, providing an overview of an advanced ABIAA system and comparisons to legacy systems. It concludes with a recommended strategy and incremental approach to the research, development, and construction of a fully automated ABIAA system.

  12. Teaching Computer Organization and Architecture Using Simulation and FPGA Applications

    Directory of Open Access Journals (Sweden)

    D. K.M. Al-Aubidy

    2007-01-01

    Full Text Available This paper presents the design concepts and realization of incorporating micro-operation simulation and FPGA implementation into a teaching tool for computer organization and architecture. This teaching tool helps computer engineering and computer science students gain practical familiarity with computer organization and architecture through the development of their own instruction set, computer programming and interfacing experiments. A two-pass assembler has been designed and implemented to write assembly programs in this teaching tool. In addition to the micro-operation simulation, the complete configuration can be run on a Xilinx Spartan-3 FPGA board. Such an implementation offers good code density, easy customization, easily developed software, small area, and high performance at low cost.
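
    As a rough illustration of the two-pass assembly scheme such a tool implements (the syntax and instruction names below are invented for the example, not taken from the paper), pass one builds the symbol table and pass two resolves label operands:

        def assemble(lines):
            """Toy two-pass assembler: pass 1 records label addresses,
            pass 2 substitutes them into operand fields."""
            symbols, addr, code = {}, 0, []
            for line in lines:                     # pass 1: build symbol table
                line = line.split('#')[0].strip()  # drop comments and whitespace
                if not line:
                    continue
                if line.endswith(':'):
                    symbols[line[:-1]] = addr      # label -> current address
                else:
                    code.append(line)
                    addr += 1
            out = []
            for pc, line in enumerate(code):       # pass 2: resolve labels
                op, *args = line.replace(',', ' ').split()
                out.append((pc, op, [str(symbols.get(a, a)) for a in args]))
            return out

        print(assemble(["loop:", "beq r1, r2, done", "j loop", "done:", "halt"]))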

  13. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of VLSI circuitry and the major potential advantage afforded by QCA. To recapitulate: in a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.
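
    For orientation, the butterfly data flow that a systolic FFT architecture lays out in hardware is the one computed by this standard radix-2 decimation-in-time formulation (a generic textbook sketch in Python, not the proposed QCA design):

        import cmath

        def fft(x):
            """Radix-2 DIT FFT for power-of-two lengths; each recursion level
            corresponds to one stage of parallel butterflies in hardware."""
            n = len(x)
            if n == 1:
                return list(x)
            even, odd = fft(x[0::2]), fft(x[1::2])
            out = [0j] * n
            for k in range(n // 2):
                w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                out[k] = even[k] + w           # upper butterfly output
                out[k + n // 2] = even[k] - w  # lower butterfly output
            return out

        print(fft([1, 1, 1, 1, 0, 0, 0, 0])[0])  # DC term: (4+0j)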

  14. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  15. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has allowed, until now, the development of continuous improvements, as predicted by Moore’s law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS-based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: the development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  16. Characterization of UMT2013 Performance on Advanced Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Howell, Louis [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-12-31

    This paper presents part of a larger effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. The focus here is on UMT2013, a proxy implementation of deterministic transport for unstructured meshes. I present weak and strong MPI scaling results and studies of OpenMP efficiency on the Sequoia BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Preliminary tests that exploit NVRAM as extended memory on an Ivy Bridge machine designed for “Big Data” applications are also included.

  17. Experimental comparison of two quantum computing architectures

    Science.gov (United States)

    Linke, Norbert M.; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A.; Wright, Kenneth; Monroe, Christopher

    2017-01-01

    We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.research.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programmed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future. PMID:28325879
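
    A small sketch of why connectivity matters when mapping circuits to hardware (the star-shaped coupling graph below is assumed for illustration and is not the exact layout of either device): every unit of extra graph distance between the two qubits of a gate costs roughly one SWAP, i.e. three extra CNOTs, in routing overhead.

        from collections import deque

        def dist(edges, a, b):
            """Breadth-first-search distance between qubits a and b
            in an undirected coupling graph given as an edge list."""
            adj = {}
            for u, v in edges:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
            seen, queue = {a}, deque([(a, 0)])
            while queue:
                u, d = queue.popleft()
                if u == b:
                    return d
                for w in adj.get(u, []):
                    if w not in seen:
                        seen.add(w)
                        queue.append((w, d + 1))
            return None

        star = [(0, 2), (1, 2), (2, 3), (2, 4)]                     # limited connectivity
        full = [(i, j) for i in range(5) for j in range(i + 1, 5)]  # all-to-all
        print(dist(star, 0, 1), dist(full, 0, 1))  # 2 vs 1: the star needs a SWAP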

  18. Experimental comparison of two quantum computing architectures.

    Science.gov (United States)

    Linke, Norbert M; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A; Wright, Kenneth; Monroe, Christopher

    2017-03-28

    We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.research.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programmed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future.

  19. Deep architectures for Human Computer Interaction

    NARCIS (Netherlands)

    Noulas, A.K.; Kröse, B.J.A.

    2008-01-01

    In this work we present the application of Conditional Restricted Boltzmann Machines in Human Computer Interaction. These provide a well suited framework to model the complex temporal patterns produced from humans in the audio and video modalities. They can be trained in a semisupervised fashion and

  20. SpaceWire- Based Control System Architecture for the Lightweight Advanced Robotic Arm Demonstrator [LARAD

    Science.gov (United States)

    Rucinski, Marek; Coates, Adam; Montano, Giuseppe; Allouis, Elie; Jameux, David

    2015-09-01

    The Lightweight Advanced Robotic Arm Demonstrator (LARAD) is a state-of-the-art, two-meter long robotic arm for planetary surface exploration currently being developed by a UK consortium led by Airbus Defence and Space Ltd under contract to the UK Space Agency (CREST-2 programme). LARAD has a modular design, which allows for experimentation with different electronics and control software. The control system architecture includes the on-board computer, control software and firmware, and the communication infrastructure (e.g. data links, switches) connecting on-board computer(s), sensors, actuators and the end-effector. The purpose of the control system is to operate the arm according to pre-defined performance requirements, monitoring its behaviour in real-time and performing safing/recovery actions in case of faults. This paper reports on the results of a recent study about the feasibility of the development and integration of a novel control system architecture for LARAD fully based on the SpaceWire protocol. The current control system architecture is based on the combination of two communication protocols, Ethernet and CAN. The new SpaceWire-based control system will allow for improved monitoring and telecommanding performance thanks to higher communication data rate, allowing for the adoption of advanced control schemes, potentially based on multiple vision sensors, and for the handling of sophisticated end-effectors that require fine control, such as science payloads or robotic hands.

  1. Advanced computer graphic techniques for laser range finder (LRF) simulation

    Science.gov (United States)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper presents advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. The cost of the measurement system is extremely high, therefore a simulation tool was designed. The simulation provides an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. An Axis-Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.

  2. Architecturing Conflict Handling of Pervasive Computing Resources

    OpenAIRE

    Jakob, Henner; Consel, Charles; Loriant, Nicolas

    2011-01-01

    Pervasive computing environments are created to support human activities in different domains (e.g., home automation and healthcare). To do so, applications orchestrate deployed services and devices. In a realistic setting, applications are bound to conflict in their usage of shared resources, e.g., controlling doors for security and fire evacuation purposes. These conflicts can have critical effects on the physical world, putting people and assets at risk. This paper ...

  3. Cloud Computing: A study of cloud architecture and its patterns

    Directory of Open Access Journals (Sweden)

    Mandeep Handa

    2015-05-01

    Full Text Available Cloud computing is a general term for anything that involves delivering hosted services over the Internet. Cloud computing is a paradigm shift following the shift from mainframe to client–server in the early 1980s. Cloud computing can be defined as accessing third-party software and services on the web and paying per usage. It facilitates scalability and virtualized resources over the Internet as a service, providing a cost-effective and scalable solution to customers. Cloud computing has evolved as a disruptive technology and picked up speed with the presence of many vendors in the cloud computing space. The evolution of cloud computing from numerous technological approaches and business models, such as SaaS, cluster computing and high-performance computing, signifies that cloud IDM can be considered a superset of all the corresponding issues from these paradigms and many more. In this paper we discuss life-cycle management, cloud architecture, patterns in cloud IDM, and the volatility of cloud relations.

  4. Pipelined CPU Design with FPGA in Teaching Computer Architecture

    Science.gov (United States)

    Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon

    2012-01-01

    This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…
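
    The cycle-by-cycle picture students draw for such a five-stage pipeline can be generated in a few lines of Python (an idealized, hazard-free model for illustration, not the course's FPGA design):

        STAGES = ["IF", "ID", "EX", "MEM", "WB"]

        def pipeline_diagram(instructions):
            """Print the classic staircase diagram: instruction i occupies
            stage s during cycle i + s when there are no stalls."""
            cycles = len(instructions) + len(STAGES) - 1
            for i, ins in enumerate(instructions):
                row = [""] * cycles
                for s, name in enumerate(STAGES):
                    row[i + s] = name
                print(f"{ins:6s}" + "".join(f"{c:5s}" for c in row))

        pipeline_diagram(["add", "lw", "sub"])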

  5. In-Memory Computing Architectures for Sparse Distributed Memory.

    Science.gov (United States)

    Kang, Mingu; Shanbhag, Naresh R

    2016-08-01

    This paper presents an energy-efficient and high-throughput architecture for Sparse Distributed Memory (SDM), a computational model of the human brain [1]. The proposed SDM architecture is based on the recently proposed in-memory computing kernel for machine learning applications called Compute Memory (CM) [2], [3]. CM achieves energy and throughput efficiencies by deeply embedding computation into the memory array. SDM-specific techniques such as hierarchical binary decision (HBD) are employed to reduce the delay and energy further. The CM-based SDM (CM-SDM) is a mixed-signal circuit, and hence circuit-aware behavioral, energy, and delay models in a 65 nm CMOS process are developed in order to predict system performance of SDM architectures in the auto- and hetero-associative modes. The delay and energy models indicate that CM-SDM, in general, can achieve up to 25× and 12× delay and energy reduction, respectively, over conventional SDM. When classifying 16 × 16 binary images with high noise levels (input bad pixel ratios: 15%-25%) into nine classes, all SDM architectures are able to generate output bad pixel ratios (Bo) ≤ 2%. The CM-SDM exhibits negligible loss in accuracy, i.e., its Bo degradation is within 0.4% as compared to that of the conventional SDM.
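
    A minimal software sketch of SDM's auto-associative mode (numpy; the parameters N, M and R are chosen for the example, not taken from the paper): hard locations within Hamming radius R of the cue are activated, and a majority vote over their counters reconstructs the stored pattern.

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, R = 256, 1000, 116                     # word width, hard locations, radius
        addresses = rng.integers(0, 2, size=(M, N))  # fixed random hard-location addresses
        counters = np.zeros((M, N), dtype=np.int32)  # per-location, per-bit counters

        def active(addr):
            """Locations whose address is within Hamming distance R of the cue."""
            return np.count_nonzero(addresses != addr, axis=1) <= R

        def write(addr, data):
            counters[active(addr)] += np.where(data == 1, 1, -1)

        def read(addr):
            return (counters[active(addr)].sum(axis=0) > 0).astype(int)

        p = rng.integers(0, 2, size=N)
        write(p, p)                          # auto-associative: store p at itself
        noisy = p ^ (rng.random(N) < 0.05)   # corrupt ~5% of the cue bits
        print((read(noisy) == p).mean())     # fraction of bits recovered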

  7. Some Issues on Computer Networks: Architecture and Key Technologies

    Institute of Scientific and Technical Information of China (English)

    Guan-Qun Gu; Jun-Zhou Luo

    2006-01-01

    The evolution of computer networks has experienced several major steps, and the research focus of each step has kept changing and evolving, from ARPANET to OSI/RM, then HSN (high speed network) and HPN (high performance network). During this evolution, computer networks represented by the Internet have made great progress and gained unprecedented success. However, with the appearance and intensification of tussle, along with the three difficult problems of modern networks (service customizing, resource control and user management), it has been found that the traditional Internet and its architecture no longer meet the requirements of the next generation network, toward which the current Internet must therefore evolve. To provide valuable guidance for research on the next generation network, this paper first analyzes some dilemmas facing the current Internet and its architecture, and then surveys some recent influential research work and progress in computer networks and related areas, including new generation network architecture, network resource control technologies, network management and security, distributed computing and middleware, wireless/mobile networks, new generation network services and applications, and foundational theories on network modeling. Finally, this paper concludes that within research on the next generation network, more attention should be paid to the high availability network and the corresponding architecture, key theories and supporting technologies.

  8. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    Science.gov (United States)

    Shi, X.

    2015-12-01

    As NSF indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent, because many research activities are constrained by software and tools that simply cannot complete the computation. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as proved by our prior works, while the potential of such advanced
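
    A toy example of the coarse-grained data parallelism described above, assuming the geodata can be split into independently processed raster tiles (the tile statistic, tile size and raster are hypothetical):

        from multiprocessing import Pool

        import numpy as np

        def tile_stat(tile):
            """Stand-in per-tile analytic, e.g. mean cell value of the block."""
            return float(tile.mean())

        def parallel_tile_map(raster, size=512):
            """Cut the raster into tiles and fan them out across all cores."""
            tiles = [raster[i:i + size, j:j + size]
                     for i in range(0, raster.shape[0], size)
                     for j in range(0, raster.shape[1], size)]
            with Pool() as pool:
                return pool.map(tile_stat, tiles)

        if __name__ == "__main__":
            raster = np.random.rand(2048, 2048)
            print(len(parallel_tile_map(raster)), "tiles processed")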

  9. ADVANCES AT A GLANCE IN PARALLEL COMPUTING

    Directory of Open Access Journals (Sweden)

    RAJKUMAR SHARMA

    2014-07-01

    Full Text Available In the history of the computational world, sequential uni-processor computers were exploited for years to solve scientific and business problems. To satisfy the demands of compute- and data-hungry applications, it was observed that better response times could be achieved only through parallelism. Large computational problems were partitioned and solved by using multiple CPUs in parallel. Computing performance was further improved by adopting multi-core architectures, which provide hardware parallelism through the use of multiple cores. Efficient resource utilization of a parallel computing environment by using software and hardware parallelism is a major research challenge. Present hardware technologies give algorithm developers freedom to control and manage resources through software code, such as threads-to-cores mapping in recent multi-core processors. In this paper, a survey is presented covering parallel computing from its beginning up to the use of present state-of-the-art multi-core processors.

  10. Advanced computing in electron microscopy

    CERN Document Server

    Kirkland, Earl J

    2010-01-01

    This book features numerical computation of electron microscopy images as well as multislice methods. High-resolution CTEM and STEM image interpretation are included in the text. This newly updated second edition brings the reader up to date on new developments in the field since the 1990s. It is the only book that specifically addresses computer simulation methods in electron microscopy.

  11. Advances in Computer Science and its Applications

    CERN Document Server

    Yen, Neil; Park, James; CSA 2013

    2014-01-01

    The theme of CSA covers the various aspects of computer science and its applications, and the conference provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of computer science and its applications. Accordingly, this book includes various theories and practical applications in computer science and its applications.

  12. Advances in computational complexity theory

    CERN Document Server

    Cai, Jin-Yi

    1993-01-01

    This collection of recent papers on computational complexity theory grew out of activities during a special year at DIMACS. With contributions by some of the leading experts in the field, this book is of lasting value in this fast-moving field, providing expositions not found elsewhere. Although aimed primarily at researchers in complexity theory and graduate students in mathematics or computer science, the book is accessible to anyone with an undergraduate education in mathematics or computer science. By touching on some of the major topics in complexity theory, this book sheds light on this burgeoning area of research.

  13. Innovative architectures for dense multi-microprocessor computers

    Science.gov (United States)

    Larson, Robert E.

    1989-01-01

    The purpose is to summarize a Phase 1 SBIR project performed for the NASA/Langley Computational Structural Mechanics Group. The project was performed from February to August 1987. The main objectives of the project were to: (1) expand upon previous research into the application of chordal ring architectures to the general problem of designing multi-microcomputer architectures, (2) attempt to identify a family of chordal rings such that each chordal ring can be simply expanded to produce the next member of the family, (3) perform a preliminary, high-level design of an expandable multi-microprocessor computer based upon chordal rings, (4) analyze the potential use of chordal ring based multi-microprocessors for sparse matrix problems and other applications arising in computational structural mechanics.

  14. International Conference on Advanced Computing for Innovation

    CERN Document Server

    Angelova, Galia; Agre, Gennady

    2016-01-01

    This volume is a selected collection of papers presented and discussed at the International Conference “Advanced Computing for Innovation (AComIn 2015)”. The conference was held on 10-11 November 2015 in Sofia, Bulgaria, and was aimed at providing a forum for international scientific exchange between Central/Eastern Europe and the rest of the world on several fundamental topics of computational intelligence. The papers report innovative approaches and solutions in hot topics of computational intelligence: advanced computing, language and semantic technologies, signal and image processing, as well as optimization and intelligent control.

  15. Advances in Orion's On-Orbit Guidance and Targeting System Architecture

    Science.gov (United States)

    Scarritt, Sara K.; Fill, Thomas; Robinson, Shane

    2015-01-01

    NASA's manned spaceflight programs have a rich history of advancing onboard guidance and targeting technology. In order to support future missions, the guidance and targeting architecture for the Orion Multi-Purpose Crew Vehicle must be able to operate in complete autonomy, without any support from the ground. Orion's guidance and targeting system must be sufficiently flexible to easily adapt to a wide array of undecided future missions, yet also not cause an undue computational burden on the flight computer. This presents a unique design challenge from the perspective of both algorithm development and system architecture construction. The present work shows how Orion's guidance and targeting system addresses these challenges. On the algorithm side, the system advances the state-of-the-art by: (1) steering burns with a simple closed-loop guidance strategy based on Shuttle heritage, and (2) planning maneuvers with a cutting-edge two-level targeting routine. These algorithms are then placed into an architecture designed to leverage the advantages of each and ensure that they function in concert with one another. The resulting system is characterized by modularity and simplicity. As such, it is adaptable to the on-orbit phases of any future mission that Orion may attempt.

  16. Architectural design of an advanced naturally ventilated building form

    Energy Technology Data Exchange (ETDEWEB)

    Lomas, K.J. [De Montfort University, Leicester (United Kingdom). Institute of Energy and Sustainable Development

    2007-02-15

    Advanced stack-ventilated buildings have the potential to consume much less energy for space conditioning than typical mechanically ventilated or air-conditioned buildings. This paper describes how environmental design considerations in general, and ventilation considerations in particular, shape the architecture of advanced naturally ventilated (ANV) buildings. The attributes of simple and advanced naturally ventilated buildings are described and a taxonomy of ANV buildings presented. Simple equations for use at the preliminary design stage are presented. These produce target structural cross section areas for the key components of ANV systems. The equations have been developed through practice-based research to design three large educational buildings: the Frederick Lanchester Library, Coventry, UK; the School of Slavonic and East European Studies, London, UK; the Harm A. Weber Library, Elgin, near Chicago, USA. These buildings are briefly described and the sizes of the as-built ANV features compared with the target values for use in preliminary design. The three buildings represent successive evolutionary stages: from advanced natural ventilation, to ANV with passive downdraught cooling, and finally ANV with HVAC support. Hopefully the guidance, simple calculation tools and case study examples will give architects and environmental design consultants confidence to embark on the design of ANV buildings. (author)
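
    The record does not reproduce the paper's preliminary-design equations; as a stand-in, the textbook stack-effect estimate below (Python) shows the kind of relationship such sizing tools capture, with an assumed discharge coefficient of 0.6:

        import math

        def stack_airflow(area_m2, height_m, t_in_c, t_out_c, cd=0.6):
            """Textbook stack-ventilation estimate of volume flow in m^3/s:
            Q = cd * A * sqrt(2 * g * H * dT / T_out), temperatures in kelvin."""
            t_in, t_out = t_in_c + 273.15, t_out_c + 273.15
            return cd * area_m2 * math.sqrt(
                2 * 9.81 * height_m * abs(t_in - t_out) / t_out)

        # e.g. a 1 m^2 opening on a 10 m stack with a 6 K indoor-outdoor difference
        print(f"{stack_airflow(1.0, 10.0, 26.0, 20.0):.2f} m^3/s")  # ~1.2 m^3/s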

  17. Bringing Advanced Computational Techniques to Energy Research

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  18. The Architectural Designs of a Nanoscale Computing Model

    Directory of Open Access Journals (Sweden)

    Mary M. Eshaghian-Wilner

    2004-08-01

    Full Text Available A generic nanoscale computing model is presented in this paper. The model consists of a collection of fully interconnected nanoscale computing modules, where each module is a cube of cells made out of quantum dots, spins, or molecules. The cells dynamically switch between two states by quantum interactions among their neighbors in all three dimensions. This paper includes a brief introduction to the field of nanotechnology from a computing point of view and presents a set of preliminary architectural designs for fabricating the nanoscale model studied.

  20. How the Common Component Architecture Advances Computational Science

    Energy Technology Data Exchange (ETDEWEB)

    Kumfert, G; Bernholdt, D; Epperly, T; Kohl, J; McInnes, L C; Parker, S; Ray, J

    2006-06-19

    Computational chemists are using Common Component Architecture (CCA) technology to increase the parallel scalability of their application ten-fold. Combustion researchers are publishing science faster because the CCA manages software complexity for them. Both the solver and meshing communities in SciDAC are converging on community interface standards as a direct response to the novel level of interoperability that CCA presents. Yet, there is much more to do before component technology becomes mainstream computational science. This paper highlights the impact that the CCA has made on scientific applications, conveys some lessons learned from five years of the SciDAC program, and previews where applications could go with the additional capabilities that the CCA has planned for SciDAC 2.

  1. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound up: intelligent robots embody system integration by using intelligent systems. One can say that intelligent systems are to cell units what intelligent robots are to body components. The two technologies have progressed in synchrony. Leveraging robotics and intelligent systems, applications cover a boundless range from our daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance and logistics. This book aims at presenting research results relevant to intelligent robotics technology. We propose to researchers and practitioners some methods to advance intelligent systems and apply them to advanced robotics technology. This book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization and inspection robots. Th...

  2. OS friendly microprocessor architecture: Hardware level computer security

    Science.gov (United States)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time, and have depended on the operating system for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high-performance and secure microprocessor and OS system. We invite cyber security, information technology (IT), and SCADA control professionals to review the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware: by extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.
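
    A software emulation of the idea of attaching permission bits to cache addresses (purely illustrative; the OSFA performs these checks in hardware, and its actual structures are not reproduced here):

        from enum import Flag, auto

        class Perm(Flag):
            R = auto()
            W = auto()
            X = auto()

        class CacheBank:
            """Toy cache bank where every address carries permission bits
            that are checked on each access, as the OSFA does in hardware."""
            def __init__(self, size):
                self.data = bytearray(size)
                self.perm = [Perm.R | Perm.W] * size

            def load(self, addr):
                if Perm.R not in self.perm[addr]:
                    raise PermissionError(f"read denied at {addr:#x}")
                return self.data[addr]

            def store(self, addr, value):
                if Perm.W not in self.perm[addr]:
                    raise PermissionError(f"write denied at {addr:#x}")
                self.data[addr] = value

        bank = CacheBank(64)
        bank.store(0x10, 42)
        bank.perm[0x10] = Perm.R  # revoke write permission for this address
        print(bank.load(0x10))    # 42; a further store would now raise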

  3. Using EDUCache Simulator for the Computer Architecture and Organization Course

    Directory of Open Access Journals (Sweden)

    Sasko Ristov

    2013-07-01

    Full Text Available The computer architecture and organization course is essential in all computer science and engineering programs, and the most selected and liked elective course for related engineering disciplines. However, this attractiveness brings a new challenge: it requires a lot of effort by the instructor to explain rather complicated concepts to beginners or to those who study related disciplines. The usage of visual simulators can improve both the teaching and learning processes. The overall goal is twofold: 1) to enable a visual environment to explain the basic concepts and 2) to increase the students' willingness and ability to learn the material. A lot of visual simulators have been used for the computer architecture and organization course. However, due to the lack of visual simulators for the cache memory concepts, we have developed a new visual simulator, the EDUCache simulator. In this paper we show that it can be effectively and efficiently used as a supporting tool in the learning process of modern multi-layer, multi-cache and multi-core multi-processors. EDUCache's features enable an environment for performance evaluation and engineering of software systems, i.e. the students will also understand the importance of computer architecture building blocks and, hopefully, will increase their curiosity for hardware courses in general.
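
    The core bookkeeping such simulators visualize fits in a few lines; this direct-mapped sketch (illustrative, not EDUCache's implementation) splits an address into tag, index and block offset and counts hits and misses:

        class DirectMappedCache:
            """Toy direct-mapped cache: address -> (tag, index, offset)."""
            def __init__(self, lines=8, block=16):
                self.lines, self.block = lines, block
                self.tags = [None] * lines
                self.hits = self.misses = 0

            def access(self, addr):
                index = (addr // self.block) % self.lines
                tag = addr // (self.block * self.lines)
                if self.tags[index] == tag:
                    self.hits += 1
                else:
                    self.misses += 1
                    self.tags[index] = tag  # fill the line on a miss

        c = DirectMappedCache()
        for a in (0, 4, 128, 0):  # 4 hits line 0; 128 evicts it; 0 misses again
            c.access(a)
        print(c.hits, c.misses)   # -> 1 3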

  4. Improving Software Performance in the Compute Unified Device Architecture

    Directory of Open Access Journals (Sweden)

    Alexandru PIRJAN

    2010-01-01

    Full Text Available This paper analyzes several aspects of improving software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management through transpose kernels. We also benchmark and evaluate performance while progressively optimizing a matrix transpose application in CUDA. One particular interest was to research how well the optimization techniques, applied to software applications written in CUDA, scale to the latest generation of general-purpose graphics processing units (GPGPUs), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately, there has been considerable interest in the literature in this type of optimization analysis, but none of the works so far (to the best of our knowledge) has tried to validate whether the optimizations apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales to these software performance-improving techniques.
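
    The tiling idea behind optimized transpose kernels can be illustrated without a GPU; the numpy sketch below only shows the blocked access pattern that a CUDA kernel would stage through shared memory so that both reads and writes coalesce (tile size and names are illustrative):

        import numpy as np

        def transpose_tiled(a, tile=32):
            """Transpose tile by tile; on the GPU each tile would live in
            shared memory, turning strided global accesses into coalesced ones."""
            n, m = a.shape
            out = np.empty((m, n), dtype=a.dtype)
            for i in range(0, n, tile):
                for j in range(0, m, tile):
                    out[j:j + tile, i:i + tile] = a[i:i + tile, j:j + tile].T
            return out

        a = np.arange(12).reshape(3, 4)
        assert (transpose_tiled(a, 2) == a.T).all()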

  5. Preface (to: Advances in Computer Entertainment)

    NARCIS (Netherlands)

    Romão, Teresa; Nijholt, Antinus; Romão, Teresa; Reidsma, Dennis

    2012-01-01

    These are the proceedings of the 9th International Conference on Advances in Computer Entertainment ACE 2012). ACE has become the leading scientific forum for dissemination of cutting-edge research results in the area of entertainment computing. Interactive entertainment is one of the most vibrant

  6. Computational Advances for and from Bayesian Analysis

    OpenAIRE

    Andrieu, C.; Doucet, A.; Robert, C. P.

    2004-01-01

    The emergence in the past years of Bayesian analysis in many methodological and applied fields as the solution to the modeling of complex problems cannot be dissociated from major changes in its computational implementation. We show in this review how the advances in Bayesian analysis and statistical computation are intermingled.

  8. Nanotube devices based crossbar architecture: toward neuromorphic computing.

    Science.gov (United States)

    Zhao, W S; Agnus, G; Derycke, V; Filoramo, A; Bourgoin, J-P; Gamrat, C

    2010-04-30

    Nanoscale devices such as carbon nanotube- and nanowire-based transistors, memristors and molecular devices are expected to play an important role in the development of new computing architectures. While their size represents a decisive advantage in terms of integration density, it also raises the critical question of how to efficiently address large numbers of densely integrated nanodevices without the need for complex multi-layer interconnection topologies similar to those used in CMOS technology. Two-terminal programmable devices in crossbar geometry seem particularly attractive, but suffer from severe addressing difficulties due to cross-talk, which implies complex programming procedures. Three-terminal devices can be easily addressed individually, but with limited gain in terms of interconnect integration. We show how optically gated carbon nanotube devices enable efficient individual addressing when arranged in a crossbar geometry with shared gate electrodes. This topology is particularly well suited for parallel programming or learning in the context of neuromorphic computing architectures.
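
    The addressing difficulty mentioned above can be seen in the common V/2 biasing scheme for two-terminal crossbars (a generic textbook scheme, not this paper's optical-gating solution): the target device sees the full programming voltage, while every other device on the selected row and column is half-selected and slowly disturbed.

        import numpy as np

        def program_bias(shape, row, col, v_full=1.0):
            """Voltage seen by each device when row/col lines are driven at V/2."""
            bias = np.zeros(shape)
            bias[row, :] += v_full / 2  # selected word line
            bias[:, col] += v_full / 2  # selected bit line
            return bias                 # only bias[row, col] reaches v_full

        b = program_bias((4, 4), 1, 2)
        print(b[1, 2], b[1, 0], b[0, 2])  # 1.0 (target) vs 0.5, 0.5 (half-selected)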

  9. Advanced Sensor Platform to Evaluate Manloads For Exploration Suit Architectures

    Science.gov (United States)

    McFarland, Shane; Pierce, Gregory

    2016-01-01

    Space suit manloads are defined as the outer bounds of force that the human occupant of a suit is able to exert onto the suit during motion. They are defined on a suit-component basis as a unit of maximum force that the suit component in question must withstand without failure. Existing legacy manloads requirements are specific to the suit architecture of the EMU and were developed in an iterative fashion; however, future exploration needs dictate a new suit architecture with bearings, load paths, and entry capability not previously used in any flight suit. No capability currently exists to easily evaluate manloads imparted by a suited occupant, which would be required to develop requirements for a flight-rated design. However, sensor technology has now progressed to the point where an easily deployable, repeatable and flexible manloads measuring technique could be developed, leveraging recent advances in sensor technology. INNOVATION: This development positively impacts the schedule, cost and safety risk associated with new suit exploration architectures. For a final flight design, a comprehensive and accurate manloads requirements set must be communicated to the contractor; failing that, a suit design which does not meet necessary manloads limits is prone to failure during testing or, worse, during an EVA, which could cause catastrophic failure of the pressure garment, posing risk to the crew. This work facilitates a viable means of developing manloads requirements using a range of human sizes and strengths. OUTCOME / RESULTS: Performed sensor market research. Highlighted three viable options (primary, secondary, and flexible packaging option). Designed and fabricated a custom bracket to evaluate the primary option on a single suit axial. Completed manned suited manloads testing and verified the general approach.

  10. Integration of nanoscale memristor synapses in neuromorphic computing architectures

    Science.gov (United States)

    Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis

    2013-09-01

    Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low-power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the offerings of existing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue that this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
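
    A behavioural sketch, under assumed parameters, of the bounded analog synaptic update a memristive device could emulate; the pair-based STDP rule and time constant below are illustrative, not the circuit proposed in the paper:

        import math

        class MemristorSynapse:
            """Conductance bounded in [g_min, g_max], nudged by spike timing:
            pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
            def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.1, tau_ms=20.0):
                self.g, self.g_min, self.g_max = g, g_min, g_max
                self.rate, self.tau_ms = rate, tau_ms

            def spike_pair(self, dt_ms):
                """dt_ms = t_post - t_pre for one pre/post spike pair."""
                dw = self.rate * math.exp(-abs(dt_ms) / self.tau_ms)
                self.g += dw if dt_ms > 0 else -dw
                self.g = min(self.g_max, max(self.g_min, self.g))
                return self.g

        s = MemristorSynapse()
        print(s.spike_pair(+5.0))  # potentiation: conductance rises
        print(s.spike_pair(-5.0))  # depression: conductance falls back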

  11. A hybrid computational grid architecture for comparative genomics.

    Science.gov (United States)

    Singh, Aarti; Chen, Chen; Liu, Weiguo; Mitchell, Wayne; Schmidt, Bertil

    2008-03-01

    Comparative genomics provides a powerful tool for studying evolutionary changes among organisms, helping to identify genes that are conserved among species, as well as genes that give each organism its unique characteristics. However, the huge datasets involved make this approach impractical on traditional computer architectures, leading to prohibitively long runtimes. In this paper, we present a new computational grid architecture based on a hybrid computing model to significantly accelerate comparative genomics applications. The hybrid computing model consists of two types of parallelism: coarse-grained and fine-grained. The coarse-grained parallelism uses a volunteer computing infrastructure for job distribution, while the fine-grained parallelism uses commodity computer graphics hardware for fast sequence alignment. We present the deployment and evaluation of this approach on our grid test bed for the all-against-all comparison of microbial genomes. The results of this comparison are then used by the phenotype-genotype explorer (PheGee). PheGee is a new tool that nominates candidate genes responsible for a given phenotype.

  12. Advance Trends in Soft Computing

    CERN Document Server

    Kreinovich, Vladik; Kacprzyk, Janusz; WCSC 2013

    2014-01-01

    This book is the proceedings of the 3rd World Conference on Soft Computing (WCSC), which was held in San Antonio, TX, USA, on December 16-18, 2013. It presents start-of-the-art theory and applications of soft computing together with an in-depth discussion of current and future challenges in the field, providing readers with a 360 degree view on soft computing. Topics range from fuzzy sets, to fuzzy logic, fuzzy mathematics, neuro-fuzzy systems, fuzzy control, decision making in fuzzy environments, image processing and many more. The book is dedicated to Lotfi A. Zadeh, a renowned specialist in signal analysis and control systems research who proposed the idea of fuzzy sets, in which an element may have a partial membership, in the early 1960s, followed by the idea of fuzzy logic, in which a statement can be true only to a certain degree, with degrees described by numbers in the interval [0,1]. The performance of fuzzy systems can often be improved with the help of optimization techniques, e.g. evolutionary co...

  13. Proceedings: Workshop on Advanced Mathematics and Computer Science for Power Systems Analysis

    Energy Technology Data Exchange (ETDEWEB)

    None

    1991-08-01

    EPRI's Office of Exploratory Research sponsors a series of workshops that explore how to apply recent advances in mathematics and computer science to the problems of the electric utility industry. In this workshop, participants identified research objectives that may significantly improve the mathematical methods and computer architecture currently used for power system analysis.

  14. Towards Energy-Centric Computing and Computer Architecture

    CERN Document Server

    CERN. Geneva

    2010-01-01

    Technology forecasts indicate that device scaling will continue well into the next decade. Unfortunately, it is becoming extremely difficult to harness this increase in the number of transistors into performance, due to a number of technological, circuit, architectural, methodological and programming challenges. In this talk, I will argue that the key emerging showstopper is power. Voltage scaling as a means to maintain a constant power envelope with an increase in transistor numbers is hitting diminishing returns. As such, to continue riding Moore's law we need to look for drastic measures to cut power. This is definitely the case for server chips in future datacenters, where abundant server parallelism, redundancy and 3D chip integration are likely to remove programming, reliability and bandwidth hurdles, leaving power as the only true limiter. I will present results backing this argument based on validated models for f...

  15. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...

  16. Architectural requirements for the Red Storm computing system.

    Energy Technology Data Exchange (ETDEWEB)

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  17. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  18. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  19. An evolutionary method for synthesizing technological planning and architectural advance

    Science.gov (United States)

    Cole, Bjorn Forstrom

    In the development of systems with ever-increasing performance and/or decreasing drawbacks, there inevitably comes a point where more progress is available by shifting to a new set of principles of use. This shift marks a change in architecture, such as between the piston-driven propeller and the jet engine. The shift also often involves an abandonment of previous competencies that have been developed with great effort, and so foreknowledge of these shifts can be advantageous. A further motivation for this work is the consideration of the Micro Autonomous Systems and Technology (MAST) project, which aims to develop very small (final graph-based genetic algorithm. This algorithm is then implemented in a design code called Sindri, which leverages a commercial design tool named Pacelab. The first chapters of this thesis provide context and a philosophical background to the studies and research that were conducted. In particular, the idea that technology progresses in a fundamentally gradual way is developed and supported with previous historical research. The import of this is that the future can to some degree be predicted by the past, provided that the appropriate technological antecedents are accounted for in developing the projection. The third chapter of the thesis compiles a series of observations and philosophical considerations into a series of research questions. Some research questions are then answered with further thought, observation, and reading, leading to conjectures on the problem. The remainder require some form of experimentation, and so are used to formulate hypotheses. Falsifiability conditions are then generated from those hypotheses and used to guide the development of experiments to be performed, in this case on a computer, upon various conditions of use of a genetic algorithm. The fourth chapter of the thesis walks through the formulation of a method to attack the problem of strategically choosing an architecture. This method is designed to

  20. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    Directory of Open Access Journals (Sweden)

    P. O. Umenne

    2012-12-01

    Full Text Available Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and, when the task terminates, send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. The Swarm and HYDRA computer architectures for Agents' execution were developed at the University of Surrey, UK in the 90s. The objective of the research was to develop a software-based computer architecture on which Agents' execution could be explored. The combination of Intelligent Agents and the HYDRA computer architecture gave rise to a new computer concept: the NET-Computer, in which the computing resources reside on the Internet. The Internet computers form the hardware and software resources, and the user is provided with a simple interface to access the Internet and run user tasks. The Agents autonomously roam the Internet (the NET-Computer) executing the tasks. A growing segment of the Internet is E-Commerce for online shopping for products and services. The Internet computing resources provide a marketplace for product suppliers and consumers alike. Consumers are looking for suppliers selling products and services, while suppliers are looking for buyers. Searching the vast amount of information available on the Internet causes a great deal of problems for both consumers and suppliers. Intelligent Agents executing on the NET-Computer can surf through the Internet and select specific information of interest to the user. The simulation results show that Intelligent Agents executing on the HYDRA computer architecture could be applied in E-Commerce.

  1. Thrifty: An Exascale Architecture for Energy Proportional Computing

    Energy Technology Data Exchange (ETDEWEB)

    Torrellas, Josep [Univ. of Illinois, Champaign, IL (United States)

    2014-12-23

    The objective of this project is to design different aspects of a novel exascale architecture called Thrifty. Our goal is to focus on the challenges of power/energy efficiency, performance, and resiliency in exascale systems. The project includes work on computer architecture (Josep Torrellas from University of Illinois), compilation (Daniel Quinlan from Lawrence Livermore National Laboratory), runtime and applications (Laura Carrington from University of California San Diego), and circuits (Wilfred Pinfold from Intel Corporation). In this report, we focus on the progress at the University of Illinois during the last year of the grant (September 1, 2013 to August 31, 2014). We also point to the progress in the other collaborating institutions when needed.

  2. Architecture Research of Non-Stop Computer System

    Institute of Scientific and Technical Information of China (English)

    LIU Xinsong; QIU Yuanjie; YANG Feng; YAN Gongjun; GU Pan; GAO Ke

    2004-01-01

    The distributed and parallel server system with a distributed and parallel I/O interface has solved the bottleneck between server system and client system, and has also solved the rebuilding problem after a system fault. However, the system still has some shortcomings: the switch is the system bottleneck, and the system is not adapted to WAN (Wide area network). Therefore, we put forward a new system architecture to overcome these shortcomings and develop the non-stop computer system. The basis of a non-stop system is rebuilding after a system fault. The inner architecture of a non-stop system must be redundant, and the redundancy is fault-tolerance redundancy based on a distributed mechanism, not backup redundancy. Analysis and test results show that the system rebuild time after a fault is on the second scale, and its rebuild capability is strong enough that the system can be non-stop over its lifetime.

  3. Contagious architecture: computation, aesthetics, and space (technologies of lived abstraction)

    CERN Document Server

    Parisi, Luciana

    2013-01-01

    In Contagious Architecture, Luciana Parisi offers a philosophical inquiry into the status of the algorithm in architectural and interaction design. Her thesis is that algorithmic computation is not simply an abstract mathematical tool but constitutes a mode of thought in its own right, in that its operation extends into forms of abstraction that lie beyond direct human cognition and control. These include modes of infinity, contingency, and indeterminacy, as well as incomputable quantities underlying the iterative process of algorithmic processing. The main philosophical source for the project is Alfred North Whitehead, whose process philosophy is specifically designed to provide a vocabulary for "modes of thought" exhibiting various degrees of autonomy from human agency even as they are mobilized by it. Because algorithmic processing lies at the heart of the design practices now reshaping our world -- from the physical spaces of our built environment to the networked spaces of digital culture -- the nature o...

  4. Computation of Asteroid Proper Elements: Recent Advances

    Science.gov (United States)

    Knežević, Z.

    2017-06-01

    The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in computation and stability assessment of proper elements, these advances can still be considered as important improvements offering solutions to some practical problems encountered in the past. The problem of getting unrealistic values of perihelion frequency for very low eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of Astraea asteroid family. The preliminary assessment of stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and advocates in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of more comprehensive and reliable direct estimate of their individual and sample average deviations from constancy.
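
    The frequency computation mentioned above can be illustrated by locating the dominant peak of a periodogram; the actual frequency-modified Fourier transform refines such a raw FFT estimate, so this sketch (ours, with invented numbers) shows only the starting point:

      import numpy as np

      # Synthetic quasi-periodic signal standing in for a time series of
      # orbital elements; we recover its dominant frequency from the FFT peak.
      dt = 0.5
      t = np.arange(4096) * dt
      true_freq = 0.0173
      signal = np.cos(2 * np.pi * true_freq * t + 0.4)

      spectrum = np.abs(np.fft.rfft(signal))
      freqs = np.fft.rfftfreq(len(t), d=dt)
      estimate = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
      print(f"true {true_freq:.5f}  estimated {estimate:.5f}")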

  5. Computational Swarming: A Cultural Technique for Generative Architecture

    Directory of Open Access Journals (Sweden)

    Sebastian Vehlken

    2014-11-01

    Full Text Available After a first wave of digital architecture in the 1990s, the last decade saw some approaches where agent-based modelling and simulation (ABM) was used for generative strategies in architectural design. By taking advantage of the self-organisational capabilities of computational agent collectives whose global behaviour emerges from the local interaction of a large number of relatively simple individuals (as it does, for instance, in animal swarms), architects are able to understand buildings and urbanscapes in a novel way as complex spaces that are constituted by the movement of multiple material and informational elements. As a major, zoo-technological branch of ABM, Computational Swarm Intelligence (SI) coalesces all kinds of architectural elements – materials, people, environmental forces, traffic dynamics, etc. – into a collective population. Thereby, SI and ABM initiate a shift from geometric or parametric planning to time-based and less prescriptive software tools. Agent-based applications of this sort are used to model solution strategies in a number of areas where opaque and complex problems present themselves – from epidemiology to logistics, and from market simulations to crowd control. This article seeks to conceptualise SI and ABM as a fundamental and novel cultural technique for governing dynamic processes, taking their employment in generative architectural design as a concrete example. In order to avoid a rather conventional application of philosophical theories to this field, the paper explores how the procedures of such technologies can be understood in relation to the media-historical concept of Cultural Techniques.
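
    A minimal agent-based sketch of the self-organisation described above (our illustration, not one of the design tools discussed): each agent follows a single local cohesion rule, and clustering emerges globally without any central plan:

      import random

      N, STEPS, RADIUS, GAIN = 30, 100, 0.3, 0.05
      agents = [[random.random(), random.random()] for _ in range(N)]

      for _ in range(STEPS):
          for a in agents:
              # Neighbours are the agents within the local perception radius.
              nbrs = [b for b in agents if b is not a
                      and (a[0] - b[0])**2 + (a[1] - b[1])**2 < RADIUS**2]
              if nbrs:
                  cx = sum(b[0] for b in nbrs) / len(nbrs)
                  cy = sum(b[1] for b in nbrs) / len(nbrs)
                  a[0] += GAIN * (cx - a[0])   # steer toward the local centroid
                  a[1] += GAIN * (cy - a[1])

      # Rounding reveals how many distinct positions remain after clustering.
      clusters = {(round(a[0], 2), round(a[1], 2)) for a in agents}
      print(f"{N} agents collapsed to {len(clusters)} distinct positions")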

  6. Managing Security in Advanced Computational Infrastructure

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Proposed by the Education Ministry of China, the Advanced Computational Infrastructure (ACI) aims at sharing geographically distributed high-performance computing and huge-capacity data resources among the universities of China. With the fast development of large-scale applications in ACI, the security requirements have become more and more urgent. The special security needs of ACI are first analyzed in this paper, and a security management system based on ACI is presented. Finally, the realization of the security management system is discussed.

  7. Advances and Challenges in Computational Plasma Science

    Energy Technology Data Exchange (ETDEWEB)

    W.M. Tang; V.S. Chan

    2005-01-03

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically-confined plasmas are reviewed in this paper with illustrative examples chosen from associated research areas such as microturbulence, magnetohydrodynamics, and other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology.

  8. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. The book is designed to extend the existing literature to the latest developments in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, the spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency- and time-domain integral equations, and statistical methods in bio-electromagnetics.

  9. A Novel Architecture of Multi-GPU Computing Card

    Directory of Open Access Journals (Sweden)

    Sen Guo

    2013-08-01

    Full Text Available The data transmission between GPUs in an existing multi-GPU computing card often goes through PCIe, which is relatively slow, so PCIe has become the bottleneck of overall performance. A novel architecture for a multi-GPU computing card is proposed in this paper: a multi-channel memory with multiple interfaces is added, including one common interface shared by the different GPUs, which is connected to an FPGA arbitration circuit, and several other interfaces connected independently to the dedicated GPUs' frame buffers; this multi-channel memory is called "global shared memory". The result of a simulation accelerating computed tomography algebraic reconstruction on multiple GPUs demonstrates the effectiveness of this approach.

  10. Biomorphic Multi-Agent Architecture for Persistent Computing

    Science.gov (United States)

    Lodding, Kenneth N.; Brewster, Paul

    2009-01-01

    A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more components of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.

  11. Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community

    Science.gov (United States)

    Ahmad, Mushtaq

    2008-05-01

    The paper reviews the imminent architectural aspects of Grid Computing for the e-Science community, covering scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high-speed internet, safe authentication and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for those scientific and engineering research projects and business/commercial applications which are heavily compute intensive and/or require humongous amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems and the treasure of knowledge available around the globe under the umbrella of knowledge sharing. Thus it helps realize the dream of a global village for the benefit of the e-Science community across the globe.

  12. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    Science.gov (United States)

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.

    2010-01-01

    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies aimed at enabling NASA's ability to explore beyond low Earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as Silicon-Germanium (SiGe)) to enhance a device's tolerance to radiation events and low-temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project shifts its focus to developing low-power, high-efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability gives avionic architectures the ability to develop FPGA-based, radiation-tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for

  13. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    Science.gov (United States)

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  15. Advanced Algebra and Trigonometry: Supplemental Computer Units.

    Science.gov (United States)

    Dotseth, Karen

    A set of computer-oriented, supplemental activities is offered which can be used with a course in advanced algebra and trigonometry. The activities involve use of the BASIC programming language; it is assumed that the teacher is familiar with programming in BASIC. Students will learn some BASIC; however, the intent is not to develop proficient…

  16. Computational Intelligence Paradigms in Advanced Pattern Classification

    CERN Document Server

    Jain, Lakhmi

    2012-01-01

    This monograph presents selected areas of application of pattern recognition and classification approaches, including handwriting recognition, medical image analysis and interpretation, development of cognitive systems for image computer understanding, moving object detection, advanced image filtration, and intelligent multi-object labelling and classification. It is directed to scientists, application engineers, professors and students, who will find this book useful.

  17. An Architectural Design System Based on Computer Graphics.

    Science.gov (United States)

    MacDonald, Stephen L.; Wehrli, Robert

    The recent developments in computer hardware and software are presented to inform architects of this design tool. Technical advancements in equipment include--(1) cathode ray tube displays, (2) light pens, (3) print-out and photo copying attachments, (4) controls for comparison and selection of images, (5) chording keyboards, (6) plotters, and (7)…

  18. Advanced information processing system: The Army Fault-Tolerant Architecture detailed design overview

    Science.gov (United States)

    Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven

    1994-01-01

    The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that occur during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap of the Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable, high-throughput computing platform for dependable service for flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Labs (CSDL). AFTA is a hard real-time, Byzantine fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development. This document contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, architecture, hardware design, operating systems design, systems performance measurements and analytical models.

  19. ARCHITECTURE OF WEB BASED COMPUTER-AIDED MANUFACTURING SYSTEM

    Directory of Open Access Journals (Sweden)

    N. E. Filyukov

    2014-09-01

    Full Text Available The paper deals with the design of a web-based system for Computer-Aided Manufacturing (CAM). Remote applications and databases located in a "private cloud" are proposed as the basis of such a system. The suggested approach comprises: a service-oriented architecture, the use of web applications and web services as modules, multi-agent technologies for implementing information exchange among the components of the system, and the use of a PDM system for managing technology projects within the CAM. The proposed architecture involves converting the CAM into a corporate information system that provides coordinated functioning of subsystems based on a common information space, parallelizes collective work on technology projects, and provides effective control of production planning. A system has been developed within this architecture that makes it fairly simple to connect technological subsystems and implement their interaction. The system makes it possible to produce a CAM configuration for a particular company from the set of developed subsystems and databases, specifying appropriate access rights for employees of the company. The proposed approach simplifies maintenance of software and information support for CAM subsystems due to their central location in the data center. The results can be used as a basis for CAM design and testing within the learning process, for development and modernization of the system algorithms, and then can be tested in the extended enterprise.

  20. Modern hardware architectures accelerate porous media flow computations

    Science.gov (United States)

    Kulczewski, Michal; Kurowski, Krzysztof; Kierzynka, Michal; Dohnalik, Marek; Kaczmarczyk, Jan; Borujeni, Ali Takbiri

    2012-05-01

    Investigation of rock properties, porosity and permeability particularly, which determine transport media characteristics, is crucial to reservoir engineering. Nowadays, micro-tomography (micro-CT) methods allow one to obtain a vast range of petrophysical properties. The micro-CT method facilitates visualization of pore structures and acquisition of the total porosity factor, determined by sticking together 2D slices of scanned rock and applying a proper absorption cut-off point. Proper segmentation of the pore representation in 3D is important for solving the permeability of porous media. This factor is nowadays determined by means of Computational Fluid Dynamics (CFD), a popular method to analyze problems related to fluid flows, taking advantage of numerical methods and constantly growing computing power. The recent advent of novel multi-, many-core and graphics processing unit (GPU) hardware architectures allows scientists to benefit even more from parallel processing and built-in new features. The high level of parallel scalability offers both a decrease in time-to-solution and greater accuracy - top factors in reservoir engineering. This paper aims to present research results related to fluid flow simulations, particularly solving the total porosity and permeability of porous media, taking advantage of modern hardware architectures. In our approach total porosity is calculated by means of general-purpose computing on multiple GPUs. This application sticks together 2D slices of scanned rock and, by means of a marching tetrahedra algorithm, creates a 3D representation of pores and calculates the total porosity. Experimental results are compared with data obtained via other popular methods, including Nuclear Magnetic Resonance (NMR), helium porosity and nitrogen permeability tests. Then CFD simulations are performed on a large-scale high performance hardware architecture to solve the flow and permeability of porous media. In our experiments we used Lattice Boltzmann
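
    The total-porosity step described above reduces to a voxel count once the slices are stacked and thresholded; the following toy sketch uses synthetic data in place of real micro-CT slices (the paper's pipeline additionally builds a 3D surface with marching tetrahedra and runs on GPUs):

      import numpy as np

      rng = np.random.default_rng(0)
      # Fake absorption maps standing in for segmented 2D micro-CT slices.
      slices = [rng.uniform(0.0, 1.0, (64, 64)) for _ in range(64)]

      volume = np.stack(slices, axis=0)   # "stick together" the 2D slices
      cutoff = 0.35                       # absorption cut-off: pore vs. grain
      pores = volume < cutoff             # True where a voxel is pore space
      porosity = pores.mean()             # pore voxels / total voxels
      print(f"total porosity: {porosity:.3f}")   # ~0.35 for uniform noise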

  1. Technology advances and market forces: Their impact on high performance architectures

    Science.gov (United States)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.

  2. Computing Architecture of the ALICE Detector Control System

    CERN Document Server

    Augustinus, A; Moreno, A; Kurepin, A N; De Cataldo, G; Pinazza, O; Rosinský, P; Lechman, M; Jirdén, L S

    2011-01-01

    The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, mechanisms for handling the large data amounts and information exchange with external systems. One of the key operational requirements is an intuitive, error proof and robust user interface allowing for simple operation of the experiment. At the same time the typical operator task, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.

  3. An ATLAS distributed computing architecture for HL-LHC

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2017-01-01

    The ATLAS collaboration started a process to understand the computing needs for the High Luminosity LHC era. Based on our best understanding of the computing model input parameters for the HL-LHC data taking conditions, results indicate the need for a larger amount of computational and storage resources with respect to the projection of a constant yearly budget for computing in 2026. Filling the gap between the projection and the needs will be one of the challenges in preparation for LHC Run-4. While the gains from improvements in offline software will play a crucial role in this process, a different model for data processing, management, access and bookkeeping should also be envisaged to optimise resource usage. In this contribution we will describe a straw man of this model, founded on basic principles such as single event level granularity for data processing and virtual data. We will explain how the current architecture will evolve adiabatically into the future distributed computing system, through the prot...

  4. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments.
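
    The Gödelization step can be sketched in a few lines (our simplification to one-sided sequences): a symbol string over an alphabet of size b becomes a point in [0, 1), and the symbolic shift becomes arithmetic on that point:

      def godelize(symbols, b):
          """Map a sequence of integers in [0, b) to a real number in [0, 1)."""
          return sum(s * b**-(i + 1) for i, s in enumerate(symbols))

      b = 3
      word = [2, 0, 1, 1, 2]            # a word over the alphabet {0, 1, 2}
      x = godelize(word, b)
      shifted = (b * x) % 1.0           # the shift map: drops the first symbol
      print(x, shifted, godelize(word[1:], b))   # last two agree (up to float error)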

  5. Cooperative Computing Techniques for a Deeply Fused and Heterogeneous Many-Core Processor Architecture

    Institute of Scientific and Technical Information of China (English)

    郑方; 李宏亮; 吕晖; 过锋; 许晓红; 谢向辉

    2015-01-01

    Due to advances in semiconductor techniques, many-core processors have been widely used in high performance computing. However, many applications still cannot be carried out efficiently due to the memory wall, which has become a bottleneck in many-core processors. In this paper, we present a novel heterogeneous many-core processor architecture named deeply fused many-core (DFMC) for high performance computing systems. DFMC integrates management processing elements (MPEs) and computing processing elements (CPEs), which are heterogeneous processor cores for different application features with a unified ISA (instruction set architecture), a unified execution model, and shared memory that supports cache coherence. The DFMC processor can alleviate the memory wall problem by combining a series of cooperative computing techniques of CPEs, such as multi-pattern data stream transfer, an efficient register-level communication mechanism, and a fast hardware synchronization technique. These techniques are able to improve on-chip data reuse and optimize memory access performance. This paper illustrates an implementation of a full system prototype based on FPGA with four MPEs and 256 CPEs. Our experimental results show that the effect of the cooperative computing techniques of CPEs is significant, with DGEMM (double-precision matrix multiplication) achieving an efficiency of 94%, FFT (fast Fourier transform) obtaining a performance of 207 GFLOPS and FDTD (finite-difference time-domain) obtaining a performance of 27 GFLOPS.

  6. The RISC (Reduced Instruction Set Computer) Architecture and Computer Performance Evaluation.

    Science.gov (United States)

    1986-03-01

    Approved for public release; distribution is unlimited. The RISC Architecture and Computer...

  7. Moon-Based Advanced Reusable Transportation Architecture: The MARTA Project

    Science.gov (United States)

    Alexander, R.; Bechtel, R.; Chen, T.; Cormier, T.; Kalaver, S.; Kirtas, M.; Lewe, J.-H.; Marcus, L.; Marshall, D.; Medlin, M.; McIntire, J.; Nelson, D.; Remolina, D.; Scott, A.; Weglian, J.; Olds, J.

    2000-01-01

    The Moon-based Advanced Reusable Transportation Architecture (MARTA) Project conducted an in-depth investigation of possible Low Earth Orbit (LEO) to lunar surface transportation systems capable of sending both astronauts and large masses of cargo to the Moon and back. This investigation was conducted from the perspective of a private company operating the transportation system for a profit. The goal of this company was to provide an Internal Rate of Return (IRR) of 25% to its shareholders. The technical aspect of the study began with a wide-open design space that included nuclear rockets and tether systems as possible propulsion systems. Based on technical, political, and business considerations, the architecture was quickly narrowed down to a traditional chemical rocket using liquid oxygen and liquid hydrogen. However, three additional technologies were identified for further investigation: aerobraking, in-situ resource utilization (ISRU), and a mass driver on the lunar surface. These three technologies were identified because they reduce the mass of propellant used. Operational costs are the largest expense, with propellant cost the largest contributor. ISRU, the production of materials using resources on the Moon, was considered because an Earth to Orbit (ETO) launch cost of $1,600 per kilogram made taking propellant from the Earth's surface an expensive proposition. The use of an aerobrake to circularize the orbit of a vehicle coming from the Moon towards Earth eliminated 3,100 meters per second of velocity change (Delta V), almost 30% of the 11,200 m/s required for one complete round trip. The use of a mass driver on the lunar surface, in conjunction with an ISRU production facility, would reduce the amount of propellant required by eliminating the use of propellant to lift additional propellant from the lunar surface to Low Lunar Orbit (LLO). However, developing and operating such a system required further study to determine whether it was cost effective. The

  8. Proceedings: 1989 conference on advanced computer technology for the power industry

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, B. (ed.)

    1990-07-01

    An EPRI conference to address advanced computer technology was hosted by Arizona Public Service in Scottsdale, Arizona, December 4-6, 1989. Participants represented US and foreign utilities, major electric and computer industry vendors, R&D contractors, and consulting firms. These proceedings contain the text of the technical presentations and summaries of the panel discussions. The conference objectives were: to assess modern computer technologies and how they will affect utility operations; to share US and foreign utility experiences in developing computer-based technical products; and to discuss research conducted by EPRI in advanced computer technology on behalf of its utility members. Technical presentations addressed a broad range of computer-related topics: computer-based training; engineering workstations; hypermedia and other advanced user interfaces; networks and communications; expert systems and other decision-support methodologies; intelligent database management; supercomputing architectures and applications; real-time data processing; computerized technology and information transfer; and neural networks and other emerging technologies.

  9. Advanced quality prediction model for software architectural knowledge sharing

    NARCIS (Netherlands)

    Liang, Peng; Jansen, Anton; Avgeriou, Paris; Tang, Antony; Xu, Lai

    2011-01-01

    In the field of software architecture, a paradigm shift is occurring from describing the outcome of architecting process to describing the Architectural Knowledge (AK) created and used during architecting. Many AK models have been defined to represent domain concepts and their relationships, and the

  10. Advances in computers improving the web

    CERN Document Server

    Zelkowitz, Marvin

    2010-01-01

    This is volume 78 of Advances in Computers. This series, which began publication in 1960, is the oldest continuously published anthology that chronicles the ever-changing information technology field. In these volumes we publish from 5 to 7 chapters, three times per year, that cover the latest changes to the design, development, use and implications of computer technology on society today. Covers the full breadth of innovations in hardware, software, theory, design, and applications. Many of the in-depth reviews have become standard references that continue to be of significant, lasting value i

  11. White paper: A plan for cooperation between NASA and DARPA to establish a center for advanced architectures

    Science.gov (United States)

    Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.

    1986-01-01

    Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.

  12. An Overview of the Most Important Reference Architectures for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Razvan Daniel ZOTA

    2014-01-01

    Full Text Available In this paper we have presented the main characteristics of the most important reference architectures designed for the cloud computing environment. Specifically, we have introduced the proposed architectures of worldwide cloud computing companies like Cisco, IBM and VMware, and we also had a look at the National Institute of Standards and Technology (NIST) reference architecture, which is the starting point for all proposed architectures in the field. As one would expect, the provider-dependent reference architectures are written in such a way as to suit the services and products of the company, while NIST's architecture is a more general model with more comprehensive architectural details, which we highlight in this article. At the end of the article we draw some conclusions regarding the existing reference architectures for cloud computing.

  13. Advanced computational approaches to biomedical engineering

    CERN Document Server

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  14. Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  15. Architecture Framework for Trapped-Ion Quantum Computer based on Performance Simulation Tool

    Science.gov (United States)

    Ahsan, Muhammad

    The challenge of building a scalable quantum computer lies in striking an appropriate balance between designing a reliable system architecture from a large number of faulty computational resources and improving the physical quality of system components. The detailed investigation of performance variation with the physics of the components and the system architecture requires an adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling a quantum circuit on a realistic quantum hardware architecture with physical resource constraints, (2) evaluating performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify the system components which crucially define the overall performance. Using this versatile tool, we explore the vast design space for a modular quantum computer architecture based on trapped ions. We find that while the success probability is uniformly determined by the fidelity of the physical quantum operations, the execution time is a function of the system resources invested at various layers of the design hierarchy. At the physical level, the number of lasers performing quantum gates impacts the latency of fault-tolerant circuit block execution. When these blocks are used to construct meaningful arithmetic circuits such as quantum adders, the number of ancilla qubits for complicated non-Clifford gates and the entanglement resources needed to establish long-distance communication channels become major performance limiting factors. Next, in order to factorize large integers, these adders are assembled into the modular exponentiation circuit comprising the bulk of Shor's algorithm. At this stage, the overall scaling of resource-constrained performance with the size of the problem describes the effectiveness of the chosen design. By matching the resource investment with the pace of advancement in hardware technology
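
    A toy rendering of the two metrics (assumed formulas for illustration, not the thesis's actual models): success probability as a product of per-operation fidelities, and execution time as idealized serialization over a limited laser count:

      import math

      def success_probability(fidelity, n_ops):
          # All operations must succeed; independence is assumed here.
          return fidelity ** n_ops

      def execution_time(n_ops, n_lasers, gate_time):
          # Gates beyond the laser count must serialize (idealized scheduling).
          return math.ceil(n_ops / n_lasers) * gate_time

      print(success_probability(0.999, 5000))    # ~0.0067: why error correction matters
      print(execution_time(n_ops=5000, n_lasers=16, gate_time=1e-5))   # seconds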

  16. Scalable Quantum Computing Architecture with Mixed Species Ion Chains

    CERN Document Server

    Wright, John; Chou, Chen-Kuan; Graham, Richard D; Noel, Thomas W; Sakrejda, Tomasz; Zhou, Zichao; Blinov, Boris B

    2014-01-01

    We report on progress towards implementing mixed ion species quantum information processing for a scalable ion trap architecture. Mixed species chains may help solve several problems with scaling ion trap quantum computation to large numbers of qubits. Initial temperature measurements of linear Coulomb crystals containing barium and ytterbium ions indicate that the mass difference does not significantly impede cooling at low ion numbers. Average motional occupation numbers are estimated to be $\\bar{n} \\approx 130$ quanta per mode for chains with small numbers of ions, which is within a factor of three of the Doppler limit for barium ions in our trap. We also discuss generation of ion-photon entanglement with barium ions with a fidelity of $F \\ge 0.84$, which is an initial step towards remote ion-ion coupling in a more scalable quantum information architecture. Further, we are working to implement these techniques in surface traps in order to exercise greater control over ion chain ordering and positioning.

  17. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.

    2016-12-08

    Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.
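
    A behavioural sketch of the compare/write cycle described above (a software model with invented bit layouts, not the memristor hardware itself):

      def cam_cycle(cam_rows, key, compare_mask, write_mask):
          """Compare masked key bits against every row; write masked bits on match."""
          tags = []
          for row in cam_rows:
              match = all(row[i] == key[i] for i in compare_mask)  # activated bits only
              tags.append(match)
              if match:                       # the tag bit gates the masked write-back
                  for i in write_mask:
                      row[i] = key[i]
          return tags

      rows = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 1, 0, 1]]
      tags = cam_cycle(rows, key=[0, 1, 0, 0], compare_mask=[0, 1], write_mask=[3])
      print(tags, rows)   # rows 0 and 2 match on bits 0-1 and get bit 3 overwritten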

  18. A Component Architecture for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.
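
    The point about direct in-process connections can be illustrated with a generic provides/uses port toy (our sketch, not the real CCA interfaces): once connected, a cross-component call is an ordinary method call with no marshalling:

      class Component:
          def __init__(self, name):
              self.name, self.provides, self.uses = name, {}, {}

      def connect(user, provider, port_name):
          # Direct connection: the user holds the provider's object itself,
          # so an inter-component call costs no more than a method call.
          user.uses[port_name] = provider.provides[port_name]

      class Integrator:
          def integrate(self, f, a, b, n=1000):
              h = (b - a) / n                 # midpoint-rule quadrature
              return h * sum(f(a + (i + 0.5) * h) for i in range(n))

      solver, driver = Component("solver"), Component("driver")
      solver.provides["IntegratorPort"] = Integrator()
      connect(driver, solver, "IntegratorPort")
      print(driver.uses["IntegratorPort"].integrate(lambda x: x * x, 0.0, 1.0))  # ~1/3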

  19. On Computational Fluid Dynamics Tools in Architectural Design

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Hougaard, Mads; Stærdahl, Jesper Winther

    …engineering computational fluid dynamics (CFD) simulation program ANSYS CFX and the CFD-based representative program RealFlow are investigated. These two programs represent two types of CFD-based tools available for use during phases of an architectural design process. However, as outlined in two case studies, the suitability of the two program types for simulation of flow depends strongly on the purpose. One case presents results obtained with the programs with respect to the accuracy and physical behaviour of the flow. Another case deals with wind flow around a complex building design, the roof of the new Utzon Centre in Aalborg, Denmark. The obtained results show that detailed and accurate flow predictions can be obtained using a simulation tool like ANSYS CFX. On the other hand, RealFlow provides satisfactory flow results for evaluation of a proposed building shape in an early phase of a design process…

  20. Advanced Energy Conversion Technologies and Architectures for Earth and Beyond

    Science.gov (United States)

    Howell, Joe T.; Fikes, John C.; Phillips, Dane J.; Laycock, Rustin L.; ONeill, Mark; Henley, Mark W.; Fork, Richard L.

    2006-01-01

    Research, development and studies of novel space-based solar power systems, technologies and architectures for Earth and beyond are needed to reduce the cost of clean electrical power for terrestrial use and to provide a stepping stone for providing an abundance of power in space, i.e., manufacturing facilities, tourist facilities, delivery of power between objects in space, and between space and surface sites. The architectures, technologies and systems needed for space-to-Earth applications may also be used for in-space applications. Advances in key technologies, i.e., power generation, power management and distribution, power beaming and conversion of beamed power, are needed to achieve the objectives of both terrestrial and extraterrestrial applications. There is a need to produce "proof-of-concept" validation of critical WPT technologies for both near-term and far-term applications. Investments may be harvested in near-term beam-safe demonstrations of commercial WPT applications. Receiving sites (users) include ground-based stations for terrestrial electrical power, orbital sites to provide power for satellites and other platforms, future space elevator systems, space vehicle propulsion, and space surface sites. Space surface receiving sites of particular interest include the areas of permanent shadow near the Moon's north and south poles, where WPT technologies could enable access to ice and other useful resources for human exploration. This paper discusses work addressing a promising approach to solar power generation and beamed power conversion. The approach is based on a unique high-power solar concentrator array called the Stretched Lens Array (SLA), applied to both solar power generation and beamed power conversion. Since both versions (solar and laser) of SLA use many identical components (only the photovoltaic cells need to be different), economies of manufacturing and scale may be realized by using SLA on both ends of the laser power beaming

  1. Final Report: Super Instruction Architecture for Scalable Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, Beverly Ann [University of Florida; Bartlett, Rodney [University of Florida; Deumens, Erik [University of Florida

    2013-12-23

    The most advanced methods for reliable and accurate computation of the electronic structure of molecular and nano systems are the coupled-cluster techniques. These high-accuracy methods help us to understand, for example, how biological enzymes operate and contribute to the design of new organic explosives. The ACES III software provides a modern, high-performance implementation of these methods optimized for high performance parallel computer systems, ranging from small clusters typical in individual research groups, through larger clusters available in campus and regional computer centers, all the way to high-end petascale systems at national labs, including exploiting GPUs if available. This project enhanced the ACES III software package and used it to study interesting scientific problems.

  2. Advancing Architecture-Centric Practices in US Army Acquisition

    Science.gov (United States)

    2010-04-27

    Slide deck front matter only: Architecture-Centric Army Acquisition, Stephen Blanchette, Jr. & John Bergey, Carnegie Mellon University, Pittsburgh, PA, 27 April 2010; the remainder of the record is report-documentation boilerplate.

  3. Level-2 Milestone 5588: Deliver Strategic Plan and Initial Scalability Assessment by Advanced Architecture and Portability Specialists Team

    Energy Technology Data Exchange (ETDEWEB)

    Draeger, Erik W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-30

    This report documents the fact that the work in creating a strategic plan and beginning customer engagements has been completed. The description of the milestone is: The newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of 1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and 2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.

  4. The impact of advances in computer technology on particle transport Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Martin, W.R. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering; Rathkopf, J.A. [Lawrence Livermore National Lab., CA (United States); Brown, F.B. [Knolls Atomic Power Lab., Schenectady, NY (United States)

    1992-01-21

    Advances in computer technology, including hardware, architectural, and software advances, have led to dramatic gains in computer performance over the past decade. We summarize these performance trends and discuss the extent to which particle transport Monte Carlo codes have been able to take advantage of these performance gains. We consider MIMD, SIMD, and parallel distributed computer configurations for particle transport Monte Carlo applications. Some specific experience with vectorization and parallelization of production Monte Carlo codes is included. The topic of parallel random number generation is discussed in some detail. Finally, some software issues that hinder the implementation of Monte Carlo methods on parallel processors are addressed.
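
    The parallel random number generation problem flagged above is commonly handled today by giving every parallel worker a statistically independent stream. The sketch below (a toy tally, not any production transport code) uses NumPy's SeedSequence.spawn() to derive per-worker streams that are reproducible and non-overlapping by construction.

```python
# Independent per-worker RNG streams for a toy Monte Carlo tally.
import numpy as np

def tally_batch(seed_seq, n_histories):
    """Score one batch with its own independent stream."""
    rng = np.random.default_rng(seed_seq)
    # toy "transport": a history scores if any of its 10 samples < 0.1
    x = rng.random((n_histories, 10))
    return int(np.sum(np.any(x < 0.1, axis=1)))

root = np.random.SeedSequence(20240101)
streams = root.spawn(4)                  # one child stream per worker
tallies = [tally_batch(s, 100_000) for s in streams]
print(sum(tallies) / 400_000)            # reproducible combined estimate
```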

  5. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    Directory of Open Access Journals (Sweden)

    Lee, Mike Myung-Ok

    2006-01-01

    Full Text Available This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch through an indium bump interconnection array (IBIA. The configurable array processor (CAP is an array of heterogeneous processing elements (PEs, while the intelligent configurable switch (ICS comprises a switch block, 32-bit dedicated RISC processor for control, on-chip program/data memory, data frame buffer, along with a direct memory access (DMA controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  6. Earth Science Computational Architecture for Multi-disciplinary Investigations

    Science.gov (United States)

    Parker, J. W.; Blom, R.; Gurrola, E.; Katz, D.; Lyzenga, G.; Norton, C.

    2005-12-01

    Understanding the processes underlying Earth's deformation and mass transport requires a non-traditional, integrated, interdisciplinary approach dependent on multiple space and ground based data sets, modeling, and computational tools. Currently, details of geophysical data acquisition, analysis, and modeling largely limit research to discipline domain experts. Interdisciplinary research requires a new computational architecture that is optimized to perform complex data processing of multiple solid Earth science data types in a user-friendly environment. A web-based computational framework is being developed and integrated with applications for automatic interferometric radar processing, and models for high-resolution deformation & gravity, forward models of viscoelastic mass loading over short wavelengths & complex time histories, forward-inverse codes for characterizing surface loading-response over time scales of days to tens of thousands of years, and inversion of combined space magnetic & gravity fields to constrain deep crustal and mantle properties. This framework combines an adaptation of the QuakeSim distributed services methodology with the Pyre framework for multiphysics development. The system uses a three-tier architecture, with a middle tier server that manages user projects, available resources, and security. This ensures scalability to very large networks of collaborators. Users log into a web page and have a personal project area, persistently maintained between connections, for each application. Upon selection of an application and host from a list of available entities, inputs may be uploaded or constructed from web forms and available data archives, including gravity, GPS and imaging radar data. The user is notified of job completion and directed to results posted via URLs. Interdisciplinary work is supported through easy availability of all applications via common browsers, application tutorials and reference guides, and worked examples with

  7. E-Governance and Service Oriented Computing Architecture Model

    Science.gov (United States)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves processes such as capturing, preserving, manipulating and delivering information. E-Governance aims to transform governance so that it is more transparent, reliable, participatory, and accountable to citizens. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government with innovation, identifies the optimal way to deliver services to citizens, and supports implementation in a transparent and accountable manner. The paper also highlights the E-government Service Manager as a key element of the service-oriented computing model, providing a dynamically extensible structural design in which every branch of government can introduce innovative services. At the heart of this paper is a conceptual model that enables e-government communication among trade and business, citizens and government, and autonomous bodies.

  8. Computational Design of Advanced Nuclear Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Savrasov, Sergey [Univ. of California, Davis, CA (United States); Kotliar, Gabriel [Rutgers Univ., Piscataway, NJ (United States); Haule, Kristjan [Rutgers Univ., Piscataway, NJ (United States)

    2014-06-03

    The objective of the project was to develop a method for theoretical understanding of nuclear fuel materials whose physical and thermophysical properties can be predicted from first principles using a novel dynamical mean field method for electronic structure calculations. We concentrated our study on uranium, plutonium, their oxides, nitrides, carbides, as well as some rare earth materials whose 4f electrons provide a simplified framework for understanding the complex behavior of the f electrons. We addressed the issues connected to the electronic structure, lattice instabilities, phonon and magnon dynamics as well as thermal conductivity. This allowed us to evaluate characteristics of advanced nuclear fuel systems using computer based simulations and avoid costly experiments.

  9. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  10. International Conference on Computers and Advanced Technology in Education

    CERN Document Server

    Advanced Information Technology in Education

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computers and Advanced Technology in Education. With the development of computers and advanced technology, human social activities are changing fundamentally. Education, and especially education reform in different countries, has benefited greatly from computers and advanced technology. Generally speaking, education is a field that needs ever more information, while computers, advanced technology and the internet are good information providers. With their aid, educational resources can be combined effectively. Therefore, computers and advanced technology should be regarded as an important medium in modern education. The volume Advanced Information Technology in Education is to provide a forum for researchers, educators, engineers, and government officials involved in the general areas of computers and advanced technology in education to d...

  11. Leveraging software architectures to guide and verify the development of sense/compute/control applications

    DEFF Research Database (Denmark)

    Cassou, Damien; Balland, Emilie; Consel, Charles;

    2011-01-01

    A software architecture describes the structure of a computing system by specifying software components and their interactions. Mapping a software architecture to an implementation is a well-known challenge. A key element of this mapping is the architecture’s description of the data and control-f...... verifications. We instantiate our approach in an architecture description language for Sense/Compute/Control applications, and describe associated compilation and verification strategies....

  12. A Client-Server Architecture for an Instructional Environment Based on Computer Networks and the Internet.

    Science.gov (United States)

    Guidon, Jacques; Pierre, Samuel

    1996-01-01

    Discusses the use of computers in education and training and proposes a client-server architecture for an experimental computer environment as an approach to a virtual classroom. Highlights include the World Wide Web and client software, document delivery, hardware architecture, and Internet resources and services. (Author/LRW)

  13. Specification, Design, and Analysis of Advanced HUMS Architectures

    Science.gov (United States)

    Mukkamala, Ravi

    2004-01-01

    During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas. They are: (a) To improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) To improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) To evaluate the software architecture. 2) We have defined a new architectural language called HADL or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using the currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) Selection of solutions from a large space of designs; (b) Synthesis of designs. However, the automation process is not an absolute Artificial Intelligence (AI) approach though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. Since searching is adopted as the main technique, the challenges involved are: (a) To minimize the effort in searching the database where a very large number of possibilities exist; (b) To develop representations that could conveniently allow us to depict design knowledge evolved over many years; (c) To capture the required information that aid the

  15. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods, because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made, if a legitimate predictive capability is to be developed.

  16. Applying a cloud computing approach to storage architectures for spacecraft

    Science.gov (United States)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
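
    As a rough illustration of the abstraction the paper argues for, the sketch below exposes a uniform put/get interface while keeping placement decisions (here, a crude wear-aware choice standing in for wear-leveling) hidden behind it. All class and method names are hypothetical, not taken from the paper.

```python
# Hypothetical uniform storage interface over heterogeneous devices.
class BlockDevice:
    def __init__(self, name):
        self.name, self.blocks = name, {}
        self.writes = 0                 # crude proxy for wear tracking

class CloudStore:
    """Clients see only put/get; placement stays behind the interface."""
    def __init__(self, devices):
        self.devices = devices

    def _place(self):
        # wear-aware placement: the least-written device wins
        return min(self.devices, key=lambda d: d.writes)

    def put(self, key, payload: bytes):
        dev = self._place()
        dev.blocks[key] = payload
        dev.writes += 1
        return dev.name                 # where the object landed

    def get(self, key):
        for dev in self.devices:
            if key in dev.blocks:
                return dev.blocks[key]
        raise KeyError(key)

store = CloudStore([BlockDevice("ssr-a"), BlockDevice("ssr-b")])
store.put("telemetry/0001", b"\x01\x02\x03")
print(store.get("telemetry/0001"))
```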

  17. Computational Analysis to Factor Wind into the Design of an Architectural Environment

    Directory of Open Access Journals (Sweden)

    Hassam Nasarullah Chaudhry

    2015-01-01

    Full Text Available The effect of wind distribution on the architectural domain of the Bahrain Trade Centre was numerically analysed using computational fluid dynamics (CFD. Using the numerical data, the power generation potential of the building-integrated wind turbines was determined in response to the prevailing wind direction. The three-dimensional Reynolds-averaged Navier-Stokes (RANS equations along with the momentum and continuity equations were solved for obtaining the velocity and pressure field. Simulating a reference wind speed of 6 m/s, the findings from the study quantified an estimated power generation of 6.4 kW, indicating a capacity factor of 2.9% for the benchmark model. At the windward side of the building, it was observed that the layers of turbulence intensified in inverse proportion to the height of the building with an average value of 0.45 J/kg. The air velocity was found to gradually increase in direct proportion to the elevation, with the turbine located at the highest altitude receiving maximum exposure to incoming wind. This work highlighted the potential of using advanced computational fluid dynamics in order to factor wind into the design of any architectural environment.
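
    A back-of-envelope check of the figures quoted above, assuming the usual reading that capacity factor is estimated output divided by rated capacity: 6.4 kW at 2.9% implies a rated capacity of roughly 220 kW per turbine. The snippet also evaluates the standard kinetic power in the wind, P = 0.5 ρ A v³, for an assumed rotor area; the area is a guess for illustration, not a value from the paper.

```python
# Sanity check on the quoted wind-power figures (assumptions noted).
rho = 1.225            # air density, kg/m^3 (sea-level standard)
v = 6.0                # reference wind speed used in the study, m/s

estimated_kw = 6.4
capacity_factor = 0.029
implied_rated_kw = estimated_kw / capacity_factor
print(f"implied rated capacity ~ {implied_rated_kw:.0f} kW")   # ~221 kW

# Kinetic power through a rotor of swept area A, before the power
# coefficient Cp is applied: P = 0.5 * rho * A * v**3
A = 660.0              # assumed swept area, m^2 (~29 m rotor diameter)
p_avail_kw = 0.5 * rho * A * v**3 / 1000.0
print(f"available wind power at 6 m/s ~ {p_avail_kw:.1f} kW")  # ~87 kW
```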

  18. Advances in structural mechanics of Chinese ancient architectures

    Institute of Scientific and Technical Information of China (English)

    Maohong YU; Yoshiya ODA; Dongping FANG; Junhai ZHAO

    2008-01-01

    Chinese ancient architectures are a valuable heritage of the ancient culture of China, and many historical buildings have been preserved to the present day. Research on the structural mechanics of ancient architectures addresses many different aspects of structure and mechanics. Systematic studies on the structural mechanics of ancient architectures have been carried out at Xi'an Jiaotong University since 1982, motivated by the need to repair several national preservation relics in Xi'an. These studies include: 1) Ancient wooden structures, including three national preservation relics: the Arrow Tower at the North City Gate, the City Tower at the East City Gate, and the Baogao Temple in Ningbo, Zhejiang province. 2) Ancient tall masonry buildings: the Big Goose Pagoda and Small Goose Pagoda in Xi'an. 3) Mechanical characteristics of ancient soil under foundations and city walls, and the influence of caves in and under the ancient City Wall on the stability of the wall. 4) The typical Chinese ancient buildings at the center of the city: the Bell Tower and Drum Tower. 5) The behavior of the Dou-Gong and joggle joints of Chinese ancient wooden structures. 6) The mechanical behavior of ancient soils under complex stress states. A new systematic strength theory, the unified strength theory, is used to analyze the stability of the ancient city wall in Xi'an and the foundation of a tall pagoda built in the Tang dynasty. These studies also concern differential settlement of the Arrow Tower and the earthquake resistance of these historical architectural heritages. Some other studies are also introduced, and this paper gives a summary of this research. Preservation and research are nowadays an essential requirement for famous monuments, buildings, towers and other structures. Our society is more and more conscious of this necessity, which involves increasing activity in restoration, and sometimes also repair, mechanical strengthening and seismic retrofitting. Many historical buildings have in fact problems of structural strength and

  19. Advanced proton imaging in computed tomography

    CERN Document Server

    Mattiazzo, S; Giubilato, P; Pantano, D; Pozzobon, N; Snoeys, W; Wyss, J

    2015-01-01

    In recent years the use of hadrons for cancer radiation treatment has grown in importance, and many facilities are currently operational or under construction worldwide. To fully exploit the therapeutic advantages offered by hadron therapy, precise body imaging for accurate beam delivery is decisive. Proton computed tomography (pCT) scanners, currently in their R&D phase, provide the ultimate 3D imaging for hadron treatment guidance. A key component of a pCT scanner is the detector used to track the protons, which has great impact on the scanner's performance and ultimately limits its maximum speed. In this article, a novel proton-tracking detector is presented that would offer higher scanning speed, better spatial resolution and lower material budget with respect to present state-of-the-art detectors, leading to enhanced performance. This advancement in performance is achieved by employing the very latest development in monolithic active pixel detectors (to build high granularity, low material budget, ...

  20. Advanced Scientific Computing Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  1. Optical design and characterization of an advanced computational imaging system

    Science.gov (United States)

    Shepard, R. Hamilton; Fernandez-Cull, Christy; Raskar, Ramesh; Shi, Boxin; Barsi, Christopher; Zhao, Hang

    2014-09-01

    We describe an advanced computational imaging system with an optical architecture that enables simultaneous and dynamic pupil-plane and image-plane coding accommodating several task-specific applications. We assess the optical requirement trades associated with custom and commercial-off-the-shelf (COTS) optics and converge on the development of two low-cost and robust COTS testbeds. The first is a coded-aperture programmable pixel imager employing a digital micromirror device (DMD) for image plane per-pixel oversampling and spatial super-resolution experiments. The second is a simultaneous pupil-encoded and time-encoded imager employing a DMD for pupil apodization or a deformable mirror for wavefront coding experiments. These two testbeds are built to leverage two MIT Lincoln Laboratory focal plane arrays - an orthogonal transfer CCD with non-uniform pixel sampling and on-chip dithering and a digital readout integrated circuit (DROIC) with advanced on-chip per-pixel processing capabilities. This paper discusses the derivation of optical component requirements, optical design metrics, and performance analyses for the two testbeds built.

  2. Advances in network systems architectures, security, and applications

    CERN Document Server

    Awad, Ali; Furtak, Janusz; Legierski, Jarosław

    2017-01-01

    This book provides the reader with a comprehensive selection of cutting–edge algorithms, technologies, and applications. The volume offers new insights into a range of fundamentally important topics in network architectures, network security, and network applications. It serves as a reference for researchers and practitioners by featuring research contributions exemplifying research done in the field of network systems. In addition, the book highlights several key topics in both theoretical and practical aspects of networking. These include wireless sensor networks, performance of TCP connections in mobile networks, photonic data transport networks, security policies, credentials management, data encryption for network transmission, risk management, live TV services, and multicore energy harvesting in distributed systems.

  3. Making Advanced Computer Science Topics More Accessible through Interactive Technologies

    Science.gov (United States)

    Shao, Kun; Maher, Peter

    2012-01-01

    Purpose: Teaching advanced technical concepts in a computer science program to students of different technical backgrounds presents many challenges. The purpose of this paper is to present a detailed experimental pedagogy in teaching advanced computer science topics, such as computer networking, telecommunications and data structures using…

  5. OPENING REMARKS: Scientific Discovery through Advanced Computing

    Science.gov (United States)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about $70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  6. Research on Computer Aided Design (CAD) Technique and Novel Pattern for Electronic Architectural Drawing and Perspective

    Institute of Scientific and Technical Information of China (English)

    Tian XiJiang

    2015-01-01

    With the progress of computer technology, computer-aided design (CAD) techniques are urgently needed. In this paper, we conduct numerical and theoretical analysis and research on CAD techniques and a novel pattern for electronic architectural drawing and perspective. Because of its versatility and low barrier to entry, the AutoCAD drawing software has a broad user base in engineering; however, being general-purpose drawing software, it often cannot complete professional drawings efficiently with its built-in commands alone. We modify the current pattern and introduce our proposed pattern as an advancement, and experiments prove the effectiveness of the pattern. In addition, we will conduct more insightful research in the future to polish the current approach.

  7. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    CERN Document Server

    Nam, H; Nazarewicz, W; Bulgac, A; Hagen, G; Kortelainen, M; Maris, P; Pei, J C; Roche, K J; Schunck, N; Thompson, I; Vary, J P; Wild, S M

    2012-01-01

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multidisciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadershi...

  8. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    Science.gov (United States)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.

  9. Advances in tomography: probing the molecular architecture of cells.

    Science.gov (United States)

    Fridman, Karen; Mader, Asaf; Zwerger, Monika; Elia, Natalie; Medalia, Ohad

    2012-11-01

    Visualizing the dynamic molecular architecture of cells is instrumental for answering fundamental questions in cellular and structural biology. Although modern microscopy techniques, including fluorescence and conventional electron microscopy, have allowed us to gain insights into the molecular organization of cells, they are limited in their ability to visualize multicomponent complexes in their native environment. Cryo-electron tomography (cryo-ET) allows cells, and the macromolecular assemblies contained within, to be reconstructed in situ, at a resolution of 2-6 nm. By combining cryo-ET with super-resolution fluorescence microscopy approaches, it should be possible to localize proteins with high precision inside cells and so elucidate a more realistic view of cellular processes. Thus, cryo-ET may bridge the resolution gap between cellular and structural biology.

  10. The Architecture for Computer Game‘s Engine

    OpenAIRE

    Kaulakis, Jonas

    2006-01-01

    A game engine is a set of supporting tools and services for game development. It is a component designed for reuse in different games; therefore, it is very important for a game engine to be designed properly, as for any successfully used reusable component. The main objective of this research is to present a flexible and easily extensible architectural solution suitable for a game engine, based on an analysis of today’s game engine context and existing software architecture design. During ...

  12. Proceedings: Workshop on advanced mathematics and computer science for power systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Esselman, W.H.; Iveson, R.H. (Electric Power Research Inst., Palo Alto, CA (United States))

    1991-08-01

    The Mathematics and Computer Workshop on Power System Analysis was held February 21--22, 1989, in Palo Alto, California. The workshop was the first in a series sponsored by EPRI's Office of Exploratory Research as part of its effort to develop ways in which recent advances in mathematics and computer science can be applied to the problems of the electric utility industry. The purpose of this workshop was to identify research objectives in the field of advanced computational algorithms needed for the application of advanced parallel processing architecture to problems of power system control and operation. Approximately 35 participants heard six presentations on power flow problems, transient stability, power system control, electromagnetic transients, user-machine interfaces, and database management. In the discussions that followed, participants identified five areas warranting further investigation: system load flow analysis, transient power and voltage analysis, structural instability and bifurcation, control systems design, and proximity to instability. 63 refs.

  13. Advances in computational actinide chemistry in China

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dongqi; Wu, Jingyi; Chai, Zhifang [Chinese Academy of Sciences, Beijing (China). Multidisciplinary Initiative Center; Su, Jing [Chinese Academy of Sciences, Shanghai (China). Div. of Nuclear Materials Science and Engineering; Li, Jun [Tsinghua Univ., Beijing (China). Dept. of Chemistry and Laboratory of Organic Optoelectronics and Molecular Engineering

    2014-04-01

    The advances in computational actinide chemistry made in China are reviewed. Several areas relevant to chemistry of actinides in gas, liquid, and solid phases have been explored. However, we limit the scope to selected contributions in the chemistry of molecular actinide systems in gas and liquid phases. These studies may be classified into two categories: treatment of relativistic effects, which cover the development of two- and four-component Hamiltonians and the optimization of relativistic pseudopotentials, and the applications of theoretical methods in actinide chemistry. The applications include (1) the electronic structures of actinocene, noble gas complexes, An-C multiple bonding compounds, uranyl and its isoelectronic species, fluorides and oxides, molecular systems with metal-metal bonding in their isolated forms (U{sub 2}, Pu{sub 2}) and in fullerene (U{sub 2}@C{sub 60}), and the excited states of actinide complexes; (2) chemical reactions, including oxidation, hydrolysis of UF{sub 6}, ligand exchange, reactivities of thorium oxo and sulfido metallocenes, CO{sub 2}/CS{sub 2} functionalization promoted by trivalent uranium complex; and (3) migration of actinides in the environment. A future outlook is discussed. (orig.)

  14. Unstructured Computational Aerodynamics on Many Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.

    2016-06-08

    Shared memory parallelization of the flux kernel of PETSc-FUN3D, an unstructured tetrahedral mesh Euler flow code previously studied for distributed memory and multi-core shared memory, is evaluated on up to 61 cores per node and up to 4 threads per core. We explore several thread-level optimizations to improve flux kernel performance on the state-of-the-art many integrated core (MIC) Intel processor Xeon Phi “Knights Corner,” with a focus on strong thread scaling. While the linear algebraic kernel is bottlenecked by memory bandwidth for even modest numbers of cores sharing a common memory, the flux kernel, which arises in the control volume discretization of the conservation law residuals and in the formation of the preconditioner for the Jacobian by finite-differencing the conservation law residuals, is compute-intensive and is known to exploit effectively contemporary multi-core hardware. We extend study of the performance of the flux kernel to the Xeon Phi in three thread affinity modes, namely scatter, compact, and balanced, in both offload and native mode, with and without various code optimizations to improve alignment and reduce cache coherency penalties. Relative to baseline “out-of-the-box” optimized compilation, code restructuring optimizations provide about 3.8x speedup using the offload mode and about 5x speedup using the native mode. Even with these gains for the flux kernel, with respect to execution time the MIC simply achieves par with optimized compilation on a contemporary multi-core Intel CPU, the 16-core Sandy Bridge E5 2670. Nevertheless, the optimizations employed to reduce the data motion and cache coherency protocol penalties of the MIC are expected to be of value for CFD and many other unstructured applications as many-core architecture evolves. We explore large-scale distributed-shared memory performance on the Cray XC40 supercomputer, to demonstrate that optimizations employed on Phi hybridize to this context, where each of

  15. Audio Arduino - an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through an implementation of: a PC audio driver for ALSA (Advanced Linux Sound Architecture); a matching program for the Arduino's ATmega microcontroller - and nothing more...

  16. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    Science.gov (United States)

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator.
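
    A purely classical sketch of the selection problem SAL addresses may clarify what is being optimized: a finite set of candidate architectures, each scored on the training set, with the best retained. The paper's contribution is evaluating such candidates in quantum superposition; the toy task, the random-restart scoring, and all names below are illustrative only.

```python
# Classical stand-in for architecture selection over a finite set.
import itertools, random

random.seed(0)
data = [((x1, x2), 1 if x1 != x2 else 0)        # XOR toy task
        for x1, x2 in itertools.product([0, 1], repeat=2)]

def score(hidden_units):
    """Crude proxy: random-restart search over a tiny threshold net."""
    best = 0.0
    for _ in range(200):                         # random search stands in
        w = [[random.uniform(-1, 1) for _ in range(3)]
             for _ in range(hidden_units)]
        v = [random.uniform(-1, 1) for _ in range(hidden_units + 1)]
        correct = 0
        for (x1, x2), y in data:
            h = [1 if wi[0]*x1 + wi[1]*x2 + wi[2] > 0 else 0 for wi in w]
            o = 1 if sum(vi*hi for vi, hi in zip(v, h)) + v[-1] > 0 else 0
            correct += (o == y)
        best = max(best, correct / len(data))
    return best

architectures = [1, 2, 3, 4]                     # finite candidate set
best_arch = max(architectures, key=score)        # evaluate each, keep best
print("selected hidden units:", best_arch)
```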

  17. An Advanced Electrospinning Method of Fabricating Nanofibrous Patterned Architectures with Controlled Deposition and Desired Alignment

    Science.gov (United States)

    Rasel, Sheikh Md

    containing 0, 5, 10, and 20 wt % of fillers. Morphological analyses carried out by digital optical microscope, scanning electron microscopy, x-ray computed tomography, and Fourier transform infrared spectroscopy confirmed the presence and good dispersion of fillers in the composites. In addition, the improvement of mechanical properties with increased filler content further emphasized the adhesion between matrix and reinforcement. The PVA with 20 wt % wollastonite composite exhibited the highest tensile strength (11.99 MPa) and tensile modulus (198 MPa) as compared to pure PVA (3.92 MPa and 83 MPa, respectively). Moreover, the thermal tests demonstrated that there is no major deviation in thermal stability due to the addition of wollastonite to the PVA scaffolds. A similar trend was observed in PVA/wood flour nanocomposites, where tensile strength improved by 228 % for 20 wt % of reinforcement. The PVA/wollastonite and PVA/wood flour fibrous nanocomposites, which possess higher mechanical properties, might be potentially suitable for many advanced applications such as filtration, tissue engineering, and food processing. We believe this study will contribute to further scientific understanding of the patterning mechanism of electrospun nanofibers and allow for a variety of designs of specific patterned nanofibrous architectures with desired functional properties. Therefore, this improved scheme of electrospinning can have significant impact in a broad range of applications including tissue engineering scaffolds, filtration, and nanoelectronics.

  18. Architectures, Concepts and Technologies for Service Oriented Computing : Proceedings of the 2nd International Workshop on Architectures, Concepts and Technologies for Service Oriented Computing - ACT4SOC 2008

    NARCIS (Netherlands)

    Sinderen, van Marten

    2008-01-01

    This volume contains the proceedings of the Second International Workshop on Architectures, Concepts and Technologies for Service Oriented Computing (ACT4SOC 2008), held on July 5 in Porto, Portugal, in conjunction with the Third International Conference on Software and Data Technologies (ICSOFT 2008).

  19. An efficient FPGA architecture for integer ƞth root computation

    Science.gov (United States)

    Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose

    2015-10-01

    In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation for the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N, using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture performance varies from several thousand to seven million root operations per second.
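
    The root computation itself can be modelled in a few lines. The bitwise binary search below is a software stand-in for the kind of datapath the abstract describes: in hardware the candidate test reduces to shifts plus adders/subtractors, while here integer exponentiation plays that role. The 64-bit operand width matches the tests reported above.

```python
# Software model of an integer Nth root via bitwise binary search.
def int_nth_root(x: int, n: int) -> int:
    """Largest r with r**n <= x, for x >= 0 and n >= 1."""
    if x < 0 or n < 1:
        raise ValueError("x >= 0 and n >= 1 required")
    r = 0
    # For 64-bit operands the result fits in about 64/n bits.
    for bit in reversed(range(64 // n + 1)):
        candidate = r | (1 << bit)
        if candidate ** n <= x:        # a shift-add comparison in hardware
            r = candidate
    return r

assert int_nth_root(2**63 - 1, 3) == 2**21 - 1   # floor cube root
assert int_nth_root(1_000_000, 2) == 1000
print(int_nth_root(2**60, 5))                    # 4096 = 2**12
```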

  20. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L. [Oak Ridge National Lab., TN (United States); Sartori, E. [OCDE/OECD NEA Data Bank, Issy-les-Moulineaux (France); Viedma, L.G. de [Consejo de Seguridad Nuclear, Madrid (Spain)

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  1. Relay Architectures for 3GPP LTE-Advanced

    Directory of Open Access Journals (Sweden)

    Peters, Steven W.

    2009-01-01

    Full Text Available The Third Generation Partnership Project's Long Term Evolution-Advanced is considering relaying for cost-effective throughput enhancement and coverage extension. While analog repeaters have been used to enhance coverage in commercial cellular networks, the use of more sophisticated fixed relays is relatively new. The main challenge faced by relay deployments in cellular systems is overcoming the extra interference added by the presence of relays. Most prior work on relaying does not consider interference, however. This paper analyzes the performance of several emerging half-duplex relay strategies in interference-limited cellular systems: one-way, two-way, and shared relays. The performance of each strategy as a function of location, sectoring, and frequency reuse are compared with localized base station coordination. One-way relaying is shown to provide modest gains over single-hop cellular networks in some regimes. Shared relaying is shown to approach the gains of local base station coordination at reduced complexity, while two-way relaying further reduces complexity but only works well when the relay is close to the handset. Frequency reuse of one, where each sector uses the same spectrum, is shown to have the highest network throughput. Simulations with realistic channel models provide performance comparisons that reveal the importance of interference mitigation in multihop cellular networks.

  2. Broadband PLC for Clustered Advanced Metering Infrastructure (AMI) Architecture

    Directory of Open Access Journals (Sweden)

    Augustine Ikpehai

    2016-07-01

    Full Text Available Advanced metering infrastructure (AMI) subsystems monitor and control energy distribution through exchange of information between smart meters and utility networks. A key challenge is how to select a cost-effective communication system without compromising the performance of the applications. Current communication technologies were developed for conventional data networks with different requirements. It is therefore necessary to investigate how much of existing communication technologies can be retrofitted into the new energy infrastructure to cost-effectively deliver acceptable level of service. This paper investigates broadband power line communications (BPLC) as a backhaul solution in AMI. By applying the disparate traffic characteristics of selected AMI applications, the network performance is evaluated. This study also examines the communication network response to changes in application configurations in terms of packet sizes. In each case, the network is stress-tested and performance is assessed against acceptable thresholds documented in the literature. Results show that, like every other communication technology, BPLC has certain limitations; however, with some modifications in the network topology, it indeed can fulfill most AMI traffic requirements for flexible and time-bounded applications. These opportunities, if tapped, can significantly improve fiscal and operational efficiencies in AMI services. Simulation results also reveal that BPLC as a backhaul can support flat and clustered AMI structures with cluster size ranging from 1 to 150 smart meters.

  3. Data center network architecture in cloud computing:review, taxonomy, and open research issues

    Institute of Scientific and Technical Information of China (English)

    Han QI; Muhammad SHIRAZ; Jie-yao LIU; Abdullah GANI; Zulkanain ABDUL RAHMAN; Torki AALTAMEEM

    2014-01-01

    The data center network (DCN), which is an important component of data centers, consists of a large number of hosted servers and switches connected with high speed communication links. A DCN enables the deployment of resources centralization and on-demand access of the information and services of data centers to users. In recent years, the scale of the DCN has constantly increased with the widespread use of cloud-based services and the unprecedented amount of data delivery in/between data centers, whereas the traditional DCN architecture lacks aggregate bandwidth, scalability, and cost effectiveness for coping with the increasing demands of tenants in accessing the services of cloud data centers. Therefore, the design of a novel DCN architecture with the features of scalability, low cost, robustness, and energy conservation is required. This paper reviews the recent research findings and technologies of DCN architectures to identify the issues in the existing DCN architectures for cloud computing. We develop a taxonomy for the classification of the current DCN architectures, and also qualitatively analyze the traditional and contemporary DCN architectures. Moreover, the DCN architectures are compared on the basis of the significant characteristics, such as bandwidth, fault tolerance, scalability, overhead, and deployment cost. Finally, we put forward open research issues in the deployment of scalable, low-cost, robust, and energy-efficient DCN architecture for data centers in computational clouds.

  4. Recent Advances in Computational Conformal Geometry

    OpenAIRE

    Gu, Xianfeng David; Luo, Feng; Yau, Shing-Tung

    2009-01-01

    Computational conformal geometry focuses on developing the computational methodologies on discrete surfaces to discover conformal geometric invariants. In this work, we briefly summarize the recent developments for methods and related applications in computational conformal geometry. There are two major approaches, holomorphic differentials and curvature flow. Holomorphic differential method is a linear method, which is more efficient and robust to triangulations with lower qua...

  5. Audio Arduino - an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    be considered to be a system, that encompasses design decisions on both hardware and software levels - that also demand a certain understanding of the architecture of the target PC operating system. This project outlines how an Arduino Duemillanove board (containing a USB interface chip, manufactured by Future...... Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through an implementation of: a PC audio driver for ALSA (Advanced Linux Sound Architecture); a matching program for the Arduino's ATmega microcontroller - and nothing more...

  6. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    Science.gov (United States)

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  7. Stagnant Timing Investigation of Embedded Software on Advanced Processor Architectures

    Directory of Open Access Journals (Sweden)

    M. Shankar

    2012-01-01

    Full Text Available Most processors today are embedded in products like mobile phones, microwave ovens, welding machines, etc., and are not used in PCs as many believe. Since some of these embedded computers are used in time-critical or safety-critical systems, it is very important that the behaviour of these systems is well known. One part of that is to know the Worst Case Execution Time (WCET) of the different tasks in the embedded system. First, shortcomings in current as well as future standards for controlling the power grid are outlined. From these economic and safety threats, we derive an immediate need to invest in research on the protection of the power grid, both from the perspective of cyber attacks and of distributed control system problems. Second, current software design practice does not adequately verify and validate worst-case timing scenarios that have to be guaranteed in order to meet deadlines in safety-critical embedded systems. This equally applies to avionics and the automotive industry, both of which are increasingly requiring their suppliers to provide variable bounds on the worst-case execution time of software.

  8. A Green Enterprise Computing Architecture for Developing Countries

    OpenAIRE

    Akbar, Rabia; Azim, Tahir

    2016-01-01

    Developing countries often have access to limited energy resources, which frequently results in power cuts and failures. During these power cuts, enterprises rely on backup sources for power such as uninterruptible power supplies (UPS) and electric generators. This paper proposes AnywareDC, an architecture that builds on the recent work on Anyware to reduce energy utilization in the presence of such intermittent power supplies. Anyware reduces energy usage by providing enterprise users laptop...

  9. Computer simulation of sphenopsid architecture. I. Principles and methodology.

    Science.gov (United States)

    Daviero; Meyer-Berthaud; Lecoustre

    2000-04-01

    The modelling system AMAP 1 provides morphological models that reproduce the series of shapes developed in a plant structure during its growth. It is applicable to plants that have architectural features consistent with the principles introduced by Hallé et al. (Hallé, F., Oldeman, R.A.A., Tomlinson, P.B., 1978. Tropical Trees and Forests. Springer, Berlin, 441 pp.). We present the main principles of the methodology, including the use of an architectural template and the statistical processing of the data collected on sample plants, and a description of its components and parameters. We use models of Equisetum telmateia aerial shoots as examples of adaptations of this methodology to plants represented by a limited number of specimens. The main features of this approach that make it especially relevant for modelling incomplete and fragmented fossil plants include the use of architectural templates constructed by adding discrete morphological entities limited to a number of axial components as follows: as many branch orders as are identified in the sample plants, a single extension unit per branch order, and its internodes. This approach is viewed as a means to provide visual representations of plants at different ontogenetical stages, expressing our current knowledge of their growth and branching strategies, and of the parameters that control their geometries.

  10. Computational intelligence for big data analysis frontier advances and applications

    CERN Document Server

    Dehuri, Satchidananda; Sanyal, Sugata

    2015-01-01

    The work presented in this book is a combination of theoretical advancements in big data analysis, cloud computing, and their potential applications in scientific computing. The theoretical advancements are supported with illustrative examples and applications to real-life problems, mostly drawn from real-life situations. The book discusses major issues pertaining to big data analysis using computational intelligence techniques, along with some issues of cloud computing. An elaborate bibliography is provided at the end of each chapter. The material in this book includes concepts, figures, graphs, and tables to guide researchers in the area of big data analysis and cloud computing.

  11. Computing support for advanced medical data analysis and imaging

    CERN Document Server

    Wiślicki, W; Białas, P; Czerwiński, E; Kapłon, Ł; Kochanowski, A; Korcyl, G; Kowal, J; Kowalski, P; Kozik, T; Krzemień, W; Molenda, M; Moskal, P; Niedźwiecki, S; Pałka, M; Pawlik, M; Raczyński, L; Rudy, Z; Salabura, P; Sharma, N G; Silarski, M; Słomski, A; Smyrski, J; Strzelecki, A; Wieczorek, A; Zieliński, M; Zoń, N

    2014-01-01

    We discuss computing issues for data analysis and image reconstruction of a PET-TOF medical scanner or other medical scanning devices producing large volumes of data. A service architecture based on the grid and cloud concepts for distributed processing is proposed and critically discussed.

  12. Advanced Technologies, Embedded and Multimedia for Human-Centric Computing

    CERN Document Server

    Chao, Han-Chieh; Deng, Der-Jiunn; Park, James; HumanCom and EMC 2013

    2014-01-01

    The themes of HumanCom and EMC are focused on the various aspects of human-centric computing for advances in computer science and its applications, and on embedded and multimedia computing; together they provide an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. The theme of EMC (Advances in Embedded and Multimedia Computing) is focused on the various aspects of embedded systems, smart grids, cloud and multimedia computing, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of embedded and multimedia computing. This book therefore includes the various theories and practical applications of human-centric computing and of embedded and multimedia computing.

  13. A Project-Based Learning Approach to Programmable Logic Design and Computer Architecture

    Science.gov (United States)

    Kellett, C. M.

    2012-01-01

    This paper describes a course in programmable logic design and computer architecture as it is taught at the University of Newcastle, Australia. The course is designed around a major design project and has two supplemental assessment tasks that are also described. The context of the Computer Engineering degree program within which the course is…

  14. A Survey and Evaluation of Simulators Suitable for Teaching Courses in Computer Architecture and Organization

    Science.gov (United States)

    Nikolic, B.; Radivojevic, Z.; Djordjevic, J.; Milutinovic, V.

    2009-01-01

    Courses in Computer Architecture and Organization are regularly included in Computer Engineering curricula. These courses are usually organized in such a way that students obtain not only a purely theoretical experience, but also a practical understanding of the topics lectured. This practical work is usually done in a laboratory using simulators…

  15. Analysis of Introducing Active Learning Methodologies in a Basic Computer Architecture Course

    Science.gov (United States)

    Arbelaitz, Olatz; Martín, José I.; Muguerza, Javier

    2015-01-01

    This paper presents an analysis of introducing active methodologies in the Computer Architecture course taught in the second year of the Computer Engineering Bachelor's degree program at the University of the Basque Country (UPV/EHU), Spain. The paper reports the experience from three academic years, 2011-2012, 2012-2013, and 2013-2014, in which…

  16. Leveraging Software Architectures to Guide and Verify the Development of Sense/Compute/Control Applications

    CERN Document Server

    Cassou, Damien; Consel, Charles; Lawall, Julia

    2011-01-01

    A software architecture describes the structure of a computing system by specifying software components and their interactions. Mapping a software architecture to an implementation is a well known challenge. A key element of this mapping is the architecture's description of the data and control-flow interactions between components. The characterization of these interactions can be rather abstract or very concrete, providing more or less implementation guidance, programming support, and static verification. In this paper, we explore one point in the design space between abstract and concrete component interaction specifications. We introduce a notion of behavioral contract that expresses the set of allowed interactions between components, describing both data and control-flow constraints. This declaration is part of the architecture description, allows generation of extensive programming support, and enables various verifications. We instantiate our approach in an architecture description language for the doma...

  17. Advancing crime scene computer forensics techniques

    Science.gov (United States)

    Hosmer, Chet; Feldman, John; Giordano, Joe

    1999-02-01

    Computers and network technology have become inexpensive and powerful tools that can be applied to a wide range of criminal activity. Computers have changed the world's view of evidence because computers are used more and more as tools in committing 'traditional crimes' such as embezzlement, theft, extortion and murder. This paper will focus on reviewing the current state-of-the-art of the data recovery and evidence construction tools used in both the field and laboratory for prosecution purposes.

  18. Monte Carlo simulations on SIMD computer architectures. [Single instruction multiple data (SIMD)

    Energy Technology Data Exchange (ETDEWEB)

    Burmester, C.P.; Gronsky, R. (Lawrence Berkeley Lab., CA (United States)); Wille, L.T. (Florida Atlantic Univ., Boca Raton, FL (United States). Dept. of Physics)

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next-nearest, and long-range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
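    The geometric parallelism described above maps naturally onto a checkerboard lattice partition, where all sites of one colour can be updated simultaneously because nearest neighbours always carry the opposite colour. The following NumPy sketch illustrates that pattern for the nearest-neighbour Ising model with Metropolis updates; it is an illustration of the idea, not the MasPar code, and the lattice size and temperature are arbitrary.

```python
import numpy as np

# Illustrative NumPy sketch (not the MasPar code) of checkerboard parallelism:
# every same-colour site of a 2D nearest-neighbour Ising lattice can run a
# Metropolis update at once, since all of its neighbours have the other colour.
rng = np.random.default_rng(0)
L, beta = 64, 0.44                        # lattice size and inverse temperature (arbitrary)
spins = rng.choice([-1, 1], size=(L, L))
parity_mask = (np.add.outer(np.arange(L), np.arange(L)) % 2).astype(bool)

def sweep(spins):
    for colour in (parity_mask, ~parity_mask):       # black sites, then white
        # nearest-neighbour sums with periodic boundary conditions
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn                        # energy cost of flipping
        accept = rng.random((L, L)) < np.exp(-beta * dE)  # Metropolis rule
        spins = np.where(colour & accept, -spins, spins)
    return spins

for _ in range(100):
    spins = sweep(spins)
print("magnetisation per spin:", spins.mean())
```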

  19. Proceedings: 1989 conference on advanced computer technology for the power industry

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, B. (ed.)

    1990-07-01

    An EPRI conference to address advanced computer technology was hosted by Arizona Public Service in Scottsdale, Arizona, December 4-6, 1989. Participants represented US and foreign utilities, major electric and computer industry vendors, R&D contractors, and consulting firms. These Proceedings contain the text of the technical presentations and summaries of the panel discussions. The conference objectives were: to assess modern computer technologies and how they will affect utility operations; to share US and foreign utility experiences in developing computer-based technical products; and to discuss research conducted by EPRI in advanced computer technology on behalf of its utility members. Technical presentations addressed a broad range of computer-related topics: Computer-Based Training, Engineering Workstations, Hypermedia and Other Advanced User Interfaces, Networks and Communications, Expert Systems and Other Decision-Support Methodologies, Intelligent Database Management, Supercomputing Architectures and Applications, Real-Time Data Processing, Computerized Technology and Information Transfer, and Neural Networks and Other Emerging Technologies. In addition, two panel sessions were conducted to provide a forum for utilities to discuss past and future directions of EPRI software, and the future role of engineering workstations in utilities. The results of these two panels are summarized in this paper.

  20. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and power dissipation by 60.3%, together with a 700-fold reduction in circuit area.

  1. Implementation of a blade element UH-60 helicopter simulation on a parallel computer architecture in real-time

    Science.gov (United States)

    Moxon, Bruce C.; Green, John A.

    1990-01-01

    A high-performance platform for the development of real-time helicopter flight simulations is described, combining a parallel simulation development and analysis environment with a scalable multiprocessor computer system. Simulation functional decomposition is covered, including the sequencing and data dependency of simulation modules and the mapping of simulation functions to multiple processors. The multiprocessor-based implementation of a blade-element simulation of the UH-60 helicopter is presented, and a prototype developed for a TC2000 computer is generalized in order to arrive at a portable multiprocessor software architecture. It is pointed out that the proposed approach, coupled with a pilot's station, creates a setting in which simulation engineers, computer scientists, and pilots can work together in the design and evaluation of advanced real-time helicopter simulations.

  2. A SECURE MESSAGE TRANSMISSION SYSTEM ARCHITECTURE FOR COMPUTER NETWORKS EMPLOYING SMART CARDS

    Directory of Open Access Journals (Sweden)

    Geylani KARDAŞ

    2008-01-01

    Full Text Available In this study, we introduce a mobile system architecture which employs smart cards for secure message transmission in computer networks. The use of smart cards provides two security services in our design: authentication and confidentiality. The security of the system is provided by asymmetric encryption. Hence, smart cards are used to store personal account information as well as the private key of each user for encryption/decryption operations. This offers further security, authentication and mobility to the system architecture. A real implementation of the proposed architecture which utilizes the JavaCard technology is also discussed in this study.

  3. Advanced Design and Implementation of a Control Architecture for Long Range Autonomous Planetary Rovers

    Science.gov (United States)

    Martin-Alvarez, A.; Hayati, S.; Volpe, R.; Petras, R.

    1999-01-01

    An advanced design and implementation of a Control Architecture for Long Range Autonomous Planetary Rovers is presented using a hierarchical top-down task decomposition, and the common structure of each design is presented based on feedback control theory. Graphical programming is presented as a common intuitive language for the design when a large design team is composed of managers, architecture designers, engineers, programmers, and maintenance personnel. The whole design of the control architecture rests on the classic control concepts of cyclic data processing and event-driven reaction to achieve all the reasoning and behaviors needed. For this purpose, a commercial graphical tool is presented that includes the mentioned control capabilities. Message queues are used for inter-communication among control functions, allowing Artificial Intelligence (AI) reasoning techniques based on queue manipulation. Experimental results show a highly autonomous control system running in real time on top of the JPL micro-rover Rocky 7, controlling several robotic devices simultaneously. This paper validates the synergy between Artificial Intelligence and classic control concepts in an advanced Control Architecture for Long Range Autonomous Planetary Rovers.

  4. Second International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Konar, Amit; Chakraborty, Aruna

    2014-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two-volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 148 scholarly papers, which have been accepted for presentation from over 640 submissions in the second International Conference on Advanced Computing, Networking and Informatics, 2014, held in Kolkata, India during June 24-26, 2014. The first volume includes innovative computing techniques and relevant research results in informatics with selective applications in pattern recognition, signal/image process...

  5. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2-D FFTs

    Science.gov (United States)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis are presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
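    The all-to-all communication mentioned above comes from the transpose step of a distributed two-dimensional Fourier transform: each node performs 1-D FFTs on its local rows, then every node must exchange data with every other node so that former columns become local rows. The NumPy sketch below stands in for that data movement on a single machine; the array size and node count are illustrative, not taken from the described cluster.

```python
import numpy as np

# Single-machine NumPy stand-in (array size and node count are illustrative)
# for the distributed 2-D FFT dataflow: local row FFTs, a global transpose
# that forces the all-to-all exchange, then row FFTs again.
N, nodes = 8, 4
x = np.random.default_rng(1).standard_normal((N, N))

row_blocks = np.split(x, nodes, axis=0)                    # rows distributed over nodes
stage1 = [np.fft.fft(blk, axis=1) for blk in row_blocks]   # local 1-D FFTs on rows

# The transpose is the all-to-all step: column j of every node's block must
# travel to the node that will own row j in the second stage.
transposed = np.vstack(stage1).T
col_blocks = np.split(transposed, nodes, axis=0)
stage2 = [np.fft.fft(blk, axis=1) for blk in col_blocks]   # FFTs on former columns

result = np.vstack(stage2).T
assert np.allclose(result, np.fft.fft2(x))                 # matches the direct 2-D FFT
```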

  6. Science-driven system architecture: A new process for leadership class computing

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-10-19

    Over the past several years, computational scientists have observed a frustrating trend of stagnating application performance despite dramatic increases in the peak performance of high performance computers. In 2002, researchers at Lawrence Berkeley National Laboratory, Argonne National Laboratory, and IBM proposed a new process to reverse this situation [1]. This strategy rests on new types of development partnerships with computer vendors, built around the concept of science-driven computer system design, which engage applications scientists well before an architecture is available for commercialization. The process is already producing results, and has further potential for dramatically improving system efficiency. This paper documents the progress to date and the potential for future benefits. An example of this process is discussed, using the IBM Power architecture with a computer architecture design that can lead to a sustained performance of 50 to 100 Tflop/s on a broad spectrum of applications in 2006 for a reasonable cost. This partnership will establish a collaborative approach to modifying computer architecture to enable heretofore unrealized achievements in computer capability-limited fields such as nanoscience, combustion modeling, fusion, climate modeling, and astrophysics.

  7. SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    CERN Document Server

    Buyya, Rajkumar; Calheiros, Rodrigo N

    2012-01-01

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services to users and meet their quality expectations. Existing resource management systems in data centers do not yet support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system that targets the rapidly changing enterprise requirements of cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports the integration of market-based provisioning policies and virtualisation technologies for flexible alloc...

  8. The CUDA Parallel Computing Architecture

    Institute of Scientific and Technical Information of China (English)

    叶毅嘉

    2015-01-01

    In recent years, the rapid development of the graphics processing unit (GPU) has made it progressively more useful for general-purpose computing. Among the various parallel computing platforms, the Compute Unified Device Architecture (CUDA) designed by NVIDIA is widely favored by researchers because it exploits the strong computing power of the GPU to realize general-purpose parallel computing.
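    For readers unfamiliar with the CUDA execution model mentioned in the abstract, the sketch below shows its core idea, one lightweight thread per output element, using Numba's Python bindings. This is background illustration, not code from the article, and it assumes the third-party numba package and a CUDA-capable GPU.

```python
import numpy as np
from numba import cuda

# Background illustration of the CUDA model (assumes the third-party numba
# package and a CUDA-capable GPU; not code from the article): each GPU thread
# computes exactly one output element of a vector addition.
@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)                     # this thread's global index
    if i < out.size:                     # guard the padded final block
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.ones(n, dtype=np.float32)
b = 2 * np.ones(n, dtype=np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # host arrays copied to/from the GPU
print(out[:4])                                     # [3. 3. 3. 3.]
```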

  9. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. "Advances in Future Computer and Control Systems" presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  10. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. "Advances in Future Computer and Control Systems" presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  11. Preliminary design and implementation of the baseline digital baseband architecture for advanced deep space transponders

    Science.gov (United States)

    Nguyen, T. M.; Yeh, H.-G.

    1993-01-01

    The baseline design and implementation of the digital baseband architecture for advanced deep space transponders are investigated and identified. Trade studies on the selection of the number of bits for the analog-to-digital converter (ADC) and on optimum sampling schemes are presented. In addition, the proposed optimum sampling scheme is analyzed in detail. Possible implementations of the digital baseband (or digital front end) and the digital phase-locked loop (DPLL) for carrier tracking are also described.
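    Background on why the ADC bit-count trade study matters: for an ideal N-bit converter with a full-scale sinusoidal input, the textbook signal-to-quantization-noise ratio is given below (this formula is standard background, not a result from the article):

```latex
% Ideal N-bit ADC, full-scale sinusoidal input: roughly 6 dB of
% signal-to-quantization-noise ratio per additional bit.
\mathrm{SNR}_{\mathrm{dB}} \approx 6.02\,N + 1.76
```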

  12. Advanced Scientific Computing Environment Team new scientific database management task

    Energy Technology Data Exchange (ETDEWEB)

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the "future computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing the applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database management system(s) that permit use of relational, hierarchical, object-oriented, GIS, and other databases. To reach this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards; developing a new J90 prototype to provide JOSHUA capabilities on Unix platforms; developing portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extending "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  13. Electronic Service Architecture Model Assessment of Conformity to Cloud Computing Key Features

    OpenAIRE

    Stipravietis, P; Žeiris, E; Ziema, M

    2013-01-01

    The research examines electronic service execution possibilities in a cloud computing environment and the key features of cloud computing. It also offers a method which allows one to quantitatively assess the conformity of an existing e-service architecture model to the key features of cloud computing. The method allows evaluating the amount of necessary transformations and their efficiency. The offered solution is verified using the business process administered by the Motor Insurance Bureau...

  14. Computing Algorithms for Nuffield Advanced Physics.

    Science.gov (United States)

    Summers, M. K.

    1978-01-01

    Defines all recurrence relations used in the Nuffield course, to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)

  15. Laboratory Works Designed for Developing Student Motivation in Computer Architecture

    OpenAIRE

    Petre Ogrutan; Lia Elena Aciu

    2017-01-01

    In light of the current difficulties related to maintaining the students’ interest and to stimulate their motivation for learning, the authors have developed a range of new laboratory exercises intended for first-year students in Computer Science as well as for engineering students after completion of at least one course in computers. The educational goal of the herein proposed laboratory exercises is to enhance the students’ motivation and creative thinking by organizing a relaxed yet compet...

  16. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    Directory of Open Access Journals (Sweden)

    Cesar Torres-Huitzil

    2013-01-01

    Full Text Available Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k×k kernel requires k²−1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computation can be achieved by kernel decomposition and by using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture uses fewer computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters on 1024×1024 images with up to 255×255 kernels in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable in the kernel size, with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding.
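    The constant-time one-dimensional algorithm referred to above answers every window with roughly three comparisons per sample, independent of the kernel size. The following NumPy sketch shows the HGW idea in software; it is illustrative only and is not the FPGA design.

```python
import numpy as np

# Minimal NumPy sketch of the van Herk/Gil-Werman (HGW) running-max idea
# (illustrative software rendering, not the FPGA architecture). The signal is
# split into blocks of the kernel size k; running maxima are precomputed
# forward and backward within each block, so any window of size k is the max
# of one backward value and one forward value.
def hgw_max(x, k):
    n = len(x)
    pad = (-n) % k                                    # pad to a multiple of k
    xp = np.pad(x, (0, pad), constant_values=-np.inf)
    blocks = xp.reshape(-1, k)
    fwd = np.maximum.accumulate(blocks, axis=1).ravel()                    # from block starts
    bwd = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # from block ends
    # window [i, i+k-1] spans at most two blocks: combine the backward max at
    # its left edge with the forward max at its right edge
    return np.array([max(bwd[i], fwd[i + k - 1]) for i in range(n - k + 1)])

x = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5], dtype=float)
assert np.allclose(hgw_max(x, 3),
                   [max(x[i:i + 3]) for i in range(len(x) - 2)])
```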

  17. RRAM-based parallel computing architecture using k-nearest neighbor classification for pattern recognition

    Science.gov (United States)

    Jiang, Yuning; Kang, Jinfeng; Wang, Xinan

    2017-03-01

    Resistive switching memory (RRAM) is considered as one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today’s electronic systems. However, the existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behaviors is chosen as both the storage and computing components. The proposed architecture is tested by the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
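    The classification rule being accelerated here is ordinary k-nearest-neighbour voting; the crossbar's contribution is evaluating the distance stage for all stored patterns in one parallel analogue read. A plain NumPy rendering of the rule, with random stand-ins for the MNIST vectors purely for illustration, might look like this:

```python
import numpy as np

# Plain NumPy rendering of k-nearest-neighbour voting (illustrative only; the
# paper's contribution is evaluating the distance stage on an RRAM crossbar
# in one parallel analogue read). The MNIST-like vectors here are random
# stand-ins, not real data.
def knn_predict(train_x, train_y, query, k=5):
    dists = np.sum((train_x - query) ** 2, axis=1)   # distance to every pattern
    nearest = np.argsort(dists)[:k]                  # indices of the k smallest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote

rng = np.random.default_rng(0)
train_x = rng.standard_normal((100, 784))            # 100 stored 28x28 patterns
train_y = rng.integers(0, 10, size=100)
print(knn_predict(train_x, train_y, train_x[3], k=1) == train_y[3])  # True
```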

  19. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    been developed. Hybris is a prototype rendering architecture which can be tailored to many specific 3D graphics applications and implemented in various ways. Parallel software implementations for both single- and multi-processor Windows 2000 systems have been demonstrated. Working hardware/software...... codesign implementations of Hybris for standard-cell based ASIC (simulated) and FPGA technologies have been demonstrated, using manual co-synthesis for translation of a Virtual Prototyping architecture specification written in C into both optimized C source for software and a synthesizable VHDL...... specification for hardware implementation. A flexible VRML 97 3D scene graph engine with a Java interface and C++ interface has been implemented to allow flexible integration of the rendering technology into Java and C++ applications. A 3D medical visualization workstation prototype (3D-Med) is examined...

  20. Implicit Unstructured Computational Aerodynamics on Many-Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.

    2014-05-04

    This research aims to understand the performance of PETSc-FUN3D, a fully nonlinear implicit unstructured-grid incompressible or compressible Euler code with origins at NASA and the U.S. DOE, on many-integrated core architecture, and how a hybrid programming paradigm (MPI+OpenMP) can exploit Intel Xeon Phi hardware with upwards of 60 cores per node and 4 threads per core. For the current contribution, we focus on strong scaling with many-integrated core hardware. In most implicit PDE-based codes, while the linear algebraic kernel is limited by the bottleneck of memory bandwidth, the flux kernel arising in control volume discretization of the conservation law residuals and the preconditioner for the Jacobian exploit the Phi hardware well.
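    The hybrid MPI+OpenMP paradigm mentioned above typically pairs distributed-memory ranks across nodes with shared-memory threads within a node. A minimal Python sketch of that shape, using mpi4py with a thread pool standing in for OpenMP, is given below; it assumes the mpi4py package and is not the PETSc-FUN3D code.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

# Shape of a hybrid decomposition (assumes the mpi4py package; run with e.g.
# `mpiexec -n 4 python hybrid.py`; not the PETSc-FUN3D code): MPI ranks own
# slabs of the data, a thread pool stands in for OpenMP within each rank,
# and a reduction combines the partial results.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global, n_threads = 1 << 20, 4
local = np.random.default_rng(rank).standard_normal(n_global // size)

def partial_norm2(chunk):
    return float(np.dot(chunk, chunk))               # thread-local work item

with ThreadPoolExecutor(n_threads) as pool:          # intra-rank parallelism
    local_sum = sum(pool.map(partial_norm2, np.array_split(local, n_threads)))

global_sum = comm.allreduce(local_sum, op=MPI.SUM)   # inter-rank reduction
if rank == 0:
    print("global vector norm:", global_sum ** 0.5)
```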

  1. CISM-IUTAM School on Advanced Turbulent Flow Computations

    CERN Document Server

    Krause, Egon

    2000-01-01

    This book collects the lecture notes of the IUTAM School on Advanced Turbulent Flow Computations held at CISM in Udine, September 7-11, 1998. The course was intended for scientists, engineers and post-graduate students interested in the application of advanced numerical techniques for simulating turbulent flows. The topic comprises two closely connected main subjects, modelling and computation, including the mesh points necessary to simulate complex turbulent flows.

  2. Building an advanced climate model: Program plan for the CHAMMP (Computer Hardware, Advanced Mathematics, and Model Physics) Climate Modeling Program

    Energy Technology Data Exchange (ETDEWEB)

    1990-12-01

    The issue of global warming and related climatic changes from increasing concentrations of greenhouse gases in the atmosphere has received prominent attention during the past few years. The Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Climate Modeling Program is designed to contribute directly to this rapid improvement. The goal of the CHAMMP Climate Modeling Program is to develop, verify, and apply a new generation of climate models within a coordinated framework that incorporates the best available scientific and numerical approaches to represent physical, biogeochemical, and ecological processes, that fully utilizes the hardware and software capabilities of new computer architectures, that probes the limits of climate predictability, and finally that can be used to address the challenging problem of understanding the greenhouse climate issue through the ability of the models to simulate time-dependent climatic changes over extended times and with regional resolution.

  3. 3rd International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Chaki, Nabendu

    2016-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 132 scholarly articles, which have been accepted for presentation from over 550 submissions in the Third International Conference on Advanced Computing, Networking and Informatics, 2015, held in Bhubaneswar, India during June 23–25, 2015.

  4. Impact of 5-h phase advance on sleep architecture and physical performance in athletes.

    Science.gov (United States)

    Petit, Elisabeth; Mougin, Fabienne; Bourdin, Hubert; Tio, Grégory; Haffen, Emmanuel

    2014-11-01

    Travel across time zones causes jet lag and is accompanied by deleterious effects on sleep and performance in athletes. These poor performances have been evaluated in field studies but not in laboratory conditions. The purpose of this study was to evaluate, in athletes, the impact of 5-h phase advance on the architecture of sleep and physical performances (Wingate test). In a sleep laboratory, 16 male athletes (age: 22.2 ± 1.7 years, height: 178.3 ± 5.6 cm, body mass: 73.6 ± 7.9 kg) spent 1 night in baseline condition and 2 nights, 1 week apart, in phase shift condition recorded by electroencephalography to calculate sleep architecture variables. For these last 2 nights, the clock was advanced by 5 h. Core body temperature rhythm was assessed continuously. The first night with phase advance decreased total sleep time, sleep efficiency, sleep onset latency, stage 2 of nonrapid eye movement (N2), and rapid eye movement (REM) sleep compared with baseline condition, whereas the second night decreased N2 and increased slow-wave sleep and REM, thus improving the quality of sleep. After phase advance, mean power improved, which resulted in higher lactatemia. Acrophase and bathyphase of temperature occurred earlier and amplitude decreased in phase advance but the period was not modified. These results suggest that a simulated phase shift contributed to the changes in sleep architecture, but did not significantly impair physical performances in relation with early phase adjustment of temperature to the new local time.

  5. Advances in Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), held in Zhuhai, China, November 19-20, 2011. Topics covered include signal and image processing, speech and audio processing, video processing and analysis, artificial intelligence, computing and intelligent systems, machine learning, sensor and neural networks, knowledge discovery and data mining, fuzzy mathematics and applications, knowledge-based systems, hybrid systems modeling and design, risk analysis and management, and system modeling and simulation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find it stimulating in the process.

  6. Laboratory Works Designed for Developing Student Motivation in Computer Architecture

    Directory of Open Access Journals (Sweden)

    Petre Ogrutan

    2017-02-01

    Full Text Available In light of the current difficulties related to maintaining the students’ interest and to stimulate their motivation for learning, the authors have developed a range of new laboratory exercises intended for first-year students in Computer Science as well as for engineering students after completion of at least one course in computers. The educational goal of the herein proposed laboratory exercises is to enhance the students’ motivation and creative thinking by organizing a relaxed yet competitive learning environment. The authors have developed a device including LEDs and switches, which is connected to a computer. By using assembly language, commands can be issued to flash several LEDs and read the states of the switches. The effectiveness of this idea was confirmed by a statistical study.

  7. Combined Error Correction Techniques for Quantum Computing Architectures

    CERN Document Server

    Byrd, M S; Byrd, Mark S.; Lidar, Daniel A.

    2003-01-01

    Proposals for quantum computing devices are many and varied. They each have unique noise processes that make none of them fully reliable at this time. There are several error correction/avoidance techniques which are valuable for reducing or eliminating errors, but not one, alone, will serve as a panacea. One must therefore take advantage of the strength of each of these techniques so that we may extend the coherence times of the quantum systems and create more reliable computing devices. To this end we give a general strategy for using dynamical decoupling operations on encoded subspaces. These encodings may be of any form; of particular importance are decoherence-free subspaces and quantum error correction codes. We then give means for empirically determining an appropriate set of dynamical decoupling operations for a given experiment. Using these techniques, we then propose a comprehensive encoding solution to many of the problems of quantum computing proposals which use exchange-type interactions. This us...

  8. Advanced Simulation and Computing Business Plan

    Energy Technology Data Exchange (ETDEWEB)

    Rummel, E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-09

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration’s (NNSA’s) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners—partners upon whom the ASC Program relies on for today’s and tomorrow’s high performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  9. CSP: A Multifaceted Hybrid Architecture for Space Computing

    Science.gov (United States)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  10. Advanced Computing Tools and Models for Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  11. Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment

    Science.gov (United States)

    Duong, Tuan A.; Duong, Vu A.

    2012-01-01

    A novel cognitive computing architecture is conceptualized for processing multiple channels of multi-modal sensory data streams simultaneously, and fusing the information in real time to generate intelligent reaction sequences. This unique architecture is capable of assimilating parallel data streams that could be analog, digital, or synchronous/asynchronous, and could be programmed to act as a knowledge synthesizer and/or an "intelligent perception" processor. In this architecture, the bio-inspired models of visual pathway and olfactory receptor processing are combined as processing components to achieve the composite function of "searching for a source of food while avoiding the predator." The architecture is particularly suited for scene analysis from visual and odorant data.

  12. Advances in Computing and Information Technology : Proceedings of the Second International Conference on Advances in Computing and Information Technology

    CERN Document Server

    Nagamalai, Dhinaharan; Chaki, Nabendu

    2013-01-01

    The international conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals to share knowledge and results in the theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology, including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, soft computing and applications. Following a rigorous review process, a number of high-quality papers, presenting not only innovative ideas but also a well-founded evaluation and strong argumentation, were selected and collected in the present proceedings, ...

  13. Extending the horizons advances in computing, optimization, and decision technologies

    CERN Document Server

    Joseph, Anito; Mehrotra, Anuj; Trick, Michael

    2007-01-01

    Computer Science and Operations Research continue to have a synergistic relationship and this book represents the results of cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS/AI and of the high caliber of research being conducted by members of the INFORMS Computing Society. EXTENDING THE HORIZONS: Advances in Computing, Optimization, and Decision Technologies is a volume that presents the latest, leading research in the design and analysis of algorithms, computational optimization, heuristic search and learning, modeling languages, parallel and distributed computing, simulation, computational logic and visualization. This volume also emphasizes a variety of novel applications in the interface of CS, AI, and OR/MS.

  14. A Systematic Review on Recent Advances in mHealth Systems: Deployment Architecture for Emergency Response

    Directory of Open Access Journals (Sweden)

    Enrique Gonzalez

    2017-01-01

    Full Text Available The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. This review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied on the connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-Things, machine-to-machine, vehicular ad hoc networks, and service-oriented architecture. In 40% of the reviewed literature, the deployment architecture for mHealth considered traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions specifically for outdoor scenarios, and that energy consumption protocols need to be implemented and evaluated. Emergent connective technologies are redefining information management and overcoming traditional technologies.

  15. Advances in Computer Science and Education

    CERN Document Server

    Huang, Xiong

    2012-01-01

    CSE2011 is an integrated conference concentrating its focus on computer science and education. In the proceedings, you can learn much more about computer science and education from researchers all around the world. The main role of the proceedings is to serve as an exchange pillar for researchers who are working in the mentioned fields. In order to meet the high quality standards of Springer's AISC series, the organization committee has made the following efforts. Firstly, poor-quality papers were refused after review by anonymous referee experts. Secondly, periodic review meetings were held with the reviewers about five times to exchange reviewing suggestions. Finally, the conference organizers held several preliminary sessions before the conference. Through the efforts of different people and departments, the conference will be successful and fruitful.

  16. Advanced Computer Simulations of Military Incinerators

    Science.gov (United States)

    2004-12-01

    models contain 3D furnace and canister geometries and all of the relevant physics and chemistry. The destruction of chemical agent is predicted using ... computational chemistry methods; chemical kinetics have been developed that describe the incineration of organo-phosphorus nerve agents (GB, VX) and ... States. The chemical warfare agents (CWA) consist of mustard gas and other blister agents as well as organo-phosphorus nerve agents. Incineration was

  17. Advanced computational aeroelasticity and multidisciplinary application for composite curved wing

    OpenAIRE

    Kim, Dong-Hyun; Kim, Yu-Sung

    2008-01-01

    This article primarily describes advanced computational aeroelasticity and its multidisciplinary applications based on the coupled CFD (Computational Fluid Dynamics) and CSD (Computational Structural Dynamics) method. A modal-based coupled nonlinear aeroelastic analysis system incorporating unsteady Euler aerodynamics has been developed based on a high-speed parallel processing technique. It is clearly expected to give accurate and practical engineering data in the design fields of...

  18. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    Science.gov (United States)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.
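    The host-dispatch pattern in the claim, decompose an operation, scatter data blocks to the processor array, initiate computation, and gather results, can be illustrated with ordinary worker processes standing in for the SIMD microprocessors. The toy sketch below is illustrative only and has no relation to the patented hardware.

```python
from multiprocessing import Pool
import numpy as np

# Toy rendering of the claimed dispatch pattern (illustrative only, no
# relation to the patented hardware): a host partitions a matrix-vector
# product, scatters row blocks to worker processes standing in for the SIMD
# microprocessors, and gathers the partial results.
def simd_worker(args):
    rows, x = args                                   # one row block per "processor"
    return rows @ x                                  # fine-grain parallelism lives here

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, x = rng.standard_normal((8, 8)), rng.standard_normal(8)
    blocks = np.split(A, 4, axis=0)                  # host decomposes the data
    with Pool(4) as pool:                            # four worker "processors"
        parts = pool.map(simd_worker, [(blk, x) for blk in blocks])
    y = np.concatenate(parts)
    assert np.allclose(y, A @ x)                     # matches the direct product
```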

  19. A spatially localized architecture for fast and modular DNA computing

    Science.gov (United States)

    Chatterjee, Gourab; Dalchau, Neil; Muscat, Richard A.; Phillips, Andrew; Seelig, Georg

    2017-09-01

    Cells use spatial constraints to control and accelerate the flow of information in enzyme cascades and signalling networks. Synthetic silicon-based circuitry similarly relies on spatial constraints to process information. Here, we show that spatial organization can be a similarly powerful design principle for overcoming limitations of speed and modularity in engineered molecular circuits. We create logic gates and signal transmission lines by spatially arranging reactive DNA hairpins on a DNA origami. Signal propagation is demonstrated across transmission lines of different lengths and orientations and logic gates are modularly combined into circuits that establish the universality of our approach. Because reactions preferentially occur between neighbours, identical DNA hairpins can be reused across circuits. Co-localization of circuit elements decreases computation time from hours to minutes compared to circuits with diffusible components. Detailed computational models enable predictive circuit design. We anticipate our approach will motivate using spatial constraints for future molecular control circuit designs.

  20. Information management architecture for an integrated computing environment for the Environmental Restoration Program. Environmental Restoration Program, Volume 3, Interim technical architecture

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    This third volume of the Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program--the Interim Technical Architecture (TA) (referred to throughout the remainder of this document as the ER TA)--represents a key milestone in establishing a coordinated information management environment in which information initiatives can be pursued with the confidence that redundancy and inconsistencies will be held to a minimum. This architecture is intended to be used as a reference by anyone whose responsibilities include the acquisition or development of information technology for use by the ER Program. The interim ER TA provides technical guidance at three levels. At the highest level, the technical architecture provides an overall computing philosophy or direction. At this level, the guidance does not address specific technologies or products but addresses more general concepts, such as the use of open systems, modular architectures, graphical user interfaces, and architecture-based development. At the next level, the technical architecture provides specific information technology recommendations regarding a wide variety of specific technologies. These technologies include computing hardware, operating systems, communications software, database management software, application development software, and personal productivity software, among others. These recommendations range from the adoption of specific industry or Martin Marietta Energy Systems, Inc. (Energy Systems) standards to the specification of individual products. At the third level, the architecture provides guidance regarding implementation strategies for the recommended technologies that can be applied to individual projects and to the ER Program as a whole.

  1. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi...

  2. Advances in quantum control of three-level superconducting circuit architectures

    Energy Technology Data Exchange (ETDEWEB)

    Falci, G.; Paladino, E. [Dipartimento di Fisica e Astronomia, Università di Catania (Italy); CNR-IMM UOS Università (MATIS), Consiglio Nazionale delle Ricerche, Catania (Italy); INFN, Sezione di Catania (Italy); Di Stefano, P.G. [Dipartimento di Fisica e Astronomia, Università di Catania (Italy); Centre for Theoretical Atomic, Molecular and Optical Physics, School of Mathematics and Physics, Queen's University Belfast (United Kingdom); Ridolfo, A.; D'Arrigo, A. [Dipartimento di Fisica e Astronomia, Università di Catania (Italy); Paraoanu, G.S. [Low Temperature Laboratory, Department of Applied Physics, Aalto University School of Science (Finland)

    2017-06-15

    Advanced control in the Lambda (Λ) scheme of a solid-state architecture of artificial atoms and quantized modes would allow translating to the solid-state realm a whole class of phenomena from quantum optics, thus exploiting the new physics emerging in larger integrated quantum networks and at stronger couplings. However, control of solid-state devices is constrained by selection rules, arising from symmetries which on the other hand yield protection from decoherence, and by design issues, for instance the fact that coupling to microwave cavities is not directly switchable. We present two new schemes for the Λ-STIRAP control problem under the constraint that one or two classical driving fields are always on. We show how these protocols can be converted to apply to circuit-QED architectures. We finally illustrate an application to coherent spectroscopy of the so-called ultrastrong atom-cavity coupling regime. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. Advanced coupled-micro-resonator architectures for dispersion and spectral engineering applications

    Science.gov (United States)

    Van, Vien

    2009-02-01

    We report recent progress in the design and fabrication of coupled optical micro-resonators and their applications in realizing compact OEIC devices for optical spectral engineering. By leveraging synthesis techniques for analog and digital electrical circuits, advanced coupled-microring device architectures can be realized with complexity and functionality approaching those of state-of-the-art microwave filters. In addition, the traveling-wave nature of microring resonators can be exploited to realize novel devices not possible with standing-wave resonators. Applications of coupled-micro-resonator devices in realizing complex optical transfer functions for amplitude, phase and group delay engineering will be presented. Progress in the practical implementation of these devices in the Silicon-on-Insulator OEIC platform will be highlighted, along with the challenges and potential for constructing very high order optical filters using coupled-microring architectures.

  4. Computer Security Primer: Systems Architecture, Special Ontology and Cloud Virtual Machines

    Science.gov (United States)

    Waguespack, Leslie J.

    2014-01-01

    With the increasing proliferation of multitasking and Internet-connected devices, security has reemerged as a fundamental design concern in information systems. The shift of IS curricula toward a largely organizational perspective of security leaves little room for focus on its foundation in systems architecture, the computational underpinnings of…

  5. New multi-DSP parallel computing architecture for real-time image processing

    Institute of Scientific and Technical Information of China (English)

    Hu Junhong; Zhang Tianxu; Jiang Haoyang

    2006-01-01

    The flexibility of traditional image processing systems is limited because those systems are designed for specific applications. In this paper, a new TMS320C64x-based multi-DSP parallel computing architecture is presented. It has many promising characteristics, such as powerful computing capability, broad I/O bandwidth, topology flexibility, and expansibility. The parallel system's performance is evaluated by practical experiment.

  6. A Framework for Evaluating Computer Architectures to Support Systems with Security Requirements, with Applications.

    Science.gov (United States)

    1987-11-05

    ...develops a set of criteria for evaluating computer architectures that are to support systems with security requirements. Central to these criteria is the... Appendix B: DEC VAX-11/780 overview. The VAX-11/780 is a 32-bit computer with a virtual memory space of up to 4 Gbytes [B1].

  7. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, James H. [University of North Florida; Cox, Philip [University of North Florida; Harrington, William J [University of North Florida; Campbell, Joseph L [University of North Florida

    2013-09-03

    ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE's R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications.

    PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: To engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, to further refine them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF's novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

  8. MAINS: MULTI-AGENT INTELLIGENT SERVICE ARCHITECTURE FOR CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    T. Joshva Devadas

    2014-04-01

    Full Text Available Computing has been transformed into a model of commoditized services, modeled on utility services such as water and electricity. The Internet has been stunningly successful over the course of the past three decades in supporting a multitude of distributed applications and a wide variety of network technologies. However, its popularity has become the biggest impediment to its further growth with handheld devices such as mobiles and laptops. Agents are intelligent software systems that work on behalf of others. Agents are incorporated in many innovative applications in order to improve the performance of the system; an agent uses its possessed knowledge to react with the system and helps to improve its performance. Agents are introduced into cloud computing to minimize the response time when a similar request is raised by an end user elsewhere in the globe. In this paper, we introduce a Multi Agent Intelligent System (MAINS) prior to the cloud service models and test it using a sample dataset. The performance of the MAINS layer was analyzed in three aspects, and the outcome of the analysis proves that the MAINS layer provides a flexible model for creating cloud applications and deploying them in a variety of applications.
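
    The response-time mechanism described in this abstract (an agent answering a repeated, similar request from its own knowledge rather than re-invoking the cloud service) can be illustrated as a small caching proxy. This is a hypothetical sketch, not the MAINS implementation:

    ```python
    # Hypothetical sketch of the agent behaviour described above: an agent
    # caches service responses so that a repeated (similar) request is
    # answered locally instead of re-invoking the cloud service.
    import time

    def slow_cloud_service(request: str) -> str:
        time.sleep(0.5)                    # stand-in for a remote call
        return f"result({request})"

    class CachingAgent:
        def __init__(self, service):
            self.service, self.knowledge = service, {}

        def handle(self, request: str) -> str:
            if request not in self.knowledge:      # first time: go to the cloud
                self.knowledge[request] = self.service(request)
            return self.knowledge[request]         # repeats answered locally

    agent = CachingAgent(slow_cloud_service)
    for r in ("job-A", "job-A", "job-B"):
        t0 = time.perf_counter()
        agent.handle(r)
        print(r, f"{time.perf_counter() - t0:.3f}s")  # second job-A is ~instant
    ```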

  9. Building an Advanced Computing Environment with SAN Support

    Institute of Scientific and Technical Information of China (English)

    Dajian YANG; Mei MA; et al.

    2001-01-01

    The current computing environment of our Computing Center at IHEP uses a SAS (Server Attached Storage) architecture, attaching all the storage devices directly to the machines. This kind of storage strategy cannot properly meet the requirements of our BEPC II/BESIII project. Thus we designed and implemented a SAN-based computing environment, which consists of several computing farms, a three-level storage pool, a set of storage management software and a web-based data management system. The features of our system include cross-platform data sharing, fast data access, high scalability, convenient storage management and data management.

  10. Novel photonic bandgap based architectures for quantum computers and networks

    Science.gov (United States)

    Guney, Durdu

    All of the approaches for quantum information processing have their own advantages, but unfortunately also their own drawbacks. Ideally, one would merge the most attractive features of those different approaches in a single technology. We envision that large-scale photonic crystal (PC) integrated circuits and fibers could be the basis for robust and compact quantum circuits and processors of the next generation quantum computers and networking devices. Cavity QED, solid-state, and (non)linear optical models for computing, and optical fiber approach for communications are the most promising candidates to be improved through this novel technology. In our work, we consider both digital and analog quantum computing. In the digital domain, we first perform gate-level analysis. To achieve this task, we solve the Jaynes-Cummings Hamiltonian with time-dependent coupling parameters under the dipole and rotating-wave approximations for a 3D PC single-mode cavity with a sufficiently high Q-factor. We then exploit the results to show how to create a maximally entangled state of two atoms and how to implement several quantum logic gates: a dual-rail Hadamard gate, a dual-rail NOT gate, and a SWAP gate. In all of these operations, we synchronize atoms, as opposed to previous studies with PCs. The method has the potential for extension to N-atom entanglement, universal quantum logic operations, and the implementation of other useful, cavity QED-based quantum information processing tasks. In the next part of the digital domain, we study circuit-level implementations. We design and simulate an integrated teleportation and readout circuit on a single PC chip. The readout part of our device can not only be used on its own but can also be integrated with other compatible optical circuits to achieve atomic state detection. Further improvement of the device in terms of compactness and robustness is possible by integrating with sources and detectors in the optical regime. In the analog
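
    As a minimal numerical companion to the gate-level analysis described above, the sketch below builds a Jaynes-Cummings Hamiltonian in the rotating-wave approximation for one cavity mode coupled to one two-level atom. The truncation N and all frequencies are illustrative placeholders, not values from the study:

    ```python
    import numpy as np

    # Minimal Jaynes-Cummings Hamiltonian under the rotating-wave
    # approximation (hbar = 1). N and the frequencies below are
    # illustrative placeholders, not parameters from the study above.
    N, wc, wa, g = 5, 1.0, 1.0, 0.1

    a  = np.diag(np.sqrt(np.arange(1, N)), k=1)   # cavity annihilation operator
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])       # atomic lowering operator
    sz = np.diag([1.0, -1.0])                     # Pauli-Z

    IN, I2 = np.eye(N), np.eye(2)
    H = (wc * np.kron(a.T @ a, I2)                # cavity energy
         + 0.5 * wa * np.kron(IN, sz)             # atomic energy
         + g * (np.kron(a.T, sm) + np.kron(a, sm.T)))  # exchange coupling

    # Eigenvalues show the dressed-state (Rabi-split) ladder.
    print(np.round(np.linalg.eigvalsh(H), 4))
    ```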

  11. Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation. Volume 1. Executive Summary

    Science.gov (United States)

    1989-01-20

    Final report (SA/TR-2/89, A003): Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation, Volume 1: Executive Summary.

  12. Computational neuroscience for advancing artificial intelligence

    Directory of Open Access Journals (Sweden)

    Fernando P. Ponce

    2011-07-01

    Full Text Available Review of the book by Alonso, E. and Mondragón, E. (2011). Hershey, NY: Medical Information Science Reference. Neuroscience as a discipline pursues an understanding of the brain and its relationship to the functioning of the mind through analysis of the interaction of diverse physical, chemical and biological processes (Bassett & Gazzaniga, 2011). At the same time, numerous disciplines have progressively made significant contributions to this endeavour, such as mathematics, psychology and philosophy, among others. As a product of this effort, complementary disciplines such as cognitive neuroscience, neuropsychology and computational neuroscience have appeared alongside traditional neuroscience (Bengio, 2007; Dayan & Abbott, 2005). It is in the context of computational neuroscience, as a discipline complementary to traditional neuroscience, that Alonso and Mondragón (2011) edit the book Computational Neuroscience for Advancing Artificial Intelligence: Models, Methods and Applications.

  13. Adaptive Kinetic-Fluid Solvers for Heterogeneous Computing Architectures

    CERN Document Server

    Zabelok, Sergey; Kolobov, Vladimir

    2015-01-01

    This paper describes recent progress towards porting a Unified Flow Solver (UFS) to heterogeneous parallel computing. UFS is an adaptive kinetic-fluid simulation tool, which combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. The main challenge of porting UFS to graphics processing units (GPUs) comes from the dynamically adapted mesh, which causes irregular data access. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) module, and the Lattice Boltzmann Method (LBM) solver, all using octree Cartesian mesh with AMR. Double digit speedups on single GPU and good scaling for multi-GPU have been demonstrated.

  14. Communication efficient basic linear algebra computations on hypercube architectures

    Energy Technology Data Exchange (ETDEWEB)

    Johnsson, S.L.

    1987-04-01

    This paper presents a few algorithms for embedding loops and multidimensional arrays in hypercubes, with emphasis on proximity-preserving embeddings. A proximity-preserving embedding minimizes the need for communication bandwidth in computations requiring nearest-neighbor communication. Two storage schemes for "large" problems on "small" machines are suggested and analyzed, and algorithms for matrix transpose, multiplying matrices, factoring matrices, and solving triangular linear systems are presented. A few complete binary tree embeddings are described and analyzed. The data movement in the matrix algorithms is analyzed and it is shown that in the majority of cases the directed routing paths intersect only at nodes of the hypercube, allowing for a maximum degree of pipelining.
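
    For readers unfamiliar with proximity-preserving embeddings, the sketch below shows the classic binary-reflected Gray-code mapping of a ring (loop) of 2^d nodes onto a d-dimensional hypercube, in which consecutive ring nodes always occupy adjacent hypercube nodes; the paper's algorithms generalize this idea to multidimensional arrays:

    ```python
    # Proximity-preserving embedding of a ring of 2**d nodes into a
    # d-dimensional hypercube via the binary-reflected Gray code:
    # consecutive ring positions land on hypercube nodes whose addresses
    # differ in exactly one bit, i.e. on nearest neighbours.
    def gray(i: int) -> int:
        return i ^ (i >> 1)

    d = 4
    ring = [gray(i) for i in range(2 ** d)]
    for i, node in enumerate(ring):
        nxt = ring[(i + 1) % len(ring)]
        assert bin(node ^ nxt).count("1") == 1   # Hamming distance 1, incl. wraparound
    print([format(node, f"0{d}b") for node in ring])
    ```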

  15. Adaptation of the anelastic solver EULAG to high performance computing architectures.

    Science.gov (United States)

    Wójcik, Damian; Ciżnicki, Miłosz; Kopta, Piotr; Kulczewski, Michał; Kurowski, Krzysztof; Piotrowski, Zbigniew; Rojek, Krzysztof; Rosa, Bogdan; Szustak, Łukasz; Wyrzykowski, Roman

    2014-05-01

    In recent years there has been widespread interest in employing heterogeneous and hybrid supercomputing architectures for geophysical research. An especially promising application for modern supercomputing architectures is numerical weather prediction (NWP). Adapting traditional NWP codes to new machines based on multi- and many-core processors, such as GPUs, makes it possible to increase computational efficiency and decrease energy consumption. This offers a unique opportunity to develop simulations with finer grid resolutions and computational domains larger than ever before. Further, it enables extending the range of scales represented in the model, so that the accuracy of representation of the simulated atmospheric processes can be improved. Consequently, it allows the quality of weather forecasts to be improved. A coalition of Polish scientific institutions launched a project aimed at adapting the EULAG fluid solver for future high-performance computing platforms. EULAG is currently being implemented as a new dynamical core of the COSMO Consortium weather prediction framework. The solver code combines features of stencil and point-wise computations. Its communication scheme consists of both halo-exchange subroutines and global reduction functions. Within the project, two main modules of EULAG, namely the MPDATA advection scheme and the iterative GCR elliptic solver, are analyzed and optimized. Relevant techniques have been chosen and applied to accelerate code execution on modern HPC architectures: stencil decomposition, block decomposition (with weighting analysis between computation and communication), reduction of inter-cache communication by partitioning of cores into independent teams, cache reuse and vectorization. Experiments matching computational domain topology to cluster topology are performed as well. The parallel formulation was extended from pure MPI to a hybrid MPI-OpenMP approach. Porting to GPU using CUDA directives is in progress. Preliminary results of performance of the...
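
    The communication scheme mentioned above, halo exchanges plus global reductions, can be sketched with mpi4py. This is a generic 1-D periodic halo-exchange pattern under assumed array sizes, not EULAG code:

    ```python
    # Generic 1-D periodic halo exchange plus a global reduction with
    # mpi4py -- a sketch of the communication pattern described above,
    # not EULAG code. Run with, e.g.: mpiexec -n 4 python halo.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    u = np.full(10, float(rank))              # interior cells of this subdomain
    halo_l, halo_r = np.empty(1), np.empty(1)

    # Exchange one-cell halos with both neighbours.
    comm.Sendrecv(u[:1], dest=left, recvbuf=halo_r, source=right)
    comm.Sendrecv(u[-1:], dest=right, recvbuf=halo_l, source=left)

    # Global reduction, e.g. a residual norm in an iterative elliptic solver.
    norm = np.sqrt(comm.allreduce(float(np.dot(u, u)), op=MPI.SUM))
    if rank == 0:
        print("received halos:", halo_l[0], halo_r[0], "global norm:", norm)
    ```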

  16. Performance prediction of finite-difference solvers for different computer architectures

    Science.gov (United States)

    Louboutin, Mathias; Lange, Michael; Herrmann, Felix J.; Kukreja, Navjot; Gorman, Gerard

    2017-08-01

    The life-cycle of a partial differential equation (PDE) solver is often characterized by three development phases: the development of a stable numerical discretization; the development of a correct (verified) implementation; and the optimization of the implementation for different computer architectures. Often it is only after significant time and effort has been invested that the performance bottlenecks of a PDE solver are fully understood, and the precise details vary between different computer architectures. One way to mitigate this issue is to establish a reliable performance model that allows a numerical analyst to make reliable predictions of how well a numerical method would perform on a given computer architecture, before embarking upon potentially long and expensive implementation and optimization phases. The availability of a reliable performance model also saves developer effort, as it both informs the developer what kinds of optimisations are likely to be beneficial and indicates when the maximum expected performance has been reached, so that optimisation work should stop. We show how the discretization of a wave equation can be theoretically studied to understand the performance limitations of the method on modern computer architectures. We focus on the roofline model, now broadly used in the high-performance computing community, which considers the achievable performance in terms of the peak memory bandwidth and peak floating point performance of a computer with respect to algorithmic choices. A first-principles analysis of operational intensity for key time-stepping finite-difference algorithms is presented. With this information available at the time of algorithm design, the expected performance on target computer systems can be used as a driver for algorithm design.
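
    In the roofline model, the attainable performance is simply the smaller of the machine's peak floating-point rate and the product of operational intensity and peak memory bandwidth. A minimal calculator, with illustrative machine numbers rather than figures from the paper:

    ```python
    # Roofline model: attainable performance is the smaller of peak
    # compute and operational intensity times peak memory bandwidth.
    # The machine numbers below are illustrative, not from the paper.
    def roofline(oi, peak_gflops=1000.0, peak_gbs=100.0):
        """Attainable GFLOP/s for operational intensity oi (flop/byte)."""
        return min(peak_gflops, oi * peak_gbs)

    # Low-order finite-difference time stepping is typically memory bound:
    for oi in (0.25, 1.0, 4.0, 16.0):
        print(f"OI = {oi:5.2f} flop/byte -> {roofline(oi):7.1f} GFLOP/s")
    ```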

  17. Advanced Trace Pattern For Computer Intrusion Discovery

    CERN Document Server

    Rahayu, S Siti; Shahrin, S; Zaki, M Mohd; Faizal, M A; Zaheera, Z A

    2010-01-01

    The number of crimes committed via malware intrusion is ever growing, as the number of malware variants increases tremendously and the usage of the internet expands globally. Malicious code is easily obtained and used as a weapon to attain illegal objectives. Hence, in this research, diverse logs from different OSI layers are explored to identify the traces left in the attacker's and victim's logs, in order to establish worm trace patterns for defending against the attack and to help reveal the true attacker or victim. This paper focuses on malware intrusion and a traditional worm, namely the Sasser worm and its variants. The concept of a trace pattern is created by fusing the attacker's and the victim's perspectives. Therefore, the objective of this paper is to propose general worm trace patterns for the attacker, the victim and the multi-step (attacker/victim) case by combining both perspectives. These three proposed worm trace patterns can be extended into research areas in alert correlation and computer forens...

  19. Advances in FDTD computational electrodynamics photonics and nanotechnology

    CERN Document Server

    Oskooi, Ardavan; Johnson, Steven G

    2013-01-01

    Advances in photonics and nanotechnology have the potential to revolutionize humanity's ability to communicate and compute. To pursue these advances, it is mandatory to understand and properly model interactions of light with materials such as silicon and gold at the nanoscale, i.e., the span of a few tens of atoms laid side by side. These interactions are governed by the fundamental Maxwell's equations of classical electrodynamics, supplemented by quantum electrodynamics. This book presents the current state-of-the-art in formulating and implementing computational models of these interactions. Maxwell's equations are solved using the finite-difference time-domain (FDTD) technique, pioneered by the senior editor, whose prior Artech books in this area are among the top ten most-cited in the history of engineering. You discover the most important advances in all areas of FDTD and PSTD computational modeling of electromagnetic wave interactions. This cutting-edge resource helps you understand the latest develo...

  20. Scalable quantum computer architecture with coupled donor-quantum dot qubits

    Science.gov (United States)

    Schenkel, Thomas; Lo, Cheuk Chi; Weis, Christoph; Lyon, Stephen; Tyryshkin, Alexei; Bokor, Jeffrey

    2014-08-26

    A quantum bit computing architecture includes a plurality of single spin memory donor atoms embedded in a semiconductor layer, a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, wherein a first voltage applied across at least one pair of the aligned quantum dot and donor atom controls a donor-quantum dot coupling. A method of performing quantum computing in a scalable architecture quantum computing apparatus includes arranging a pattern of single spin memory donor atoms in a semiconductor layer, forming a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, applying a first voltage across at least one aligned pair of a quantum dot and donor atom to control a donor-quantum dot coupling, and applying a second voltage between one or more quantum dots to control a Heisenberg exchange J coupling between quantum dots and to cause transport of a single spin polarized electron between quantum dots.

  1. A cerebellar neuroprosthetic system: computational architecture and in vivo experiments

    Directory of Open Access Journals (Sweden)

    Ivan Herreros Alonso

    2014-05-01

    Full Text Available Emulating the input-output functions performed by a brain structure opens the possibility for developing neuro-prosthetic systems that replace damaged neuronal circuits. Here, we demonstrate the feasibility of this approach by replacing the cerebellar circuit responsible for the acquisition and extinction of motor memories. Specifically, we show that a rat can undergo acquisition, retention and extinction of the eye-blink reflex even though the biological circuit responsible for this task has been chemically inactivated via anesthesia. This is achieved by first developing a computational model of the cerebellar microcircuit involved in the acquisition of conditioned reflexes and training it with synthetic data generated based on physiological recordings. Secondly, the cerebellar model is interfaced with the brain of an anesthetized rat, connecting the model's inputs and outputs to afferent and efferent cerebellar structures. As a result, we show that the anesthetized rat, equipped with our neuro-prosthetic system, can be classically conditioned to the acquisition of an eye-blink response. However, non-stationarities in the recorded biological signals limit the performance of the cerebellar model. Thus, we introduce an updated cerebellar model and validate it with physiological recordings showing that learning becomes stable and reliable. The resulting system represents an important step towards replacing lost functions of the central nervous system via neuro-prosthetics, obtained by integrating a synthetic circuit with the afferent and efferent pathways of a damaged brain region. These results also embody an early example of science-based medicine, where on the one hand the neuro-prosthetic system directly validates a theory of cerebellar learning that informed the design of the system, and on the other one it takes a step towards the development of neuro-prostheses that could recover lost learning functions in animals and, in the longer term

  2. 9th International Conference on Advanced Computing & Communication Technologies

    CERN Document Server

    Mandal, Jyotsna; Auluck, Nitin; Nagarajaram, H

    2016-01-01

    This book highlights a collection of high-quality peer-reviewed research papers presented at the Ninth International Conference on Advanced Computing & Communication Technologies (ICACCT-2015) held at Asia Pacific Institute of Information Technology, Panipat, India during 27–29 November 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Advanced Computing and Communication Technology.

  3. Service Oriented Architecture for Remote Sensing Satellite Telemetry Data Implemented on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Abdelfattah El-Sharkawi

    2013-06-01

    Full Text Available This paper articulates how Service Oriented Architecture (SOA) and cloud computing together can facilitate technology setup in telemetry (TM) processing, with a case study from the Egyptian space program (ESP) and a comparative study with the space situational awareness (SSA) program of the European Space Agency (ESA). Moreover, this paper illustrates how cloud computing services and deployment models enable software and hardware decoupling and make flexible TM data analysis possible. The large amount of available computational resources facilitates a shift in approaches to software development, deployment and operations.

  4. Simulation of electronic structure Hamiltonians in a superconducting quantum computer architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kaicher, Michael; Wilhelm, Frank K. [Theoretical Physics, Saarland University, 66123 Saarbruecken (Germany); Love, Peter J. [Department of Physics, Haverford College, Haverford, Pennsylvania 19041 (United States)

    2015-07-01

    Quantum chemistry has become one of the most promising applications within the field of quantum computation. Simulating the electronic structure Hamiltonian (ESH) in the Bravyi-Kitaev (BK) basis to compute the ground-state energies of atoms and molecules reduces the number of qubit operations needed to simulate a single fermionic operation to O(log(n)), as compared to O(n) in the Jordan-Wigner transformation. In this work we present the details of the BK transformation, show an example implementation in a superconducting quantum computer architecture, and compare it to the most recent quantum chemistry algorithms, suggesting a constant overhead.
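
    The locality difference quoted above can be checked empirically with the open-source OpenFermion package (an assumption of this sketch; the abstract does not mention it): a single fermionic creation operator maps to Pauli strings of weight roughly n under Jordan-Wigner but only roughly log(n) under Bravyi-Kitaev.

    ```python
    # Compare encoding locality: the maximum Pauli weight of a single
    # fermionic creation operator under Jordan-Wigner vs Bravyi-Kitaev.
    # Requires the open-source OpenFermion package (an assumption here).
    from openfermion import FermionOperator, bravyi_kitaev, jordan_wigner

    n = 16
    op = FermionOperator(f"{n - 1}^")        # creation operator on the last mode

    jw = jordan_wigner(op)
    bk = bravyi_kitaev(op, n_qubits=n)
    for name, qop in (("Jordan-Wigner", jw), ("Bravyi-Kitaev", bk)):
        weight = max(len(term) for term in qop.terms)
        print(f"{name}: max Pauli weight = {weight}")   # ~n vs ~log2(n)
    ```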

  5. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    Energy Technology Data Exchange (ETDEWEB)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    2017-02-06

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Super Computing, or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  6. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    Science.gov (United States)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in the industry and research community, it also attracts more attention at the customer level. The large number of users and the high frequency of job requests in the consumer market make it challenging. Clearly, the current Client/Server (C/S)-based architecture will become unfeasible for supporting large-scale Grid applications due to its poor scalability and poor fault-tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture to realize a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  7. Recovery Act: Advanced Interaction, Computation, and Visualization Tools for Sustainable Building Design

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, Donald P. [Cornell Univ., Ithaca, NY (United States); Hencey, Brandon M. [Cornell Univ., Ithaca, NY (United States)

    2013-08-20

    Current building energy simulation technology requires excessive labor, time and expertise to create building energy models, excessive computational time for accurate simulations and difficulties with the interpretation of the results. These deficiencies can be ameliorated using modern graphical user interfaces and algorithms which take advantage of modern computer architectures and display capabilities. To prove this hypothesis, we developed an experimental test bed for building energy simulation. This novel test bed environment offers an easy-to-use interactive graphical interface, provides access to innovative simulation modules that run at accelerated computational speeds, and presents new graphics visualization methods to interpret simulation results. Our system offers the promise of dramatic ease of use in comparison with currently available building energy simulation tools. Its modular structure makes it suitable for early stage building design, as a research platform for the investigation of new simulation methods, and as a tool for teaching concepts of sustainable design. Improvements in the accuracy and execution speed of many of the simulation modules are based on the modification of advanced computer graphics rendering algorithms. Significant performance improvements are demonstrated in several computationally expensive energy simulation modules. The incorporation of these modern graphical techniques should advance the state of the art in the domain of whole building energy analysis and building performance simulation, particularly at the conceptual design stage when decisions have the greatest impact. More importantly, these better simulation tools will enable the transition from prescriptive to performative energy codes, resulting in better, more efficient designs for our future built environment.

  8. A Multifaceted Approach to Modernizing NASA's Advanced Multi-Mission Operations System (AMMOS) System Architecture

    Science.gov (United States)

    Estefan, Jeff A.; Giovannoni, Brian J.

    2014-01-01

    The Advanced Multi-Mission Operations System (AMMOS) is NASA's premier space mission operations product line offering for use in deep-space robotic and astrophysics missions. The general approach to AMMOS modernization over the course of its 29-year history exemplifies a continual, evolutionary approach, with periods of sponsor investment peaks and valleys in between. Today, the Multimission Ground Systems and Services (MGSS) office, the program office that manages the AMMOS for NASA, actively pursues modernization initiatives and continues to evolve the AMMOS by incorporating enhanced capabilities and newer technologies into its end-user tool and service offerings. Despite the myriad modernization investments that have been made over the evolutionary course of the AMMOS, pain points remain. These pain points, based on interviews with numerous flight project mission operations personnel, can be classified principally into two major categories: 1) information-related issues, and 2) process-related issues. By information-related issues, we mean pain points associated with the management and flow of MOS data across the various system interfaces. By process-related issues, we mean pain points associated with the MOS activities performed by mission operators (i.e., humans) and the supporting software infrastructure used in support of those activities. In this paper, three foundational concepts (Timeline, Closed Loop Control, and Separation of Concerns) collectively form the basis for expressing a set of core architectural tenets that provides a multifaceted approach to AMMOS system architecture modernization, intended to address the information- and process-related issues. Each of these architectural tenets will be further explored in this paper. Ultimately, we envision the application of these core tenets resulting in a unified vision of a future-state architecture for the AMMOS, one that is intended to result in a highly adaptable, highly efficient, and highly cost...

  9. A COMPUTER APPLICATION FOR THE ARCHITECTURAL PROGRAM DEVELOPMENT IN DESIGN EDUCATION

    Directory of Open Access Journals (Sweden)

    Daniel de Carvalho Moreira

    2012-02-01

    Full Text Available The development of the architectural program in the design studio faces several difficulties. The purpose of the program is to describe the conditions in which the building being designed will operate; this requires much information and organization. Due to its complexity, the definition of the architectural program in design disciplines is often simplified. This article discusses this issue and proposes a computer application (SINFORMA) that gathers information about the building and the theme of the project in order to develop the architectural program based on structures proposed by bibliographic references. SINFORMA is composed of a framework which includes a database and modules which analyze and organize functional requirements, according to the Problem Seeking method and the contemporary values of architecture enumerated by Hershberger. It is discussed how the application can be applied in design education and how it offers students a practical approach and comprehensive data analysis for the design of the built environment. Keywords: Architectural programming, Architectural design, Education.

  10. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    Science.gov (United States)

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  11. An $\Theta(\sqrt{n})$-depth Quantum Adder on a 2D NTC Quantum Computer Architecture

    CERN Document Server

    Choi, Byung-Soo

    2010-01-01

    In this work, we propose an adder for the 2D NTC architecture, designed to match the architectural constraints of many quantum computing technologies. The chosen architecture allows the layout of logical qubits in two dimensions and the concurrent execution of one- and two-qubit gates with nearest-neighbor interaction only. The proposed adder works in three phases. In the first phase, the first column generates the summation output and the other columns do the carry-lookahead operations. In the second phase, these intermediate values are propagated from column to column, preparing for computation of the final carry for each register position. In the last phase, each column, except the first one, generates the summation output using this column-level carry. The depth and the number of qubits of the proposed adder are $\Theta(\sqrt{n})$ and O(n), respectively. The proposed adder executes faster than the adders designed for the 1D NTC architecture when the length of the input registers $n$ is larger than 58.

  12. Reststrahlen Band Optics for the Advancement of Far-Infrared Optical Architecture

    Science.gov (United States)

    Streyer, William Henderson

    The dissertation aims to build a case for the benefits and means of investigating novel optical materials and devices operating in the underdeveloped far-infrared (20 - 60 microns) region of the electromagnetic spectrum. This dissertation and the proposed future investigations described here have the potential to further the advancement of new and enhanced capabilities in fields such as astronomy, medicine, and the petrochemical industry. The first several completed projects demonstrate techniques for developing far-infrared emission sources using selective thermal emitters, which could operate more efficiently than their simple blackbody counterparts commonly used as sources in this wavelength region. The later projects probe the possible means of linking bulk optical phonon populations through interaction with surface modes to free space photons. This is a breakthrough that would enable the development of a new class of light sources operating in the far-infrared. Chapter 1 introduces the far-infrared wavelength range along with many of its current and potential applications. The limited capabilities of the available optical architecture in this range are outlined along with a discussion of the state-of-the-art technology available in this range. Some of the basic physical concepts routinely applied in this dissertation are reviewed; namely, the Drude formalism, semiconductor Reststrahlen bands, and surface polaritons. Lastly, some of the physical challenges that impede the further advancement of far-infrared technology, despite remarkable recent success in adjacent regions of the electromagnetic spectrum, are discussed. Chapter 2 describes the experimental and computational methods employed in this dissertation. Spectroscopic techniques used to investigate both the mid-infrared and far-infrared wavelength ranges are reviewed, including a brief description of the primary instrument of infrared spectroscopy, the Fourier Transform Infrared (FTIR) spectrometer

  13. Advances in computers dependable and secure systems engineering

    CERN Document Server

    Hurson, Ali

    2012-01-01

    Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field. In-depth surveys and tutorials on new computer technology; well-known authors and researchers in the field; extensive bibliographies with m...

  14. Innovations and Advances in Computer, Information, Systems Sciences, and Engineering

    CERN Document Server

    Sobh, Tarek

    2013-01-01

    Innovations and Advances in Computer, Information, Systems Sciences, and Engineering includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2011). The contents of this book are a set of rigorously reviewed, world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology and Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.

  15. Using Advanced Computer Vision Algorithms on Small Mobile Robots

    Science.gov (United States)

    2006-04-20

    ...this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: (1) object classification using a boosted cascade of classifiers trained with the AdaBoost training...

  16. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    Science.gov (United States)

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  17. Special issue on advances in computer entertainment: editorial

    NARCIS (Netherlands)

    Romão, Teresa; Nijholt, Anton; Cheok, Adrian David

    2015-01-01

    This special issue of the International Journal of Arts and Technology comprises a selection of papers from ACE 2012, the 9th International Conference on Advances in Computer Entertainment (Nijholt et al., 2012). ACE is the leading scientific forum for dissemination of cutting-edge research results

  18. Advances in computational design and analysis of airbreathing propulsion systems

    Science.gov (United States)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  19. Advanced computational tools for 3-D seismic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Glover, C.W.; Protopopescu, V.A. [Oak Ridge National Lab., TN (United States)] [and others]

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and to test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations and techniques that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  20. [Activities of Research Institute for Advanced Computer Science]

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  1. Service-Oriented Architecture for NVO and TeraGrid Computing

    Science.gov (United States)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.

  2. Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Kaushik; Murphy, Mark; Volkov, Vasily; Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Patterson, David; Shalf, John; Yelick, Katherine

    2008-08-22

    Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations -- a class of algorithms at the heart of many structured grid codes, including PDE solvers. We develop a number of effective optimization strategies, and build an auto-tuning environment that searches over our optimizations and their parameters to minimize runtime, while maximizing performance portability. To evaluate the effectiveness of these strategies we explore the broadest set of multicore architectures in the current HPC literature, including the Intel Clovertown, AMD Barcelona, Sun Victoria Falls, IBM QS22 PowerXCell 8i, and NVIDIA GTX280. Overall, our auto-tuning optimization methodology results in the fastest multicore stencil performance to date. Finally, we present several key insights into the architectural trade-offs of emerging multicore designs and their implications on scientific algorithm development.
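
    A toy version of the auto-tuning loop described above: time a small set of candidate block sizes for a 7-point 3-D stencil and keep the fastest. This pure-NumPy stand-in (hypothetical, far simpler than the paper's optimized kernels and search space) illustrates only the structure of the search:

    ```python
    # Toy auto-tuning loop: sweep candidate block sizes for a 7-point
    # 3-D stencil and keep the fastest. A NumPy stand-in illustrating
    # the search structure only, not the paper's optimized kernels.
    import time
    import numpy as np

    def stencil_blocked(u, out, b):
        n = u.shape[0]
        for i0 in range(1, n - 1, b):          # loop over cache blocks
            for j0 in range(1, n - 1, b):
                i1, j1 = min(i0 + b, n - 1), min(j0 + b, n - 1)
                c = u[i0:i1, j0:j1, 1:-1]
                out[i0:i1, j0:j1, 1:-1] = (
                    u[i0-1:i1-1, j0:j1, 1:-1] + u[i0+1:i1+1, j0:j1, 1:-1]
                    + u[i0:i1, j0-1:j1-1, 1:-1] + u[i0:i1, j0+1:j1+1, 1:-1]
                    + u[i0:i1, j0:j1, :-2] + u[i0:i1, j0:j1, 2:] - 6.0 * c)

    n = 128
    u, out = np.random.rand(n, n, n), np.zeros((n, n, n))
    best_b, best_t = None, float("inf")
    for b in (8, 16, 32, 64):                  # candidate block sizes
        t0 = time.perf_counter()
        stencil_blocked(u, out, b)
        t = time.perf_counter() - t0
        if t < best_t:
            best_b, best_t = b, t
    print(f"best block size: {best_b} ({best_t:.3f} s)")
    ```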

  3. 2014 National Workshop on Advances in Communication and Computing

    CERN Document Server

    Prasanna, S; Sarma, Kandarpa; Saikia, Navajit

    2015-01-01

    The present volume is a compilation of research work in computation, communication, vision sciences, device design, fabrication, upcoming materials and related process design. It is derived from selected manuscripts submitted to the 2014 National Workshop on Advances in Communication and Computing (WACC 2014), Assam Engineering College, Guwahati, Assam, India, which is emerging as a premier platform for discussion and dissemination of knowhow in this part of the world. The papers included in the volume are indicative of the recent thrust in computation, communications and emerging technologies. Certain recent advances in ZnO nanostructures for alternate energy generation provide emerging insights into an area that holds promise for the energy sector, including conservation and green technology. Similarly, scholarly contributions have focused on malware detection and related issues. Several contributions have focused on biomedical aspects, including contributions related to cancer detection using act...

  4. Memristive Computational Architecture of an Echo State Network for Real-Time Speech Emotion Recognition

    Science.gov (United States)

    2015-05-28

    ...recognition is simpler and requires less computational resources compared to other inputs such as facial expressions. The Berlin database of Emotional Speech... ...domains such as image and video analysis, anomaly detection, and speech recognition. In this research, a hardware architecture was explored for...

  5. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    Science.gov (United States)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year the motto was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research; Data Analysis: Algorithms and Tools; and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible, as well as to the program...

  6. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    Science.gov (United States)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments that directly impact the way PIC codes are implemented. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce the energy cost of data movement by using more and more cores on each compute node ("fat nodes") with reduced clock speeds that allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that can process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process multiple instructions on multiple data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and good vectorization (for multicore/manycore CPUs) to take full advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
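
    To make the data-layout point concrete, the following minimal sketch (in Python/NumPy purely for illustration; PICSAR itself is a Fortran framework) shows a structure-of-arrays particle push, the layout that lets vectorizing compilers and array kernels map the loop onto wide SIMD registers.

```python
import numpy as np

# Hypothetical illustration: structure-of-arrays (SoA) particle storage.
# Keeping each field in its own contiguous array (instead of an array of
# particle structs) is what allows the push loop below to be executed as
# "one arithmetic operator applied to many data" on SIMD hardware.
n = 1_000_000
x  = np.random.rand(n)           # particle positions
vx = np.zeros(n)                 # particle velocities
ex = np.random.rand(n)           # electric field gathered at each particle

q_over_m = -1.76e11              # electron charge-to-mass ratio (C/kg)
dt = 1e-12                       # time step (s)

# Vectorized particle push: each line is one wide SIMD-style operation.
vx += q_over_m * ex * dt
x  += vx * dt
```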

  7. A Conceptual Architecture for Adaptive Human-Computer Interface of a PT Operation Platform Based on Context-Awareness

    Directory of Open Access Journals (Sweden)

    Qing Xue

    2014-01-01

    Full Text Available We present a conceptual architecture for an adaptive human-computer interface of a PT operation platform based on context-awareness. This architecture will form the basis of the design for such an interface. The paper describes the components, key technologies, and working principles of the architecture. The critical content covers context-information modeling and processing, establishing relationships between contexts and interface design knowledge through adaptive knowledge reasoning, and implementing visualization of the adaptive interface with the aid of interface tool technology.

  8. A High Performance Computer Architecture for Embedded And/Or Multi-Computer Applications

    Science.gov (United States)

    1990-09-01

    ... the Intermetrics VHDL-1076 toolset and executes WM microcode instructions stored as bit patterns in a Unix text file. Similarly, the state of the model's ... memory is written to another Unix file at the end of an execution run. The model is implementation independent in that the WM's architecture is ... desired module utilization information for a particular implementation technology. The WM compiler translates Kernighan ...

  9. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    Science.gov (United States)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  10. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to use the Intel® Xeon Phi™ series products efficiently. To that end, it introduces the computing grammar, programming technology, and optimization methods for many-integrated-core (MIC) platforms, and also offers tips and tricks for actual use based on the authors' first-hand optimization experience. The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  11. JINR CICC in computational chemistry and nanotechnology problems: DL_POLY performance for different communication architectures

    Science.gov (United States)

    Dushanov, E.; Kholmurodov, Kh.; Aru, G.; Korenkov, V.; Smith, W.; Ohno, Y.; Narumi, T.; Morimoto, G.; Taiji, M.; Yasuoka, K.

    2009-05-01

    This report compares the performance of the DL_POLY general-purpose molecular dynamics simulation package on the LIT JINR computing cluster CICC under different communication systems. The comparison involved two cluster interconnect architectures: Gigabit Ethernet and InfiniBand. The code performance tests include a comparison of the CICC cluster with the special-purpose computer MDGRAPE-3, developed at RIKEN for high-speed acceleration of molecular dynamics (MD) simulations without a fixed cutoff. The DL_POLY benchmark covers the set of typical MD system simulations detailed below.

  12. Advanced sensor-computer technology for urban runoff monitoring

    Science.gov (United States)

    Yu, Byunggu; Behera, Pradeep K.; Ramirez Rochac, Juan F.

    2011-04-01

    The paper presents the project team's advanced sensor-computer sphere technology for real-time and continuous monitoring of wastewater runoff at the sewer discharge outfalls along the receiving water. This research significantly enhances and extends the previously proposed novel sensor-computer technology. This advanced technology offers new computation models for an innovative use of the sensor-computer sphere comprising accelerometer, programmable in-situ computer, solar power, and wireless communication for real-time and online monitoring of runoff quantity. This innovation can enable more effective planning and decision-making in civil infrastructure, natural environment protection, and water pollution related emergencies. The paper presents the following: (i) the sensor-computer sphere technology; (ii) a significant enhancement to the previously proposed discrete runoff quantity model of this technology; (iii) a new continuous runoff quantity model. Our comparative study on the two distinct models is presented. Based on this study, the paper further investigates the following: (1) energy-, memory-, and communication-efficient use of the technology for runoff monitoring; (2) possible sensor extensions for runoff quality monitoring.

  13. Molecular architectures based on pi-conjugated block copolymers for global quantum computation

    Energy Technology Data Exchange (ETDEWEB)

    Mujica Martinez, C A; Arce, J C [Universidad del Valle, Departamento de QuImica, A. A. 25360, Cali (Colombia); Reina, J H [Universidad del Valle, Departamento de Fisica, A. A. 25360, Cali (Colombia); Thorwart, M, E-mail: camujica@univalle.edu.c, E-mail: j.reina-estupinan@physics.ox.ac.u, E-mail: jularce@univalle.edu.c [Institut fuer Theoretische Physik IV, Heinrich-Heine-Universitaet Duesseldorf, 40225 Duesseldorf (Germany)

    2009-05-01

    We propose a molecular setup for the physical implementation of a barrier global quantum computation scheme based on the electron-doped pi-conjugated copolymer architecture of nine blocks PPP-PDA-PPP-PA-(CCH-acene)-PA-PPP-PDA-PPP (where each block is an oligomer). The physical carriers of information are electrons coupled through the Coulomb interaction, and the building block of the computing architecture is composed of three adjacent qubit systems in a quasi-linear arrangement, each of them allowing qubit storage, but with the central qubit exhibiting a third accessible state of electronic energy far from the qubits' transition energy. The third state is reached from one of the computational states by means of an on-resonance coherent laser field, and acts as a barrier mechanism for the direct control of qubit entanglement. Initial estimates of the spontaneous emission decay rates associated with the energy level structure yield a damping time of order 10^-7 s, which suggests a coupling to the environment that is not too strong. Our results offer an all-optical, scalable proposal for global quantum computing based on semiconducting pi-conjugated polymers.

  14. Advanced payload concepts and system architecture for emerging services in Indian National Satellite Systems

    Science.gov (United States)

    Balasubramanian, E. P.; Rao, N. Prahlad; Sarkar, S.; Singh, D. K.

    2008-07-01

    Over the past two decades the Indian Space Research Organization (ISRO) has developed and operationalized satellites to generate a large capacity of transponders for telecommunication services in the INSAT system. More powerful on-board transmitters were built to usher in direct-to-home broadcast services. These have transformed the satcom application scenario in the country. With the proliferation of satellite technology, a shift in the Indian market is witnessed today in terms of demand for new services like broadband Internet, interactive multimedia, etc. While it is imperative to pay attention to market trends, ISRO is also committed to bringing the benefits of technological advancement to the whole population, 70% of which dwells in rural areas. The initiatives already taken in space applications related to telemedicine, tele-education and Village Resource Centres need to be carried to a greater level of efficiency. These targets pose technological challenges in building a large-capacity and cost-effective satellite system. This paper addresses advanced payload concepts and system architecture, along with a trade-off analysis of design parameters, in proposing a new-generation satellite system capable of extending the reach of the Indian broadband infrastructure to individual users, educational and medical institutions, and enterprises for interactive services. This will be a strategic step in the evolution of the INSAT system to employ advanced technology that touches every section of the population.

  15. Recent advances in metal oxide-based electrode architecture design for electrochemical energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Jian; Liu, Jinping; Huang, Xintang [Institute of Nanoscience and Nanotechnology, Department of Physics, Central China Normal University, Wuhan, Hubei (China); Li, Yuanyuan [School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan (China); Yuan, Changzhou; Lou, Xiong Wen [School of Chemical and Biomedical Engineering, Nanyang Technological University (Singapore)

    2012-10-02

    Metal oxide nanostructures are promising electrode materials for lithium-ion batteries and supercapacitors because of their high specific capacity/capacitance, typically 2-3 times higher than that of carbon/graphite-based materials. However, their cycling stability and rate performance still cannot meet the requirements of practical applications. It is therefore urgent to improve their overall device performance, which depends not only on the development of advanced electrode materials but also, in large part, on how to design superior electrode architectures. In this article, we review recent advances in strategies for advanced metal oxide-based hybrid nanostructure design, with a focus on binder-free film/array electrodes. These binder-free electrodes, integrating the unique merits of each component, can provide a larger electrochemically active surface area, faster electron transport and superior ion diffusion, thus leading to substantially improved cycling and rate performance. Several recently emerged concepts of using ordered nanostructure arrays, synergetic core-shell structures, nanostructured current collectors, and flexible paper/textile electrodes are highlighted, pointing out advantages and challenges where appropriate. Some future electrode design trends and directions are also discussed. (Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

  16. Recent advances in metal oxide-based electrode architecture design for electrochemical energy storage.

    Science.gov (United States)

    Jiang, Jian; Li, Yuanyuan; Liu, Jinping; Huang, Xintang; Yuan, Changzhou; Lou, Xiong Wen David

    2012-10-02

    Metal oxide nanostructures are promising electrode materials for lithium-ion batteries and supercapacitors because of their high specific capacity/capacitance, typically 2-3 times higher than that of carbon/graphite-based materials. However, their cycling stability and rate performance still cannot meet the requirements of practical applications. It is therefore urgent to improve their overall device performance, which depends not only on the development of advanced electrode materials but also, in large part, on "how to design superior electrode architectures". In this article, we review recent advances in strategies for advanced metal oxide-based hybrid nanostructure design, with a focus on binder-free film/array electrodes. These binder-free electrodes, integrating the unique merits of each component, can provide a larger electrochemically active surface area, faster electron transport and superior ion diffusion, thus leading to substantially improved cycling and rate performance. Several recently emerged concepts of using ordered nanostructure arrays, synergetic core-shell structures, nanostructured current collectors, and flexible paper/textile electrodes are highlighted, pointing out advantages and challenges where appropriate. Some future electrode design trends and directions are also discussed. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Fog Computing and Edge Computing Architectures for Processing Data From Diabetes Devices Connected to the Medical Internet of Things.

    Science.gov (United States)

    Klonoff, David C

    2017-07-01

    The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. The devices in the IoT include such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network, closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process it near the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries, where laws may limit use or permit unwanted governmental access, and (5) lower costs, because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing, because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.
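
    As a toy illustration of the latency argument (not a medical algorithm; the thresholds and function names are invented), the sketch below shows an edge node classifying glucose readings locally and escalating only alerts, instead of round-tripping every sample through the cloud.

```python
# Illustrative only: local, low-latency processing at the network edge.
LOW_MG_DL, HIGH_MG_DL = 70, 180          # hypothetical alert thresholds

def process_reading(mg_dl):
    """Classify one glucose reading locally, without a cloud round trip."""
    if mg_dl < LOW_MG_DL:
        return "ALERT: low glucose"
    if mg_dl > HIGH_MG_DL:
        return "ALERT: high glucose"
    return None                          # in-range data can be batched later

for reading in (65.0, 110.0, 240.0):     # simulated sensor stream
    alert = process_reading(reading)
    if alert:
        print(alert, reading)            # stand-in for machine-to-human push
```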

  18. A Newer User Authentication, File encryption and Distributed Server Based Cloud Computing security architecture

    Directory of Open Access Journals (Sweden)

    Kawser Wazed Nafi

    2012-10-01

    Full Text Available The cloud computing platform gives people the opportunity to share resources, services and information among the people of the whole world. In a private cloud system, information is shared among the persons who are in that cloud, which makes it harder to secure or hide personal information. In this paper we propose a new security architecture for the cloud computing platform. It ensures secure communication and hides information from others. The model includes an AES-based file encryption system and an asynchronous key system for exchanging information or data. This structure can be easily applied to the main cloud computing features, e.g. PaaS, SaaS and IaaS. The model also includes a one-time password system for user authentication. Our work mainly deals with the security system of the whole cloud computing platform.
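
    As a rough, hedged illustration of the AES-based file encryption and one-time password ideas (not the authors' exact scheme), the sketch below uses the `cryptography` package's Fernet recipe, which layers AES-CBC encryption with an HMAC.

```python
import secrets
from cryptography.fernet import Fernet   # AES-CBC + HMAC "recipe"

# Key generation; in the paper's model the key would travel via the
# key-exchange system rather than live next to the data.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"any file contents to be stored in the cloud"
token = f.encrypt(plaintext)             # ciphertext kept on the server
assert f.decrypt(token) == plaintext     # only key holders can recover it

# A throwaway 6-digit code, as a stand-in for the one-time password step.
otp = f"{secrets.randbelow(10**6):06d}"
print("one-time password:", otp)
```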

  19. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    Science.gov (United States)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  20. Characterization of Proxy Application Performance on Advanced Architectures. UMT2013, MCB, AMG2013

    Energy Technology Data Exchange (ETDEWEB)

    Howell, Louis H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, Brian T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-09

    Three codes were tested at LLNL as part of a Tri-Lab effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. Teams from Sandia and Los Alamos tested proxy apps of their own. The focus in this report is on the LLNL codes UMT2013, MCB, and AMG2013. We present weak and strong MPI scaling results and studies of OpenMP efficiency on a large BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Results from three more speculative tests are also included: one that exploits NVRAM as extended memory, one that studies performance under a power bound, and one that illustrates the effects of changing the torus network mapping on BG/Q.

  1. Recent advances and challenges of fuel cell based power system architectures and control – A review

    DEFF Research Database (Denmark)

    Das, Vipin; Sanjeevikumar, Padmanaban; Venkitusamy, Karthikeyan

    2017-01-01

    Renewable energy generation is growing rapidly in the power sector and is widely used in two categories: grid-connected and standalone systems. This paper gives insights into fuel cell operation and the application of various power electronics systems. The fuel cell voltage decreases ... of utilization. To improve the reliability of fuel cell based power systems, this paper focuses on the integration of energy storage systems and advanced research methods. The control algorithms of the power architecture for a couple of well-known applications are discussed. Additionally, the paper ... addresses the processors suited to fuel cell applications on the basis of fuel cell characteristics. In this paper, the challenges in improving controller dynamics in fuel cell based applications are mentioned...

  2. A system architecture for an advanced Canadian wideband mobile satellite system

    Science.gov (United States)

    Takats, P.; Keelty, M.; Moody, H.

    In this paper, the system architecture for an advanced Canadian Ka-band geostationary mobile satellite system is described, utilizing hopping spot beams to support a 256 kbps wideband service for both N-ISDN and packet-switched interconnectivity to small briefcase-size portable and mobile terminals. An assessment is given of the technical feasibility of the satellite payload and terminal design in the post-2000 timeframe. The satellite payload includes regeneration and on-board switching to permit single-hop interconnectivity between mobile terminals. The mobile terminal requires antenna tracking and platform stabilization to ensure acquisition of the satellite signal. The potential user applications targeted for this wideband service include home office, multimedia, desktop (PC) videoconferencing, digital audio broadcasting, and single- and multi-user personal communications.

  3. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Science.gov (United States)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Described here is the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.

  4. Molecular computing towards a novel computing architecture for complex problem solving

    CERN Document Server

    Chang, Weng-Long

    2014-01-01

    This textbook introduces a concise approach to the design of molecular algorithms for students or researchers who are interested in dealing with complex problems. Through numerous examples and exercises, you will understand the main differences between molecular circuits and traditional digital circuits in manipulating the same problem, and you will also learn how to design a molecular algorithm for solving a problem from start to finish. The book starts with an introduction to the computational aspects of digital computers and molecular computing, data representation in molecular computing, molecular operations in molecular computing and number representation in molecular computing, and provides many molecular algorithms: to construct the parity generator and parity checker of error-detection codes in digital communication, to encode integers of different formats, single-precision and double-precision floating-point numbers, to implement addition and subtraction of unsigned integers, and to construct logic operations...

  5. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hendrickson, Bruce [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wade, Doug [National Nuclear Security Administration (NNSA), Washington, DC (United States). Office of Advanced Simulation and Computing and Institutional Research and Development; Hoang, Thuc [National Nuclear Security Administration (NNSA), Washington, DC (United States). Computational Systems and Software Environment

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  6. MIT Laboratory for Computer Science Progress Report 26. Final technical report, July 1988-June 1989

    Energy Technology Data Exchange (ETDEWEB)

    Dertouzos, M.L.

    1989-06-01

    Contents: advanced network architecture; clinical decision making; computer architecture group; computation structures; information mechanics; mercury; parallel processing; programming methodology; programming systems research; spoken language systems; systematic program development; theory of computation; theory of distributed systems.

  7. Activities of the Research Institute for Advanced Computer Science

    Science.gov (United States)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  8. How computer science can help in understanding the 3D genome architecture.

    Science.gov (United States)

    Shavit, Yoli; Merelli, Ivan; Milanesi, Luciano; Lio', Pietro

    2016-09-01

    Chromosome conformation capture techniques are producing a huge amount of data about the architecture of our genome. These data can provide us with a better understanding of the events that induce critical regulations of the cellular function from small changes in the three-dimensional genome architecture. Generating a unified view of spatial, temporal, genetic and epigenetic properties poses various challenges of data analysis, visualization, integration and mining, as well as of high performance computing and big data management. Here, we describe the critical issues of this new branch of bioinformatics, oriented at the comprehension of the three-dimensional genome architecture, which we call 'Nucleome Bioinformatics', looking beyond the currently available tools and methods, and highlight yet unaddressed challenges and the potential approaches that could be applied for tackling them. Our review provides a map for researchers interested in using computer science for studying 'Nucleome Bioinformatics', to achieve a better understanding of the biological processes that occur inside the nucleus. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  9. Advanced computer modeling techniques expand belt conveyor technology

    Energy Technology Data Exchange (ETDEWEB)

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  10. Soft computing in design and manufacturing of advanced materials

    Science.gov (United States)

    Cios, Krzysztof J.; Baaklini, George Y; Vary, Alex

    1993-01-01

    The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.

  11. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  12. Performance Evaluation of Load Balancing in Hierarchical Architecture for Grid Computing Service Middleware

    Directory of Open Access Journals (Sweden)

    Abderezak Touzene

    2011-03-01

    Full Text Available In this paper, we propose a hierarchical architecture for a grid computing service that allows grid users with limited resources to perform any kind of computation using grid-shared hardware and/or software resources. The term limited resources covers disk-based or diskless workstations, palmtops, and other mobile devices. The proposed grid computing service takes into account both the hardware and software requirements of the user's computing task. Our grid system needs to maximize the overall system throughput, minimize the user response time, and allow good grid resource utilization. To this end, we propose an adaptive task allocation and load balancing algorithm to achieve the desired goals. We developed a simulation model using the network simulator NS2 to evaluate the performance of our grid system, and also conducted experiments on our test-bed prototype. The performance evaluation measures confirm the good quality of our proposed architecture and load balancing algorithm (grid saturation level close to 90% of the grid load).
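
    A minimal sketch of hierarchical least-loaded allocation, assuming a two-level broker/cluster hierarchy and an outstanding-task load metric; this illustrates the flavor of adaptive allocation, not the paper's actual algorithm.

```python
# Illustrative two-level grid: a broker picks the least loaded cluster,
# then that cluster picks its least loaded node.
clusters = {
    "clusterA": [["a1", 3], ["a2", 1]],  # [node name, outstanding tasks]
    "clusterB": [["b1", 0], ["b2", 5]],
}

def allocate(task_id):
    # Level 1: cluster with the smallest aggregate load.
    cname = min(clusters, key=lambda c: sum(load for _, load in clusters[c]))
    # Level 2: least loaded node in that cluster; account for the new task.
    node = min(clusters[cname], key=lambda entry: entry[1])
    node[1] += 1
    return f"{task_id} -> {cname}/{node[0]}"

for t in ("t1", "t2", "t3", "t4"):
    print(allocate(t))
```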

  13. Advances in Cross-Cutting Ideas for Computational Climate Science

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Esmond [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Evans, Katherine J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Caldwell, Peter [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hoffman, Forrest M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jackson, Charles [Univ. of Texas, Austin, TX (United States); Kerstin, Van Dam [Brookhaven National Lab. (BNL), Upton, NY (United States); Leung, Ruby [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Martin, Daniel F. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ostrouchov, George [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tuminaro, Raymond [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ullrich, Paul [Univ. of California, Davis, CA (United States); Wild, S. [Argonne National Lab. (ANL), Argonne, IL (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-01-01

    This report presents results from the DOE-sponsored workshop titled "Advancing X-Cutting Ideas for Computational Climate Science," known as AXICCS, held on September 12-13, 2016 in Rockville, MD. The workshop brought together experts in climate science, computational climate science, computer science, and mathematics to discuss interesting but unsolved science questions regarding climate modeling and simulation, promoted collaboration among the diverse scientists in attendance, and brainstormed about possible tools and capabilities that could be developed to help address them. Several research opportunities that the group felt could advance climate science significantly emerged from discussions at the workshop. These include (1) process-resolving models to provide insight into important processes and features of interest and to inform the development of advanced physical parameterizations, (2) a community effort to develop and provide integrated model credibility, (3) including, organizing, and managing increasingly connected model components that increase model fidelity yet also complexity, and (4) treating Earth system models as one interconnected organism without numerical or data-based boundaries that limit interactions. The group also identified several cross-cutting advances in mathematics, computer science, and computational science that would be needed to enable one or more of these big ideas. It is critical to address the need for organized, verified, and optimized software, which enables the models to grow and continue to provide solutions in which the community can have confidence. Effectively utilizing the newest computer hardware enables simulation efficiency and the ability to handle output from increasingly complex and detailed models. This will be accomplished through hierarchical multiscale algorithms in tandem with new strategies for data handling, analysis, and storage. These big ideas and cross-cutting technologies for

  14. Advances in Cross-Cutting Ideas for Computational Climate Science

    Energy Technology Data Exchange (ETDEWEB)

    Ng, E.; Evans, K.; Caldwell, P.; Hoffman, F.; Jackson, C.; Van Dam, K.; Leung, R.; Martin, D.; Ostrouchov, G.; Tuminaro, R.; Ullrich, P.; Wild, S.; Williams, S.

    2017-01-01

    This report presents results from the DOE-sponsored workshop titled "Advancing X-Cutting Ideas for Computational Climate Science," known as AXICCS, held on September 12-13, 2016 in Rockville, MD. The workshop brought together experts in climate science, computational climate science, computer science, and mathematics to discuss interesting but unsolved science questions regarding climate modeling and simulation, promoted collaboration among the diverse scientists in attendance, and brainstormed about possible tools and capabilities that could be developed to help address them. Several research opportunities that the group felt could advance climate science significantly emerged from discussions at the workshop. These include (1) process-resolving models to provide insight into important processes and features of interest and to inform the development of advanced physical parameterizations, (2) a community effort to develop and provide integrated model credibility, (3) including, organizing, and managing increasingly connected model components that increase model fidelity yet also complexity, and (4) treating Earth system models as one interconnected organism without numerical or data-based boundaries that limit interactions. The group also identified several cross-cutting advances in mathematics, computer science, and computational science that would be needed to enable one or more of these big ideas. It is critical to address the need for organized, verified, and optimized software, which enables the models to grow and continue to provide solutions in which the community can have confidence. Effectively utilizing the newest computer hardware enables simulation efficiency and the ability to handle output from increasingly complex and detailed models. This will be accomplished through hierarchical multiscale algorithms in tandem with new strategies for data handling, analysis, and storage. These big ideas and cross-cutting technologies for enabling

  15. Developing a New Framework for Integration and Teaching of Computer Aided Architectural Design (CAAD) in Nigerian Schools of Architecture

    Science.gov (United States)

    Uwakonye, Obioha; Alagbe, Oluwole; Oluwatayo, Adedapo; Alagbe, Taiye; Alalade, Gbenga

    2015-01-01

    As a result of globalization of digital technology, intellectual discourse on what constitutes the basic body of architectural knowledge to be imparted to future professionals has been on the increase. This digital revolution has brought to the fore the need to review the already overloaded architectural education curriculum of Nigerian schools of…

  16. Bio-signal analysis system design with support vector machines based on cloud computing service architecture.

    Science.gov (United States)

    Shen, Chia-Ping; Chen, Wei-Hsin; Chen, Jia-Ming; Hsu, Kai-Ping; Lin, Jeng-Wei; Chiu, Ming-Jang; Chen, Chi-Huang; Lai, Feipei

    2010-01-01

    Today, many bio-signals such as electroencephalography (EEG) signals are recorded in digital format. Analyzing these digital bio-signals to extract useful health information is an emerging research area in biomedical engineering. In this paper, a bio-signal analyzing cloud computing architecture, called BACCA, is proposed. The system has been designed for seamless integration into the National Taiwan University Health Information System. Based on the concept of .NET Service-Oriented Architecture, the system integrates heterogeneous platforms, protocols, and applications. In this system, we add modern analytic functions such as approximate entropy and adaptive support vector machines (SVM). It is shown that the overall accuracy of EEG bio-signal analysis has increased to nearly 98% for different data sets, including open-source and clinical data sets.
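
    A toy sketch of the classification step, assuming scikit-learn and synthetic per-window feature vectors; the BACCA system itself is a .NET service architecture, and its features include approximate entropy.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))    # 200 EEG windows x 8 features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # made-up labels for the demo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```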

  17. An FPGA-Based Quantum Computing Emulation Framework Based on Serial-Parallel Architecture

    Directory of Open Access Journals (Sweden)

    Y. H. Lee

    2016-01-01

    Full Text Available Hardware emulation of quantum systems can mimic the parallel behaviour of quantum computations more efficiently than software simulation, thus allowing a higher processing speed-up. In this paper, an efficient hardware emulation method that employs a serial-parallel hardware architecture targeted for field programmable gate arrays (FPGA) is proposed. The quantum Fourier transform and Grover's search are chosen as case studies in this work since they form the core of many useful quantum algorithms. Experimental work shows that, with the proposed emulation architecture, a linear reduction in resource utilization is attained against the pipeline implementations proposed in prior works. The proposed work contributes to the formulation of a proof-of-concept baseline FPGA emulation framework with optimization of datapath designs that can be extended to emulate practical large-scale quantum circuits.
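
    For reference, the computation such an emulator reproduces fits in a few lines of dense linear algebra. The NumPy sketch below runs Grover's search on three qubits; an FPGA datapath computes the same amplitude updates, only with serial-parallel hardware rather than full matrices.

```python
import numpy as np

n = 3                                    # qubits; the state has 2**n amplitudes
N = 2**n
marked = 5                               # basis index the oracle flags

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)                  # Hadamard on every qubit

oracle = np.eye(N)
oracle[marked, marked] = -1              # phase flip on the marked state
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

state = Hn @ np.eye(N)[:, 0]             # uniform superposition from |000>
for _ in range(int(np.pi / 4 * np.sqrt(N))):         # ~optimal iteration count
    state = diffusion @ (oracle @ state)

print(np.argmax(np.abs(state) ** 2))     # -> 5 with high probability
```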

  18. Internet-based virtual computing environment (iVCE): Concepts and architecture

    Institute of Scientific and Technical Information of China (English)

    LU Xicheng; WANG Huaimin; WANG Ji

    2006-01-01

    Resources over the Internet have such intrinsic characteristics as growth, autonomy and diversity, which have brought many challenges to the efficient sharing and comprehensive utilization of these resources. This paper presents a novel approach for the construction of the Internet-based Virtual Computing Environment (iVCE), whose key mechanisms are on-demand aggregation and autonomic collaboration. The iVCE is built on the open infrastructure of the Internet and provides harmonious, transparent and integrated services for end-users and applications. The concept of iVCE is presented and its architectural framework is described by introducing three core concepts, i.e., autonomic element, virtual commonwealth and virtual executor. The connotations, functions and related key technologies of each component of the architecture are then analyzed in depth with a case study, iVCE for Memory.

  19. Designing and Operating Through Compromise: Architectural Analysis of CKMS for the Advanced Metering Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Duren, Mike [Sypris Electronics, LLC; Aldridge, Hal [ORNL; Abercrombie, Robert K [ORNL; Sheldon, Frederick T [ORNL

    2013-01-01

    Compromises attributable to the Advanced Persistent Threat (APT) highlight the necessity for constant vigilance. The APT provides a new perspective on security metrics (e.g., statistics-based cyber security) and quantitative risk assessments. We consider design principles and models/tools that provide high assurance for energy delivery systems (EDS) operations regardless of the state of compromise. Cryptographic keys must be securely exchanged, then held and protected on either end of a communications link. This is challenging for a utility with numerous substations that must secure the intelligent electronic devices (IEDs) that may comprise a complex control system of systems. For example, distribution and management of keys among the millions of intelligent meters within the Advanced Metering Infrastructure (AMI) is being implemented as part of the National Smart Grid initiative. Without a secure cryptographic key management system (CKMS), no cryptographic solution can be widely deployed to protect the EDS infrastructure from cyber-attack. We consider 1) how security modeling is applied to key management and cyber security concerns on a continuous basis from design through operation, 2) how trusted models and key management architectures greatly impact failure scenarios, and 3) how hardware-enabled trust is a critical element in detecting, surviving, and recovering from attack.
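
    One building block of such a CKMS is per-device key derivation, so that compromising one meter never exposes its neighbors. The sketch below is a hedged illustration using HKDF from the `cryptography` package; it shows the idea, not the AMI design itself.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_key = os.urandom(32)        # held in the utility's protected key store

def meter_key(meter_id):
    """Derive a unique 256-bit key bound to a single meter's identity."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,                 # a per-deployment salt could be added
        info=meter_id.encode(),    # binds the derived key to one device
    ).derive(master_key)

# Distinct devices get unrelated keys from the same master secret.
assert meter_key("meter-0001") != meter_key("meter-0002")
```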

  20. Advanced Laser Architecture for Two-Step Laser Tandem Mass Spectrometer

    Science.gov (United States)

    Fahey, Molly E.; Li, Steven X.; Yu, Anthony W.; Getty, Stephanie A.

    2016-01-01

    Future astrobiology missions will focus on planets with significant astrochemical or potential astrobiological features, such as small, primitive bodies and the icy moons of the outer planets that may host diverse organic compounds. These missions require advanced instrument techniques to fully and unambiguously characterize the composition of surface and dust materials. Laser desorption/ionization mass spectrometry (LDMS) is an emerging instrument technology for in situ mass analysis of non-volatile sample composition. A recent Goddard LDMS advancement is the two-step laser tandem mass spectrometer (L2MS) instrument, which addresses the need for future flight instrumentation to deconvolve complex organic signatures. The L2MS prototype uses a resonance-enhanced multi-photon laser ionization mechanism to selectively detect aromatic species from a more complex sample. By neglecting the aliphatic and inorganic mineral signatures in the two-step mass spectrum, the L2MS approach can provide both mass assignments and clues to structural information for an in situ investigation of non-volatile sample composition. In this paper we describe our development effort on a new laser architecture for the L2MS instrument that is based on the previously flown Lunar Orbiter Laser Altimeter (LOLA) laser transmitter. The laser provides two discrete mid-infrared wavelengths (2.8 µm and 3.4 µm) using monolithic optical parametric oscillators and an ultraviolet (UV) wavelength (266 nm) on a single laser bench, with a straightforward development path toward flight readiness.

  1. Mobile computation offloading architecture for mobile augmented reality, case study: Visualization of cetacean skeleton

    Directory of Open Access Journals (Sweden)

    Belen G. Rodriguez-Santana

    2016-01-01

    Full Text Available Augmented reality applications can serve as teaching tools in different contexts of use. Augmented reality applications on mobile devices can help provide tourist information on cities or give information during visits to museums. For example, during visits to natural history museums, augmented reality applications on mobile devices can let visitors interact with the skeleton of a whale. However, rendering heavy models can be computationally infeasible on devices with limited resources, such as smartphones or tablets. One solution to this problem is to use mobile computation offloading techniques. This work proposes a mobile computation offloading architecture for mobile augmented reality. The solution allows users to interact with a whale skeleton through an augmented reality application on mobile devices. Finally, tests were made to assess the optimization of the mobile device's resources when performing heavy rendering.
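
    The heart of such an architecture is the offloading decision itself. The sketch below is a hypothetical cost model (every rate and name is invented for illustration, and it assumes the heavy model is already cached on the server): render locally only when the estimated on-device time beats remote rendering plus frame download.

```python
def should_offload(tri_count,
                   local_tris_per_s=2e6,      # assumed phone rendering rate
                   remote_tris_per_s=5e7,     # assumed server rendering rate
                   frame_bytes=300_000,       # size of one rendered frame
                   downlink_bytes_per_s=2e6): # assumed network bandwidth
    """True if offloading is estimated to be faster than local rendering."""
    local_cost = tri_count / local_tris_per_s
    remote_cost = tri_count / remote_tris_per_s + frame_bytes / downlink_bytes_per_s
    return remote_cost < local_cost

print(should_offload(50_000))     # small mesh: render on the device (False)
print(should_offload(5_000_000))  # whale skeleton: offload it (True)
```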

  2. A coarse-grained reconfigurable computing architecture with loop self-pipelining

    Institute of Scientific and Technical Information of China (English)

    DOU Yong; WU GuiMing; XU JinHui; ZHOU XingMing

    2009-01-01

    Reconfigurable computing tries to strike a balance between the high efficiency of custom computing and the flexibility of general-purpose computing. This paper presents the implementation techniques in LEAP, a coarse-grained reconfigurable array, and proposes a speculative execution mechanism for dynamic loop scheduling with the goal of one iteration per cycle, together with implementation techniques to support decoupled synchronization between the token generator and the collector. This paper also introduces techniques for exploiting both intra- and inter-iteration data dependences, with the help of two instructions for special data reuse in loop-carried dependences. The experimental results show that the number of memory accesses reaches on average 3% of that of an RISC processor simulator with no memory optimization. In a practical image matching application, the LEAP architecture achieves about a 34-fold speedup in execution cycles compared with general-purpose processors.
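
    A loose software analogue of the decoupled token generator/collector, with a bounded queue standing in for the hardware buffer: the producer runs ahead of the consumer, so the two loop stages overlap instead of executing in lockstep. This is only a sketch of the synchronization idea, not the array itself.

```python
import queue
import threading

tokens = queue.Queue(maxsize=8)    # bounded buffer, like a hardware FIFO

def generator(n):
    for i in range(n):
        tokens.put(i * i)          # "issue" one iteration's token
    tokens.put(None)               # end-of-stream marker

def collector():
    total = 0
    while (tok := tokens.get()) is not None:
        total += tok               # "commit" the iteration's result
    print("sum of squares:", total)

t = threading.Thread(target=generator, args=(100,))
t.start()
collector()
t.join()
```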

  3. Apparatuses and Methods for Producing Runtime Architectures of Computer Program Modules

    Science.gov (United States)

    Abi-Antoun, Marwan Elia (Inventor); Aldrich, Jonathan Erik (Inventor)

    2013-01-01

    Apparatuses and methods for producing run-time architectures of computer program modules. One embodiment includes creating an abstract graph from the computer program module and from containment information corresponding to the computer program module, wherein the abstract graph has nodes including types and objects, and wherein the abstract graph relates an object to a type, and wherein for a specific object the abstract graph relates the specific object to a type containing the specific object; and creating a runtime graph from the abstract graph, wherein the runtime graph is a representation of the true runtime object graph, wherein the runtime graph represents containment information such that, for a specific object, the runtime graph relates the specific object to another object that contains the specific object.
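
    A toy sketch of the containment relation the claims describe (class names invented): walking live attribute references yields, for each object, the object that contains it, which is what a runtime object graph records.

```python
from dataclasses import dataclass, field

@dataclass
class Engine:
    cylinders: int = 6

@dataclass
class Car:
    engine: Engine = field(default_factory=Engine)

def runtime_graph(root):
    """Map object id -> containing object, from live references (no cycles)."""
    edges, stack = {}, [root]
    while stack:
        obj = stack.pop()
        for child in vars(obj).values():
            if hasattr(child, "__dict__"):   # follow object-valued fields only
                edges[id(child)] = obj
                stack.append(child)
    return edges

g = runtime_graph(Car())
print({hex(k): type(v).__name__ for k, v in g.items()})  # Engine contained by Car
```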

  4. Enhance the Performance of Virtual Machines by Using Cluster Computing Architecture

    Directory of Open Access Journals (Sweden)

    Chia-Ying Tseng

    2013-05-01

    Full Text Available Virtualization is a very important technology in the IaaS layer of cloud computing. Users consume computing resources in the form of virtual machines (VMs) provided by the system provider. A VM's performance depends on the physical machine hosting it, and a VM must be allocated all required resources when it is created. If no more resources can be allocated, the VM has to be moved to another physical machine via live migration to obtain higher performance. The overhead of a VM live migration is 30 to 90 seconds, so if many virtual machines need live migration, the total overhead becomes substantial. This paper presents how to use a cluster computing architecture to improve VM performance; it yields a 15% performance improvement compared with VM live migration.

  5. The advanced computational testing and simulation toolkit (ACTS)

    Energy Technology Data Exchange (ETDEWEB)

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  6. A Unified Computational Architecture for Preprocessing Visual Information in Space and Time.

    Science.gov (United States)

    Skrzypek, Josef

    1986-06-01

    The success of autonomous mobile robots depends on the ability to understand continuously changing scenery. Present techniques for analysis of images are not always suitable because in sequential paradigm, computation of visual functions based on absolute values of stimuli is inefficient. Important aspects of visual information are encoded in discontinuities of intensity, hence a representation in terms of relative values seems advantageous. We present the computing architecture of a massively parallel vision module which optimizes the detection of relative intensity changes in space and time. Visual information must remain constant despite variation in ambient light level or velocity of target and robot. Constancy can be achieved by normalizing motion and lightness scales. In both cases, basic computation involves a comparison of the center pixels with the context of surrounding values. Therefore, a similar computing architecture, composed of three functionally-different and hierarchically-arranged layers of overlapping operators, can be used for two integrated parts of the module. The first part maintains high sensitivity to spatial changes by reducing noise and normalizing the lightness scale. The result is used by the second part to maintain high sensitivity to temporal discontinuities and to compute relative motion information. Simulation results show that response of the module is proportional to contrast of the stimulus and remains constant over the whole domain of intensity. It is also proportional to velocity of motion limited to any small portion of the visual field. Uniform motion throughout the visual field results in constant response, independent of velocity. Spatial and temporal intensity changes are enhanced because computationally, the module resembles the behavior of a DOG function.
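
    A minimal sketch of the center/surround comparison, assuming SciPy: a difference-of-Gaussians response normalized by the surround, so the output tracks relative intensity (contrast) and stays constant when the ambient light level is scaled.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center=1.0, sigma_surround=3.0, eps=1e-6):
    """Center-surround response normalized by the surround (lightness scale)."""
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, sigma_surround)
    return (center - surround) / (surround + eps)

img = np.ones((64, 64))
img[24:40, 24:40] = 2.0                       # bright square on a gray field
resp = dog_response(img)

# Scaling the ambient light level leaves the response (nearly) unchanged.
print(np.allclose(resp, dog_response(10 * img), atol=1e-4))   # -> True
```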

  7. Computational experiment approach to advanced secondary mathematics curriculum

    CERN Document Server

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  8. Prospects for the Application of Nanotechnologies to the Computer System Architecture

    Directory of Open Access Journals (Sweden)

    M. Mazur

    2012-03-01

    Full Text Available Computer system architecture essentially influences the comfort of our everyday living. The developmental transition from electromechanical relays to vacuum tubes, and from transistors to integrated circuits, has significantly changed technological standards for the architecture of computer systems. Contemporary information technologies offer huge potential concerning the miniaturization of electronic circuits. Presently, a modern integrated circuit includes over a billion transistors, each of them smaller than 100 nm. Stepping beyond the symbolic 100 nm limit means that with the onset of the 21st century we have entered a new scientific area, the era of nanotechnologies. Along with the reduction of transistor dimensions, their operation speed and efficiency grow. However, the hitherto observed developmental path of classical electronics, with its focus on the miniaturization of transistors and memory cells, seems to be arriving at the limits of technological possibility because of technical problems as well as physical limitations related to the appearance of new nano-scale phenomena, such as quantum effects.

  9. Architecture for an advanced biomedical collaboration domain for the European paediatric cancer research community (ABCD-4-E).

    Science.gov (United States)

    Nitzlnader, Michael; Falgenhauer, Markus; Gossy, Christian; Schreier, Günter

    2015-01-01

    Today, progress in biomedical research often depends on large, interdisciplinary research projects and tailored information and communication technology (ICT) support. In the context of the European Network for Cancer Research in Children and Adolescents (ENCCA) project the exchange of data between data source (Source Domain) and data consumer (Consumer Domain) systems in a distributed computing environment needs to be facilitated. This work presents the requirements and the corresponding solution architecture of the Advanced Biomedical Collaboration Domain for Europe (ABCD-4-E). The proposed concept utilises public as well as private cloud systems, the Integrating the Healthcare Enterprise (IHE) framework and web-based applications to provide the core capabilities in accordance with privacy and security needs. The utility of crucial parts of the concept was evaluated by prototypic implementation. A discussion of the design indicates that the requirements of ENCCA are fully met. A whole system demonstration is currently being prepared to verify that ABCD-4-E has the potential to evolve into a domain-bridging collaboration platform in the future.

  10. Design and Implementation of Virtual Experiments for Computer Architecture Based on Simulators

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chen-xi; LIU Yi; LI Jiang-feng

    2012-01-01

    In China, many students are unable to do experiments in computer architecture courses, even though such experiments are very important in helping them understand key concepts. The reason is that the required hardware is too expensive. Besides, it is very difficult to do research studies with hardware experiments. In our course, we adopted an alternative way to deal with this problem: we used software simulators and designed a set of virtual experiments based on them, which are described in detail in this paper.
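
    A minimal sketch of what such a simulator-based virtual experiment can look like: a toy direct-mapped cache model that lets students count hits, misses, and conflict evictions without any hardware. The cache geometry and address trace are illustrative assumptions, not the simulators used in the course.

```python
# Toy direct-mapped cache simulator for a virtual experiment.
class DirectMappedCache:
    def __init__(self, num_lines=8, line_size=16):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // self.line_size      # which memory block
        index = block % self.num_lines      # which cache line it maps to
        tag = block // self.num_lines       # identity within that line
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[index] = tag          # evict and refill

cache = DirectMappedCache()
for addr in [0, 4, 16, 128, 0, 132, 16]:    # toy trace: 0 and 128 conflict
    cache.access(addr)
print(f"hits={cache.hits} misses={cache.misses}")   # hits=2 misses=5
```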

  11. Customized Architecture for Complex Routing Analysis: Case Study for the Convey Hybrid-Core Computer

    Science.gov (United States)

    2014-02-18

    circuits that can be reconfigured using a hardware description language such as Verilog. Current state... a Verilog-based design environment is used to implement a custom-designed computer architecture, or...

  12. Aspects of operating systems and software engineering with parallel computer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Foessmeier, R.; Ruede, U.; Zenger, C.

    1988-05-01

    Making efficient use of parallel computer architectures generally requires special programming techniques. Usually, non-standardized parallel constructs are added to a traditional programming language. This reduces program portability and adds extra difficulties to programming. Coarse-grain parallelism can be exploited by parallel processes. In this field the operating system UNIX - now in widespread use - offers easy-to-use means for describing parallelism, sufficient for basic process synchronisation and communication. The problem structuring required for this kind of parallelism often contributes to the versatility and clarity of the programs. As an example, the solution of a linear system by elimination is parallelized.
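
    A minimal sketch of this coarse-grain style, with Python's multiprocessing module standing in for UNIX process creation and communication: each pivot step of a forward elimination farms its independent row updates out to a pool of worker processes. The matrix is illustrative.

```python
# Coarse-grain parallel forward elimination: row updates within one pivot
# step are independent, so they are distributed across OS processes.
import numpy as np
from multiprocessing import Pool

def eliminate_row(args):
    row, pivot_row, factor = args
    return row - factor * pivot_row

def parallel_forward_elimination(A):
    A = A.astype(float).copy()
    n = len(A)
    with Pool() as pool:
        for k in range(n - 1):
            tasks = [(A[i], A[k], A[i, k] / A[k, k]) for i in range(k + 1, n)]
            for i, new_row in zip(range(k + 1, n), pool.map(eliminate_row, tasks)):
                A[i] = new_row
    return A

if __name__ == "__main__":
    A = np.array([[4.0, 1, 2], [2, 5, 1], [1, 1, 3]])
    print(np.round(parallel_forward_elimination(A), 3))  # upper triangular
```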

  13. Computation of the tip vortex flowfield for advanced aircraft propellers

    Science.gov (United States)

    Tsai, Tommy M.; Dejong, Frederick J.; Levy, Ralph

    1988-01-01

    The tip vortex flowfield plays a significant role in the performance of advanced aircraft propellers. The flowfield in the tip region is complex, three-dimensional and viscous with large secondary velocities. An analysis is presented using an approximate set of equations which contains the physics required by the tip vortex flowfield, but which does not require the resources of the full Navier-Stokes equations. A computer code was developed to predict the tip vortex flowfield of advanced aircraft propellers. A grid generation package was developed to allow specification of a variety of advanced aircraft propeller shapes. Calculations of the tip vortex generation on an SR3 type blade at high Reynolds numbers were made using this code and a parametric study was performed to show the effect of tip thickness on tip vortex intensity. In addition, calculations of the tip vortex generation on a NACA 0012 type blade were made, including the flowfield downstream of the blade trailing edge. Comparison of flowfield calculations with experimental data from an F4 blade was made. A user's manual was also prepared for the computer code (NASA CR-182178).

  14. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. A part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book has tutorial value and can serve as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is presented. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  15. Computational study of Wolff's law with trabecular architecture in the human proximal femur using topology optimization.

    Science.gov (United States)

    Jang, In Gwun; Kim, Il Yong

    2008-08-07

    In the field of bone adaptation, it is believed that the morphology of bone is affected by its mechanical loads, and that bone has a self-optimizing capability; this phenomenon is well known as Wolff's law of the transformation of bone. In this paper, we simulated trabecular bone adaptation in the human proximal femur using topology optimization and quantitatively investigated the validity of Wolff's law. Topology optimization iteratively distributes material in a design domain, producing an optimal layout or configuration, and it has been widely and successfully used in many engineering fields. We used a two-dimensional micro-FE model with 50 μm pixel resolution to represent the full trabecular architecture in the proximal femur, and performed topology optimization to study the trabecular morphological changes under three loading cases in daily activities. The simulation results were compared to the actual trabecular architecture in previous experimental studies. We discovered that there are strong similarities in trabecular patterns between the computational results and observed data in the literature. The results showed that the strain energy distribution of the trabecular architecture became more uniform during the optimization; from the viewpoint of structural topology optimization, this bone morphology may be considered an optimal structure. We also showed that the non-orthogonal intersections were constructed to support daily activity loadings in the sense of optimization, as opposed to Wolff's drawing.
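
    A minimal sketch of the kind of optimization step involved, assuming a SIMP-style penalized material model with an optimality-criteria update, applied to a toy chain of springs rather than the paper's micro-FE femur model. All stiffness values, the penalty, and the volume fraction are illustrative assumptions.

```python
# Optimality-criteria topology optimization on a toy model: four springs
# in series with differing base stiffness, unit end load, 50% material.
import numpy as np

k0 = np.array([4.0, 2.0, 1.0, 0.5])    # per-element base stiffness
p, vol_frac, eta = 3.0, 0.5, 0.3       # SIMP penalty, volume target, damping
x = np.full(4, vol_frac)               # start from a uniform density field

for _ in range(200):
    # Springs in series under a unit load: C = sum 1/(k0_i * x_i^p),
    # so the compliance sensitivity is dC/dx_i = -p / (k0_i * x_i^(p+1)).
    dC = -p / (k0 * x ** (p + 1))
    lo, hi = 1e-9, 1e9                 # bisect the Lagrange multiplier
    while hi / lo > 1.0 + 1e-12:
        lam = np.sqrt(lo * hi)
        x_new = np.clip(x * (-dC / lam) ** eta, 1e-3, 1.0)
        if x_new.sum() > vol_frac * len(x):
            lo = lam                   # too much material: raise the price
        else:
            hi = lam
    x = x_new

# Weaker springs dominate the series compliance, so they get more material.
print(np.round(x, 3))                  # ~[0.378, 0.45, 0.535, 0.636]
```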

  16. The Activity-Based Computing Project - A Software Architecture for Pervasive Computing Final Report

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind

    This report describes the results of the Activity-Based Computing (ABC) project granted by the Danish Strategic Research Council, grant no. #2106-04-0019. In summary, we conclude that the ABC project has been highly successful. Not only has it met all of its objectives and expected results..., but it has also been able to pull in additional resources and move beyond what was originally planned in the project. From a research perspective, all of the original research objectives of the project have been met and published in 4 journal articles, 13 peer-reviewed conference papers, and two book chapters..., documenting all of the project's four objectives. All of these publication venues are top-tier journals and conferences within computer science. From a business perspective, the project had the objective of incorporating relevant parts of the ABC technology into the products of Medical Insight, which has been...

  17. Computational architecture for image processing on a small unmanned ground vehicle

    Science.gov (United States)

    Ho, Sean; Nguyen, Hung

    2010-08-01

    Man-portable Unmanned Ground Vehicles (UGVs) have been fielded on the battlefield with limited computing power. This limitation constrains their use primarily to teleoperation control mode for clearing areas and bomb defusing. In order to extend their capability to include the reconnaissance and surveillance missions of dismounted soldiers, a separate processing payload is desired. This paper presents a processing architecture and the design details on the payload module that enables the PackBot to perform sophisticated, real-time image processing algorithms using data collected from its onboard imaging sensors including LADAR, IMU, visible, IR, stereo, and the Ladybug spherical cameras. The entire payload is constructed from currently available Commercial off-the-shelf (COTS) components including an Intel multi-core CPU and a Nvidia GPU. The result of this work enables a small UGV to perform computationally expensive image processing tasks that once were only feasible on a large workstation.

  18. VLSI architectures for computing multiplications and inverses in GF(2^m)

    Science.gov (United States)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.

    1983-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
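
    For concreteness, a software sketch of the finite-field operations these circuits implement, using GF(2^4) with the irreducible polynomial x^4 + x + 1 (an illustrative choice; the normal-basis hardware itself is not modeled here).

```python
# GF(2^m) multiplication and inversion, m = 4, reduction by x^4 + x + 1.
M, POLY = 4, 0b10011

def gf_mul(a, b):
    """Carry-less multiply with interleaved reduction modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << M):
            a ^= POLY
        b >>= 1
    return r

def gf_inv(a):
    """Inverse via Fermat: a^(2^m - 2), by square-and-multiply.
    In a normal basis, each squaring is just a cyclic shift of the
    coordinates, which is why that representation suits VLSI inverters."""
    result, power, e = 1, a, (1 << M) - 2   # e = 14 for m = 4
    while e:
        if e & 1:
            result = gf_mul(result, power)
        power = gf_mul(power, power)
        e >>= 1
    return result

for a in range(1, 1 << M):                  # verify a * a^(-1) == 1
    assert gf_mul(a, gf_inv(a)) == 1
print("all inverses verified in GF(2^4)")
```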

  19. VLSI architectures for computing multiplications and inverses in GF(2^m)

    Science.gov (United States)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.

    1985-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  20. A decoupled graph/computation data-driven architecture with variable-resolution actors

    Energy Technology Data Exchange (ETDEWEB)

    Evripidou, P. [University of Southern California, Marina Del Rey, CA (United States). Information Sciences Inst.]; Gaudiot, J.L. [University of Southern California, Los Angeles, CA (United States). Dept. of Electrical Engineering]

    1990-12-31

    This paper presents a hybrid multiprocessor architecture that combines the advantages of the dynamic data-flow principles of execution with those of the control-flow model of execution. Two major design ideas are utilized by the proposed model: asynchronous execution of graph and computation operations, and variable-resolution actors. The independence of the two main units of the machine allows an efficient implementation of functional/data-flow principles with conventional, mature technology. The compiler generates graphs with variable-sized actors in order to match the characteristics of the application to the target machine. For instance, vector actors are proposed for many aspects of scientific computing, while lower resolution (Compound Macro Actors) or, conversely, higher resolution (atomic instruction actors) is used for unvectorizable programs.
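
    A minimal sketch of the data-flow firing rule underlying such architectures: an actor fires as soon as tokens are present on all of its inputs, so execution order emerges from data availability rather than a program counter. The tiny interpreter and example graph are illustrative assumptions; the paper's variable-resolution actors would simply be coarser or finer functions in the same scheme.

```python
# Token-driven dataflow interpreter: actors fire when all inputs hold tokens.
from collections import defaultdict

class Actor:
    def __init__(self, name, inputs, func, outputs):
        self.name, self.inputs, self.func, self.outputs = name, inputs, func, outputs

def run_dataflow(actors, tokens):
    tokens = defaultdict(list, {k: list(v) for k, v in tokens.items()})
    fired = True
    while fired:
        fired = False
        for a in actors:
            if all(tokens[i] for i in a.inputs):            # firing rule
                args = [tokens[i].pop(0) for i in a.inputs]
                for out, val in zip(a.outputs, a.func(*args)):
                    tokens[out].append(val)
                fired = True
    return tokens

# (x + y) * (x - y), expressed as a graph rather than a sequential program.
actors = [
    Actor("add", ["x1", "y1"], lambda x, y: [x + y], ["s"]),
    Actor("sub", ["x2", "y2"], lambda x, y: [x - y], ["d"]),
    Actor("mul", ["s", "d"], lambda s, d: [s * d], ["out"]),
]
result = run_dataflow(actors, {"x1": [7], "y1": [3], "x2": [7], "y2": [3]})
print(result["out"])   # [40]
```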

  1. Integration of highly probabilistic sources into optical quantum architectures: perpetual quantum computation

    CERN Document Server

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2011-01-01

    In this paper we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts, we eliminate the need for on-demand, high-fidelity photon sources and detectors and replace them with the same device utilised to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining complete specificity of the structure and operation for a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing an arbitrarily deep 3D cluster to be prepared using a comparatively small number of photonic qubits, and consequently eliminating the need for high-frequency, deterministic photon sources.

  2. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response can best be simulated using Eulerian computational techniques, while structural behavior is best modeled using Lagrangian methods. Due to the different methodologies of the two computational techniques and code architecture requirements, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes it difficult or next to impossible to do coupled explosive/structure interaction simulations. Sandia National Laboratories has developed two techniques for solving this problem. The first is called Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method, comparable to Eulerian techniques, that is especially suited to treating the liquids and gases produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems and is especially suited to modeling problems involving the interaction of decoupled explosives with structures.
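
    A minimal sketch of the SPH principle named above: field quantities are kernel-weighted sums over neighboring particles, so no grid is required. The 1D cubic spline kernel and particle layout below are illustrative assumptions, not the PRONTO-3D/SPH implementation.

```python
# Gridless SPH density estimate: rho_i = sum_j m_j * W(x_i - x_j, h).
import numpy as np

def cubic_spline_1d(r, h):
    """Standard 1D cubic spline kernel with support radius 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                  # 1D normalization constant
    w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

# Equal-mass particles on a line: the summed density is ~uniform inside.
x = np.linspace(0.0, 1.0, 51)
mass, h = 1.0 / 51, 2.0 * (x[1] - x[0])
rho = np.array([np.sum(mass * cubic_spline_1d(xi - x, h)) for xi in x])
print(rho[20:31].round(3))                   # near-constant away from ends
```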

  3. Advanced Computer Science on Internal Ballistics of Solid Rocket Motors

    Science.gov (United States)

    Shimada, Toru; Kato, Kazushige; Sekino, Nobuhiro; Tsuboi, Nobuyuki; Seike, Yoshio; Fukunaga, Mihoko; Daimon, Yu; Hasegawa, Hiroshi; Asakawa, Hiroya

    In this paper we describe the development of a numerical simulation system, which we call “Advanced Computer Science on SRM Internal Ballistics (ACSSIB)”, for the purpose of improving the performance and reliability of solid rocket motors (SRM). The ACSSIB system consists of a casting simulation code for solid propellant slurry, a correlation database of the local burning rate of cured propellant in terms of local slurry flow characteristics, and a numerical code for the internal ballistics of SRMs, as well as relevant hardware. This paper describes mainly the objectives, the contents of this R&D, and the output for fiscal year 2008.

  4. Peer-to-peer architectures for exascale computing : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Donald W.

    2010-09-01

    The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these
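
    A minimal sketch of the fault-oblivious idea, assuming a Monte Carlo workload where node failures simply drop samples: the failures become one more contribution to the statistical error rather than a cause for a global restart. The failure rate and task counts are illustrative assumptions.

```python
# Fault-oblivious Monte Carlo: lost worker results only widen error bars.
import random

def worker(samples):
    if random.random() < 0.2:          # this node failed; its work is lost
        return 0, 0
    hits = sum(random.random()**2 + random.random()**2 <= 1.0
               for _ in range(samples))
    return hits, samples

random.seed(1)
hits = total = 0
for _ in range(1000):                  # 1000 unreliable "nodes"
    h, n = worker(1000)
    hits += h
    total += n
print(f"surviving samples: {total}, pi ~ {4 * hits / total:.4f}")
```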

  6. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests an 80,000x improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.

  8. Quantum Computation Based on Retarded and Advanced Propagation

    CERN Document Server

    Castagnoli, G C

    1997-01-01

    Computation is currently seen as a forward propagator that evolves (retards) a completely defined initial vector into a corresponding final vector. Initial and final vectors map the (logical) input and output of a reversible Boolean network respectively, whereas forward propagation maps a one-way propagation of logical implication, from input to output. Conversely, hard NP-complete problems are characterized by a two-way propagation of logical implication from input to output and vice versa, given that both are partly defined from the beginning. Logical implication can be propagated forward and backward in a computation by constructing the gate array corresponding to the entire reversible Boolean network and constraining output bits as well as input bits. The possibility of modeling the physical process undergone by such a network by using a retarded and advanced in time propagation scheme is investigated.

  9. Advanced intelligent computational technologies and decision support systems

    CERN Document Server

    Kountchev, Roumen

    2014-01-01

    This book offers a state-of-the-art collection covering themes related to Advanced Intelligent Computational Technologies and Decision Support Systems, which can be applied to fields like healthcare, assisting humans in solving problems. The book brings forward a wealth of ideas, algorithms and case studies on themes such as: intelligent predictive diagnosis; intelligent analysis of medical images; a new format for coding single and sequences of medical images; Medical Decision Support Systems; diagnosis of Down's syndrome; computational perspectives for electronic fetal monitoring; efficient compression of CT images; adaptive interpolation and halftoning for medical images; applications of artificial neural networks to real-life problem solving; the present state and perspectives of Electronic Healthcare Record Systems; adaptive approaches for noise reduction in sequences of CT images, etc.

  10. Computational methods of the Advanced Fluid Dynamics Model

    Energy Technology Data Exchange (ETDEWEB)

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To treat severe accidents in fast reactors more accurately, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A comparison of calculations with an isothermal tetralin/ammonia experiment is performed. We conclude that, with further development, significant improvements are possible in reliably calculating the progression of severe accidents.
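
    A minimal sketch of the fractional-step time integration named above, on a toy 1D advection-diffusion equation rather than AFDM's multi-field fluid model: each time step is split into an explicit advection substep and an implicit, unconditionally stable diffusion substep. The grid, coefficients, and periodic boundaries are illustrative assumptions.

```python
# Fractional-step (operator splitting) for 1D advection-diffusion.
import numpy as np

n, L, u, nu, dt = 100, 1.0, 1.0, 0.01, 0.002
dx = L / n
x = np.arange(n) * dx
T = np.exp(-200 * (x - 0.3) ** 2)            # initial pulse

# Implicit diffusion operator (I - dt*nu*D2) with periodic boundaries.
D2 = (np.roll(np.eye(n), 1, 0) - 2 * np.eye(n) + np.roll(np.eye(n), -1, 0)) / dx**2
A = np.eye(n) - dt * nu * D2

for _ in range(100):
    # Substep 1: explicit upwind advection (CFL = u*dt/dx = 0.2).
    T = T - u * dt / dx * (T - np.roll(T, 1))
    # Substep 2: implicit diffusion, stable for any dt.
    T = np.linalg.solve(A, T)

print(f"pulse peak moved to x ~ {x[np.argmax(T)]:.2f}")  # advected ~u*t = 0.2
```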

  11. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. In this way, computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  12. A novel sputtered Pd mesh architecture as an advanced electrocatalyst for highly efficient hydrogen production

    Science.gov (United States)

    de Lucas-Consuegra, Antonio; de la Osa, Ana R.; Calcerrada, Ana B.; Linares, José J.; Horwat, David

    2016-07-01

    This study reports the preparation, characterization and testing of a sputtered Pd mesh-like anode as an advanced electrocatalyst for H2 production from alkaline ethanol solutions in an Alkaline Membrane Electrolyzer (AEM). The Pd anodic catalyst is prepared by the magnetron sputtering technique on a microfiber carbon paper support. Scanning Electron Microscopy images reveal that this preparation technique covers the surface of the carbon microfibers exposed to the Pd target, leading to a continuous network that also maintains part of the original carbon paper macroporosity. This novel, organic-binder-free anodic architecture presents excellent electrochemical performance, with a maximum current density of 700 mA cm-2 at 1.3 V and, concomitantly, a large H2 production rate with a low energy requirement compared to water electrolysis. Potassium hydroxide emerges as the best electrolyte, whereas temperature exerts the expected promotional effect up to 90 °C. On the other hand, a 1 mol L-1 ethanol solution is enough to guarantee an efficient fuel supply without any mass transfer limitation. The proposed system is also shown to remain stable over 150 h of operation across five consecutive cycles, producing highly pure H2 (99.999%) at the cathode and potassium acetate as the main anodic product.

  13. Energy footprint of advanced dense numerical linear algebra using tile algorithms on multicore architectures

    KAUST Repository

    Dongarra, Jack

    2012-11-01

    We propose to study the impact on the energy footprint of two advanced algorithmic strategies in the context of high performance dense linear algebra libraries: (1) mixed precision algorithms with iterative refinement, which run at the peak performance of single precision floating-point arithmetic while achieving double precision accuracy, and (2) the tree reduction technique, which exposes more parallelism when factorizing tall and skinny matrices for solving overdetermined systems of linear equations or calculating the singular value decomposition. Integrated within the PLASMA library using tile algorithms, which will eventually supersede the block algorithms from LAPACK, both strategies further excel in performance in the presence of a dynamic task scheduler while targeting multicore architectures. Energy consumption measurements are reported along with parallel performance numbers on a dual-socket quad-core Intel Xeon as well as a quad-socket quad-core Intel Sandy Bridge chip, both providing component-based energy monitoring at all levels of the system, through the Power Pack framework and the Running Average Power Limit model, respectively. © 2012 IEEE.
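
    A minimal sketch of strategy (1), mixed precision with iterative refinement, using NumPy as a stand-in for the PLASMA tile algorithms (where the single-precision factors would be computed once and reused across refinement steps rather than recomputed):

```python
# Solve in float32, then recover float64 accuracy by iterative refinement.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)

A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)   # cheap low-precision solve

for _ in range(5):
    r = b - A @ x                                  # residual in double
    dx = np.linalg.solve(A32, r.astype(np.float32))
    x += dx.astype(np.float64)                     # correction in double

# Relative residual drops to near double-precision levels.
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```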

  14. The Activity-Based Computing Project - A Software Architecture for Pervasive Computing Final Report

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind

    done. Moreover, partly based on the research done in the ABC project, the company Cetrea A/S has been founded, which incorporates ABC concepts and technologies in its products. The concepts of activity-based computing have also been researched in cooperation with IBM Research, and the ABC project has... to delays in recruitment. This delay has not had any impact on the results obtained; on the contrary. From a research management point of view, the project has taught us several lessons, which are being incorporated into the management of current research projects at ITU. The research on the ABC concepts...

  15. Advances in parallel computer technology for desktop atmospheric dispersion models

    Energy Technology Data Exchange (ETDEWEB)

    Bian, X.; Ionescu-Niscov, S.; Fast, J.D. [Pacific Northwest National Lab., Richland, WA (United States)]; Allwine, K.J. [Allwine Environmental Serv., Richland, WA (United States)]

    1996-12-31

    Desktop models are models used by analysts with varied backgrounds to perform, for example, air quality assessment and emergency response activities. These models must be robust and well documented, and must have minimal, well-controlled user inputs and clear outputs. Existing coarse-grained parallel computers can provide significant increases in computation speed in desktop atmospheric dispersion modeling without considerable increases in hardware cost. This increased speed will allow significant improvements to be made in the scientific foundations of these applied models, in the form of more advanced diffusion schemes and better representation of the wind and turbulence fields. This is especially attractive for emergency response applications, where speed and accuracy are of utmost importance. This paper describes one particular application of coarse-grained parallel computer technology to a desktop complex-terrain atmospheric dispersion modeling system. By comparing performance characteristics of the coarse-grained parallel version of the model with the single-processor version, we demonstrate that applying coarse-grained parallel computer technology to desktop atmospheric dispersion modeling systems will allow us to address critical issues facing future requirements of this class of dispersion models.

  16. Computer code applicability assessment for the advanced Candu reactor

    Energy Technology Data Exchange (ETDEWEB)

    Wren, D.J.; Langman, V.J.; Popov, N.; Snell, V.G. [Atomic Energy of Canada Ltd (Canada)

    2004-07-01

    AECL Technologies, the 100%-owned US subsidiary of Atomic Energy of Canada Ltd. (AECL), is currently the proponent of a pre-licensing review of the Advanced Candu Reactor (ACR) with the United States Nuclear Regulatory Commission (NRC). A key focus topic for this pre-application review is NRC acceptance of the computer codes used in the safety analysis of the ACR. These codes have been developed, and their predictions compared against experimental results, over extended periods of time in Canada. These codes also underwent formal validation in the 1990s. In support of this formal validation effort, AECL has developed, implemented and currently maintains a Software Quality Assurance (SQA) program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. This paper discusses the SQA program used to develop, qualify and maintain the computer codes used in ACR safety analysis, including the current program underway to confirm the applicability of these computer codes for use in ACR safety analyses. (authors)

  17. A comprehensive zero-copy architecture for high performance distributed Data Acquisition over advanced network technologies for the CMS experiment

    CERN Document Server

    Bauer, Gerry; Branson, James; Bukowiec, Sebastian Czeslaw; Chaze, Olivier; Cittolin, Sergio; Coarasa, J. A; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino, R; Hartl, Christian; Holzner, Andre Georg; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Nunez-Barranco, C; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Spataru, Andrei Cristian; Stoeckli, Fabian; Sumorok, Konstanty

    2013-01-01

    This paper outlines a software architecture where zero-copy operations are used comprehensively at every processing point from the Application layer to the Physical layer. The proposed architecture is being used during feasibility studies on advanced networking technologies for the CMS experiment at CERN. The design relies on a homogeneous peer-to-peer message passing system, which is built around memory pool caches allowing efficient and deterministic latency handling of messages of any size through the different software layers. In this scheme portable distributed applications can be programmed to process input to output operations by mere pointer arithmetic and DMA operations only. The approach combined with the open fabric protocol stack (OFED) allows one to attain near wire-speed message transfer at application level. The architecture supports full portability of user applications by encapsulating the protocol details and network into modular peer transport services whereas a transparent replacement of t...
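
    A minimal sketch of the zero-copy principle described above, assuming Python's memoryview as a stand-in for the pointer arithmetic and DMA descriptors of the real C++ peer transport services: every layer re-slices views into one preallocated pool, so the payload itself is never copied.

```python
# Zero-copy message handling over a preallocated memory pool.
pool = bytearray(4096)                 # the message pool cache
view = memoryview(pool)

def write_message(offset, payload):
    view[offset:offset + len(payload)] = payload
    return view[offset:offset + len(payload)]   # a view, not a copy

def strip_header(msg, header_len=4):
    return msg[header_len:]            # still zero-copy: re-slicing the view

msg = write_message(0, b"HDR0hello, wire")
body = strip_header(msg)
print(bytes(body))                     # b'hello, wire'

pool[4] = ord(b"H")                    # mutating the pool is visible through
print(bytes(body))                     # every view: b'Hello, wire' -- no copies
```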

  18. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
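
    A minimal sketch of the weighted-sum step of such an analysis, applied after availability and integrity constraints have filtered the alternatives. The candidate architectures, figures, and weights below are illustrative assumptions, not the report's actual data.

```python
# Weighted-sum multi-criteria scoring of candidate architectures.
candidates = {                     # power [W], weight [kg], cost [$k]
    "dual-redundant bus":  (120, 14.0, 310),
    "triplex voting":      (180, 19.5, 420),
    "self-checking pairs": (150, 16.0, 350),
}
weights = {"power": 0.4, "weight": 0.25, "cost": 0.35}

def value(scores):
    # Normalize each criterion to the best (lowest) alternative, then weight.
    best = [min(c[i] for c in candidates.values()) for i in range(3)]
    norm = [best[i] / scores[i] for i in range(3)]
    return sum(w * n for w, n in zip(weights.values(), norm))

for name, scores in sorted(candidates.items(), key=lambda kv: -value(kv[1])):
    print(f"{name:22s} value = {value(scores):.3f}")
```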

  19. CaKernel – A Parallel Application Programming Framework for Heterogenous Computing Architectures

    Directory of Open Access Journals (Sweden)

    Marek Blazewicz

    2011-01-01

    Full Text Available With the recent advent of new heterogeneous computing architectures, there is still a lack of parallel problem-solving environments that can help scientists use hybrid supercomputers easily and efficiently. Many scientific simulations that use structured grids to solve partial differential equations in fact rely on stencil computations. Stencil computations have become crucial in solving many challenging problems in various domains, e.g., engineering or physics. Although many parallel stencil computing approaches have been proposed, in most cases they solve only particular problems. As a result, scientists struggle when it comes to implementing a new stencil-based simulation, especially on high performance hybrid supercomputers. In response to this need, we extend our previous work on CaCUDA, a parallel programming framework for CUDA, which now also supports OpenCL. We present CaKernel – a tool that simplifies the development of parallel scientific applications on hybrid systems. CaKernel is built on the highly scalable and portable Cactus framework. In the CaKernel framework, Cactus manages the inter-process communication via MPI while CaKernel manages the code running on Graphics Processing Units (GPUs) and the interactions between them. As a non-trivial test case we have developed a 3D CFD code to demonstrate the performance and scalability of the automatically generated code.
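
    A minimal sketch of the stencil pattern such frameworks generate code for: a 5-point Jacobi sweep for the 2D Laplace equation, with NumPy slicing standing in for the generated CUDA/OpenCL kernels. The grid size and boundary values are illustrative assumptions.

```python
# 5-point Jacobi stencil sweep for the 2D Laplace equation.
import numpy as np

u = np.zeros((64, 64))
u[0, :] = 1.0                                   # hot top edge

for _ in range(500):
    # Each interior point becomes the average of its four neighbors.
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])

print(f"interior mean after 500 sweeps: {u[1:-1, 1:-1].mean():.4f}")
```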

  20. Recent Advances in Computational Mechanics of the Human Knee Joint

    Directory of Open Access Journals (Sweden)

    M. Kazemi

    2013-01-01

    Full Text Available Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling.

  1. Recent advances in computational mechanics of the human knee joint.

    Science.gov (United States)

    Kazemi, M; Dabiri, Y; Li, L P

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling.

  2. Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation. Volume 3. Chapters 6-11

    Science.gov (United States)

    1989-01-20

    SA/TR-2/89 A003: Final report on computer algorithms and architectures for three-dimensional eddy-current nondestructive evaluation. [Only reference fragments are recoverable from the scanned record, including: Aho, A., Hopcroft, J., Ullman, J., The Design and Analysis of Computer Algorithms, Addison-Wesley Publishing Company, 1974; Anderson, B., Moore, J., Optimal...]

  3. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    Energy Technology Data Exchange (ETDEWEB)

    Reed, Daniel [University of Iowa]; Berzins, Martin [University of Utah]; Pennington, Robert; Sarkar, Vivek [Rice University]; Taylor, Valerie [Texas A&M University]

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  4. Block sparse Cholesky algorithms on advanced uniprocessor computers

    Energy Technology Data Exchange (ETDEWEB)

    Ng, E.G.; Peyton, B.W.

    1991-12-01

    As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
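
    A minimal sketch of the blocked left-looking organization discussed in the report, applied to a dense symmetric positive definite matrix for brevity (the sparse multifrontal and left-looking variants apply the same update pattern to supernodes). The block size and test matrix are illustrative assumptions.

```python
# Blocked left-looking Cholesky: each block column is updated by all
# previously factored block columns, then factored itself.
import numpy as np
from scipy.linalg import solve_triangular

def left_looking_cholesky(A, nb=32):
    n = A.shape[0]
    L = np.tril(A.astype(float))
    for j in range(0, n, nb):
        je = min(j + nb, n)
        # Left-looking update from all finished block columns.
        L[j:, j:je] -= L[j:, :j] @ L[j:je, :j].T
        # Factor the diagonal block, then solve for the blocks below it.
        L[j:je, j:je] = np.linalg.cholesky(L[j:je, j:je])
        if je < n:
            L[je:, j:je] = solve_triangular(L[j:je, j:je], L[je:, j:je].T,
                                            lower=True).T
    return L

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 100))
A = B @ B.T + 100 * np.eye(100)        # symmetric positive definite
L = left_looking_cholesky(A)
print(np.allclose(L @ L.T, A))         # True
```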

  5. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    Abduljabbar, Mustafa

    2017-07-31

    Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multipole Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.
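
    A minimal sketch of the P2P kernel analyzed above: the direct all-pairs potential sum between a target box and a source box, written in vectorized form (NumPy broadcasting here standing in for KNL's 512-bit vector units). The particle counts, charges, and softening term are illustrative assumptions.

```python
# Direct P2P kernel: phi_i = sum_j q_j / |x_i - y_j| over all sources j.
import numpy as np

def p2p(targets, sources, charges, eps=1e-12):
    diff = targets[:, None, :] - sources[None, :, :]   # (nt, ns, 3)
    r = np.sqrt((diff ** 2).sum(-1) + eps)             # softened distances
    return (charges / r).sum(axis=1)                   # accumulate per target

rng = np.random.default_rng(0)
src = rng.random((256, 3))        # one leaf box of source particles
trg = rng.random((64, 3))         # one leaf box of target particles
q = rng.random(256)
phi = p2p(trg, src, q)
print(phi.shape, phi[:3].round(3))
```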

  6. International conference on Advances in Intelligent Control and Innovative Computing

    CERN Document Server

    Castillo, Oscar; Huang, Xu

    2012-01-01

    In the lightning-fast world of intelligent control and cutting-edge computing, it is vitally important to stay abreast of developments that seem to follow each other without pause. This publication features the very latest and some of the very best current research in the field, with 32 revised and extended research articles written by prominent researchers in the field. Culled from contributions to the key 2011 conference Advances in Intelligent Control and Innovative Computing, held in Hong Kong, the articles deal with a wealth of relevant topics, from the most recent work in artificial intelligence and decision-supporting systems, to automated planning, modelling and simulation, signal processing, and industrial applications. Not only does this work communicate the current state of the art in intelligent control and innovative computing, it is also an illuminating guide to up-to-date topics for researchers and graduate students in the field. The quality of the contents is absolutely assured by the high pro...

  7. Toward a scalable quantum computing architecture with mixed species ion chains

    Science.gov (United States)

    Wright, John; Auchter, Carolyn; Chou, Chen-Kuan; Graham, Richard D.; Noel, Thomas W.; Sakrejda, Tomasz; Zhou, Zichao; Blinov, Boris B.

    2016-12-01

    We report on progress toward implementing mixed ion species quantum information processing for a scalable ion-trap architecture. Mixed species chains may help solve several problems with scaling ion-trap quantum computation to large numbers of qubits. Initial temperature measurements of linear Coulomb crystals containing barium and ytterbium ions indicate that the mass difference does not significantly impede cooling at low ion numbers. Average motional occupation numbers are estimated to be n̄ ≈ 130 quanta per mode for chains with small numbers of ions, which is within a factor of three of the Doppler limit for barium ions in our trap. We also discuss generation of ion-photon entanglement with barium ions with a fidelity of F ≥ 0.84, which is an initial step towards remote ion-ion coupling in a more scalable quantum information architecture. Further, we are working to implement these techniques in surface traps in order to exercise greater control over ion chain ordering and positioning.

  8. Starloc (Sandia TARget LOcation Computer): A special-purpose computer architecture for target location within an image

    Energy Technology Data Exchange (ETDEWEB)

    Napolitano, L.M. Jr.; Bryson, P.R.; Berry, K.R.; Klapp, S.R.; Leeper, J.E.; Redinbo, G.R.

    1988-01-01

    Starloc (Sandia TARget LOcation Computer) is a special-purpose computer designed for target location in an image. It is now under development and when completed will process two 256 pixel by 256 pixel input images per second, recognizing targets within them regardless of target variations such as rotation, range, brightness, and angle of view. Starloc's basic architecture consists of ten pipelined processing stages (eight for Fast Fourier Transform operations and two for pixel by pixel operations) arranged in a ring-like structure. Within each stage are a controller, image memory, address generator, and register file with parallel floating-point processors. Starloc is designed to be fault tolerant by including two hot standby stages that can be switched into the data path when other stages fail and by incorporating a comprehensive set of error checkers. Using currently available 10 MFLOPS (million floating-point operations per second) floating-point processors, Starloc will run at a sustained rate of 188 MFLOPS with 94% efficiency. At this rate, it is performing 36 complex 256 pixel by 256 pixel two-dimensional Fast Fourier Transforms per second. 5 refs., 7 figs.
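
    A quick arithmetic check of the stated throughput, assuming the conventional 5N log2 N flop count for a complex length-N FFT (an assumption; the exact Starloc operation mix is not given in the record):

```python
# Sanity-check 36 complex 256x256 2D FFTs/s against 188 MFLOPS sustained.
from math import log2

N = 256
fft_1d = 5 * N * log2(N)            # ~10,240 flops per length-256 FFT
fft_2d = 2 * N * fft_1d             # 256 row FFTs plus 256 column FFTs
rate = 36 * fft_2d                  # 36 transforms per second
print(f"{fft_2d/1e6:.2f} MFLOP per 2D FFT, {rate/1e6:.0f} MFLOPS sustained")
# -> ~5.24 MFLOP each, ~189 MFLOPS: consistent with the stated 188 MFLOPS.
```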

  9. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture

    Science.gov (United States)

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-01

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  10. A development architecture for serious games using BCI (brain computer interface) sensors.

    Science.gov (United States)

    Sung, Yunsick; Cho, Kyungeun; Um, Kyhyun

    2012-11-12

    Games that use brainwaves via brain-computer interface (BCI) devices to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce the development time and cost. However, it is desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe an architecture, authoring tools, and development process of the proposed methodology, and apply it to a game development approach for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories.

  11. A Development Architecture for Serious Games Using BCI (Brain Computer Interface) Sensors

    Directory of Open Access Journals (Sweden)

    Kyhyun Um

    2012-11-01

    Full Text Available Games that use brainwaves via brain–computer interface (BCI) devices to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce the development time and cost. However, it is desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe an architecture, authoring tools, and development process of the proposed methodology, and apply it to a game development approach for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories.

  12. From variability tolerance to approximate computing in parallel integrated architectures and accelerators

    CERN Document Server

    Rahimi, Abbas; Gupta, Rajesh K

    2017-01-01

    This book focuses on computing devices and their design at various levels to combat variability. The authors provide a review of key concepts with particular emphasis on timing errors caused by various variability sources. They discuss methods to predict and prevent, detect and correct, and finally conditions under which such errors can be accepted; they also consider their implications on cost, performance and quality. Coverage includes a comparative evaluation of methods for deployment across various layers of the system from circuits, architecture, to application software. These can be combined in various ways to achieve specific goals related to observability and controllability of the variability effects, providing means to achieve cross layer or hybrid resilience. · Covers challenges and opportunities in identifying microelectronic variability and the resulting errors at various layers in the system abstraction; · Enables readers to assess how various levels of circuit and system design can mitigate t...

  13. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
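
    The pattern described above - a vector load, a load-and-splat, and an accumulating multiply-add per partial product - can be sketched in plain NumPy. The sketch below illustrates only the data movement; it is not the patented mechanism or any particular vector ISA, and the names are hypothetical.

```python
# Illustrative NumPy sketch of the load / load-and-splat / multiply-add
# pattern for matrix multiplication described in the abstract.
import numpy as np

def matmul_splat(A, B):
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for j in range(n):                     # one output column at a time
        acc = np.zeros(m)                  # accumulator "vector register"
        for p in range(k):
            a_vec = A[:, p]                # vector load: column of first operand
            b_splat = np.full(m, B[p, j])  # load-and-splat: one element replicated
            acc += a_vec * b_splat         # multiply-add producing a partial product
        C[:, j] = acc                      # accumulated partial products
    return C

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
assert np.allclose(matmul_splat(A, B), A @ B)
```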

  14. JPL control/structure interaction test bed real-time control computer architecture

    Science.gov (United States)

    Briggs, Hugh C.

    1989-01-01

    The Control/Structure Interaction Program is a technology development program for spacecraft that exhibit interactions between the control system and structural dynamics. The program objectives include the development of new design concepts - such as active structures - and new tools - such as a combined structure and control optimization algorithm - and their verification in ground and possibly flight tests. A focus mission spacecraft was designed based upon a space interferometer and is the basis for design of the ground test article. The ground test bed objectives include verification of the spacecraft design concepts, the active structure elements and certain design tools such as the new combined structures and controls optimization tool. In anticipation of CSI technology flight experiments, the test bed control electronics must emulate the computation capacity and control architectures of space-qualifiable systems as well as the command and control networks that will be used to connect investigators with the flight experiment hardware. The Test Bed facility electronics were functionally partitioned into three units: a laboratory data acquisition system for structural parameter identification and performance verification; an experiment supervisory computer to oversee the experiment, monitor the environmental parameters and perform data logging; and a multilevel real-time control computing system. The design of the Test Bed electronics is presented along with hardware and software component descriptions. The system should break new ground in experimental control electronics and is of interest to anyone working in the verification of control concepts for large structures.

  15. SciDAC Advances and Applications in Computational Beam Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ryne, R.; Abell, D.; Adelmann, A.; Amundson, J.; Bohn, C.; Cary, J.; Colella, P.; Dechow, D.; Decyk, V.; Dragt, A.; Gerber, R.; Habib, S.; Higdon, D.; Katsouleas, T.; Ma, K.-L.; McCorquodale, P.; Mihalcea, D.; Mitchell, C.; Mori, W.; Mottershead, C.T.; Neri, F.; Pogorelov, I.; Qiang, J.; Samulyak, R.; Serafini, D.; Shalf, J.; Siegerist, C.; Spentzouris, P.; Stoltz, P.; Terzic, B.; Venturini, M.; Walstrom, P.

    2005-06-26

    SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators--which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook--are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this poster we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area in regard to algorithm development, software development, and applications.

  16. SciDAC advances and applications in computational beam dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ryne, R [Lawrence Berkeley National Laboratory (United States); Abell, D [Tech-X Corporation (United States); Adelmann, A [Paul Scherrer Institute, (Switzerland); Amundson, J [Fermi National Accelerator Laboratory (United States); Bohn, C [Fermi National Accelerator Laboratory (United States); Cary, J [Tech-X Corporation (United States); Colella, P [Lawrence Berkeley National Laboratory (United States); Dechow, D [Tech-X Corporation (United States); Decyk, V [University of California at Los Angeles (United States); Dragt, A [University of Maryland (United States); Gerber, R [Lawrence Berkeley National Laboratory (United States); Habib, S [Los Alamos National Laboratory (United States); Higdon, D [Los Alamos National Laboratory (United States); Katsouleas, T [University of Southern California (United States); Ma, K-L [University of California at Davis (United States); McCorquodale, P [Lawrence Berkeley National Laboratory (United States); Mihalcea, D [Northern Illinois University (United States); Mitchell, C [University of Maryland (United States); Mori, W [University of California at Los Angeles (United States); Mottershead, C T [Los Alamos National Laboratory (United States); Neri, F [Los Alamos National Laboratory (United States); Pogorelov, I [Lawrence Berkeley National Laboratory (United States); Qiang, J [Lawrence Berkeley National Laboratory (United States); Samulyak, R [Brookhaven National Laboratory (United States); Serafini, D [Lawrence Berkeley National Laboratory (United States); Shalf, J [Lawrence Berkeley National Laboratory (United States); Siegerist, C [Lawrence Berkeley National Laboratory (United States); Spentzouris, P [Fermi National Accelerator Laboratory (United States); Stoltz, P [Tech-X Corporation (United States); Terzic, B [Northern Illinois University (United States); Venturini, M [Lawrence Berkeley National Laboratory (United States); Walstrom, P [Los Alamos National Laboratory (United States)

    2005-01-01

    SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators-which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook-are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this paper we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area in regard to algorithm development, software development, and applications.

  17. Computational modeling, optimization and manufacturing simulation of advanced engineering materials

    CERN Document Server

    2016-01-01

    This volume presents recent research work focused on the development of adequate theoretical and numerical formulations to describe the behavior of advanced engineering materials. Particular emphasis is devoted to applications in the fields of biological tissues, phase-changing and porous materials, polymers, and to micro/nano scale modeling. Sensitivity analysis and gradient and non-gradient based optimization procedures are involved in many of the chapters, aiming at the solution of constitutive inverse problems and parameter identification. All these relevant topics are presented by experienced international, inter-institutional research teams, resulting in a high-level compilation. The book is a valuable research reference for scientists, senior undergraduate and graduate students, as well as for engineers acting in the area of computational material modeling.

  18. Advancements in Violin-Related Human-Computer Interaction

    DEFF Research Database (Denmark)

    Overholt, Daniel

    2014-01-01

    Finesse is required while performing with many traditional musical instruments, as they are extremely responsive to human inputs. The violin is specifically examined here, as it excels at translating a performer’s gestures into sound in manners that evoke a wide range of affective qualities. […] of human intelligence and emotion is at the core of the Musical Interface Technology Design Space, MITDS. This is a framework that endeavors to retain and enhance such traits of traditional instruments in the design of interactive live performance interfaces. Utilizing the MITDS, advanced Human-Computer Interaction technologies for the violin are developed in order to allow musicians to explore new methods of creating music. Through this process, the aim is to provide musicians with control systems that let them transcend the interface itself, and focus on musically compelling performances.

  19. ''Beauty of Wholeness and Beauty of Partiality.'' New Terms Defining the Concept of Beauty in Architecture in Terms of Sustainability and Computer Aided Design

    Science.gov (United States)

    Farid, Ayman A.; Zaghloul, Weaam M.; Dewidar, Khaled M.

    2014-01-01

    The great shift towards sustainability and computer aided design in the field of architecture caused a remarkable change in architectural philosophy; new aspects of beauty and aesthetic values are being introduced, and traditional definitions of beauty cannot fully cover these aspects, which causes a gap between criticism of new architecture works and…

  20. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  1. Reliability of an interactive computer program for advance care planning.

    Science.gov (United States)

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson Formula 20 [KR-20] = 0.83-0.95 and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time.
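
    For readers unfamiliar with the internal-consistency index reported above, the following sketch computes the Kuder-Richardson Formula 20 statistic from a subjects-by-items matrix of dichotomous responses; the data here are synthetic stand-ins, not the study's.

```python
# Hedged sketch of the Kuder-Richardson Formula 20 (KR-20) statistic:
# KR20 = k/(k-1) * (1 - sum(p*q) / var(total scores)), for k 0/1 items.
import numpy as np

def kr20(responses):
    k = responses.shape[1]                         # number of dichotomous items
    p = responses.mean(axis=0)                     # proportion endorsing each item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

rng = np.random.default_rng(0)
demo = (rng.random((24, 10)) > 0.4).astype(int)    # 24 participants, 10 items
print(round(kr20(demo), 2))
```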

  2. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
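
    The contrast drawn above between safety factors and probabilistic assessment can be made concrete with a minimal Monte Carlo sketch: rather than hiding uncertainty behind a factor, one estimates the failure probability P(load > resistance) directly. The distributions and parameters below are illustrative assumptions only.

```python
# Minimal Monte Carlo estimate of a single-mode failure probability:
# failure occurs when the load effect S exceeds the resistance R.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
R = rng.normal(loc=300.0, scale=30.0, size=n)   # resistance (e.g., MPa), assumed
S = rng.normal(loc=200.0, scale=40.0, size=n)   # load effect, assumed

pf = np.mean(S > R)                             # failure probability estimate
print(f"P(failure) ~ {pf:.2e}")                 # ~2.3e-2 for these parameters
```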

  3. Time-recursive computation and real-time parallel architectures, with application on the Modulated Lapped Transform

    Science.gov (United States)

    Frantzeskakis, Emmanuel N.; Baras, John S.; Liu, Kuo Juey R.

    1993-11-01

    In this paper, we establish an architectural framework for parallel time-recursive computation. We consider a class of linear operators that consists of the discrete time, time invariant, compactly supported, but otherwise arbitrary kernel functions. We specify the properties of the linear operators that can be implemented efficiently in a time-recursive way. Based on these properties, we develop a routine that produces a time-recursive architectural implementation for a given operator. This routine is instructive for the design of a CAD tool that will facilitate the architecture derivation. Using this background, we design an architecture for the Modulated Lapped Transform (commonly called Modified Discrete Cosine Transform), which has linear cost in operator counts.
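
    The paper's MLT architecture itself is not reproduced here, but the time-recursive principle it builds on can be illustrated on a simpler kernel, the sliding DFT: each incoming sample updates every transform bin in O(1), instead of recomputing an O(N log N) FFT per shift, via X_k ← (X_k − x[n−N] + x[n])·e^{j2πk/N}.

```python
# Time-recursive (sliding) DFT: a minimal sketch of the principle, not the
# paper's Modulated Lapped Transform architecture.
import numpy as np

def sliding_dft(x, N):
    k = np.arange(N)
    twiddle = np.exp(2j * np.pi * k / N)
    X = np.fft.fft(x[:N])                     # initialize on the first window
    outputs = [X.copy()]
    for n in range(N, len(x)):
        X = (X - x[n - N] + x[n]) * twiddle   # O(N) update per new sample
        outputs.append(X.copy())
    return outputs

x = np.random.rand(64)
ref = np.fft.fft(x[-32:])                     # direct FFT of the final window
assert np.allclose(sliding_dft(x, 32)[-1], ref)
```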

  4. Creating science-driven computer architecture: A new path to scientific leadership

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; McCurdy, C. William; Kramer, T.C.; Stevens, Rick; McCoy,Mike; Seager, Mark; Zacharia, Thomas; Bair, Ray; Studham, Scott; Camp, William; Leland, Robert; Morrison, John; Feiereisen, William

    2003-05-16

    We believe that it is critical for the future of high end computing in the United States to bring into existence a new class of computational capability that is optimal for science. In recent years scientific computing has increasingly become dependent on hardware that is designed and optimized for commercial applications. Science in this country has greatly benefited from the improvements in computers that derive from advances in microprocessors following Moore's Law, and a strategy of relying on machines optimized primarily for business applications. However within the last several years, in part because of the challenge presented by the appearance of the Japanese Earth Simulator, the sense has been growing in the scientific community that a new strategy is needed. A more aggressive strategy than reliance only on market forces driven by business applications is necessary in order to achieve a better alignment between the needs of scientific computing and the platforms available. The United States should undertake a program that will result in scientific computing capability that durably returns the advantage to American science, because doing so is crucial to the country's future. Such a strategy must also be sustainable. New classes of computer designs will not only revolutionize the power of supercomputing for science, but will also affect scientific computing at all scales. What is called for is the opening of a new frontier of scientific capability that will ensure that American science is greatly enabled in its pursuit of research in critical areas such as nanoscience, climate prediction, combustion, modeling in the life sciences, and fusion energy, as well as in meeting essential needs for national security. In this white paper we propose a strategy for accomplishing this mission, pursuing different directions of hardware development and deployment, and establishing a highly capable networking and grid infrastructure connecting these platforms to

  5. Architectures, Concepts and Technologies for Service Oriented Computing : proceedings of ACT4SOC 2010, 4th International Workshop on Architectures, Concepts and Technologies for Service Oriented Computing in conjunction with ICSOFT 2010, Athens, Greece, July 2010

    NARCIS (Netherlands)

    Sinderen, van Marten; Sapkota, Brahmananda

    2010-01-01

    This volume contains the proceedings of the Fourth International Workshop on Architectures, Concepts and Technologies for Service Oriented Computing (ACT4SOC 2010), held on July 23 in Athens, Greece, in conjunction with the Fourth International Conference on Software and Data Technologies (ICSOFT 2010).

  6. TerraFERMA: Harnessing Advanced Computational Libraries in Earth Science

    Science.gov (United States)

    Wilson, C. R.; Spiegelman, M.; van Keken, P.

    2012-12-01

    Many important problems in Earth sciences can be described by non-linear coupled systems of partial differential equations. These "multi-physics" problems include thermo-chemical convection in Earth and planetary interiors, interactions of fluids and magmas with the Earth's mantle and crust and coupled flow of water and ice. These problems are of interest to a large community of researchers but are complicated to model and understand. Much of this complexity stems from the nature of multi-physics where small changes in the coupling between variables or constitutive relations can lead to radical changes in behavior, which in turn affect critical computational choices such as discretizations, solvers and preconditioners. To make progress in understanding such coupled systems requires a computational framework where multi-physics problems can be described at a high-level while maintaining the flexibility to easily modify the solution algorithm. Fortunately, recent advances in computational science provide a basis for implementing such a framework. Here we present the Transparent Finite Element Rapid Model Assembler (TerraFERMA), which leverages several advanced open-source libraries for core functionality. FEniCS (fenicsproject.org) provides a high level language for describing the weak forms of coupled systems of equations, and an automatic code generator that produces finite element assembly code. PETSc (www.mcs.anl.gov/petsc) provides a wide range of scalable linear and non-linear solvers that can be composed into effective multi-physics preconditioners. SPuD (amcg.ese.ic.ac.uk/Spud) is an application neutral options system that provides both human and machine-readable interfaces based on a single xml schema. Our software integrates these libraries and provides the user with a framework for exploring multi-physics problems. A single options file fully describes the problem, including all equations, coefficients and solver options. Custom compiled applications are
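
    As a flavor of the high-level weak-form language mentioned above, the following is a minimal Poisson declaration in the legacy FEniCS (dolfin) Python API; it is a generic example, not a TerraFERMA input file, and running it assumes a FEniCS installation.

```python
# A minimal weak-form declaration in the legacy FEniCS (dolfin) Python API,
# illustrating the high-level PDE description layer TerraFERMA builds on.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    Constant, DirichletBC, Function, inner, grad, dx, solve)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "Lagrange", 1)
u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)

a = inner(grad(u), grad(v)) * dx        # bilinear form (weak Laplacian)
L = f * v * dx                          # linear form (source term)
bc = DirichletBC(V, Constant(0.0), "on_boundary")

uh = Function(V)
solve(a == L, uh, bc)                   # assemble and solve (PETSc backend)
```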

  7. Control bandwidth improvements in GRAVITY fringe tracker by switching to a synchronous real time computer architecture

    Science.gov (United States)

    Abuter, Roberto; Dembet, Roderick; Lacour, Sylvestre; di Lieto, Nicola; Woillez, Julien; Eisenhauer, Frank; Fedou, Pierre; Phan Duc, Than

    2016-08-01

    The new VLTI (Very Large Telescope Interferometer) [1] instrument GRAVITY [5,22,23] is equipped with a fringe tracker [16] able to stabilize the K-band fringes on six baselines at the same time. It has been designed to achieve, for average seeing conditions, a residual OPD (Optical Path Difference) lower than 300 nm with objects brighter than K = 10. The control loop implementing the tracking is composed of a four-stage real-time computer system comprising: a sensor, where the detector pixels are read in and the OPD and GD (Group Delay) are calculated; a controller, receiving the computed sensor quantities and producing commands for the piezo actuators; a concentrator, which combines the OPD commands with the real-time tip/tilt corrections and offloads them to the piezo actuator; and finally a Kalman [15] parameter estimator. This last stage is used to monitor measurements over a window of a few seconds and estimate new values for the main Kalman [15] control loop parameters. The hardware and software implementation of this design runs asynchronously and connects the four computers for data transfer via the Reflective Memory Network [3]. With the purpose of improving the performance of the GRAVITY [5,23] fringe-tracking [16,22] control loop, a deviation from the standard asynchronous communication mechanism has been proposed and implemented. This new scheme operates the four independent real-time computers involved in the tracking loop synchronously, using the Reflective Memory Interrupts [2] as the coordination signal. This synchronous mechanism reduced the total pure delay of the loop from 3.5 ms to 2.0 ms, which translates into better stabilization of the fringes, as the bandwidth of the system is substantially improved. This paper explains in detail the real-time architecture of the fringe tracker in both its asynchronous and synchronous implementations, and the improvements achieved in reducing the delay via this mechanism will be presented.
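
    The bandwidth gain from trimming pure delay can be seen with a back-of-envelope calculation: a pure delay τ contributes a phase lag of 360·f·τ degrees at frequency f, so for a fixed phase-lag budget the admissible crossover frequency scales as 1/τ. The 45-degree budget below is an assumption for illustration, not a GRAVITY design number.

```python
# Back-of-envelope sketch: less pure delay means more control bandwidth.
# A pure delay tau adds a phase lag of 360*f*tau degrees at frequency f.
budget_deg = 45.0                       # assumed phase-lag budget for the delay

for tau in (3.5e-3, 2.0e-3):            # loop delays from the paper, in seconds
    f_max = budget_deg / (360.0 * tau)  # f such that 360*f*tau equals the budget
    print(f"tau = {tau * 1e3:.1f} ms -> crossover <= {f_max:5.1f} Hz")
# tau = 3.5 ms -> ~35.7 Hz;  tau = 2.0 ms -> ~62.5 Hz
```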

  8. Advances in computer technology: impact on the practice of medicine.

    Science.gov (United States)

    Groth-Vasselli, B; Singh, K; Farnsworth, P N

    1995-01-01

    Advances in computer technology provide a wide range of applications which are revolutionizing the practice of medicine. The development of new software for the office creates a web of communication among physicians, staff members, health care facilities and associated agencies. This provides the physician with the prospect of a paperless office. At the other end of the spectrum, the development of 3D work stations and software based on computational chemistry permits visualization of protein molecules involved in disease. Computer assisted molecular modeling has been used to construct working 3D models of lens alpha-crystallin. The 3D structure of alpha-crystallin is basic to our understanding of the molecular mechanisms involved in lens fiber cell maturation, stabilization of the inner nuclear region, the maintenance of lens transparency and cataractogenesis. The major component of the high molecular weight aggregates that occur during cataractogenesis is alpha-crystallin subunits. Subunits of alpha-crystallin occur in other tissues of the body. In the central nervous system accumulation of these subunits in the form of dense inclusion bodies occurs in pathological conditions such as Alzheimer's disease, Huntington's disease, multiple sclerosis and toxoplasmosis (Iwaki, Wisniewski et al., 1992), as well as neoplasms of astrocyte origin (Iwaki, Iwaki, et al., 1991). Also cardiac ischemia is associated with an increased alpha B synthesis (Chiesi, Longoni et al., 1990). On a more global level, the molecular structure of alpha-crystallin may provide information pertaining to the function of small heat shock proteins, hsp, in maintaining cell stability under the stress of disease.

  9. Neuromorphic Computing, Architectures, Models, and Applications. A Beyond-CMOS Approach to Future Computing, June 29-July 1, 2016, Oak Ridge, TN

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Schuman, Catherine [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hylton, Todd [Brain Corporation, San Diego, CA (United States); Li, Hai [Univ. of Pittsburgh, PA (United States); Pino, Robinson [US Dept. of Energy, Washington, DC (United States)

    2016-12-31

    The White House and Department of Energy have been instrumental in driving the development of a neuromorphic computing program to help the United States continue its lead in basic research into (1) Beyond Exascale—high performance computing beyond Moore’s Law and von Neumann architectures, (2) Scientific Discovery—new paradigms for understanding increasingly large and complex scientific data, and (3) Emerging Architectures—assessing the potential of neuromorphic and quantum architectures. Neuromorphic computing spans a broad range of scientific disciplines from materials science to devices, to computer science, to neuroscience, all of which are required to solve the neuromorphic computing grand challenge. In our workshop we focus on the computer science aspects, specifically from a neuromorphic device through an application. Neuromorphic devices present a very different paradigm to the computer science community from traditional von Neumann architectures, which raises six major questions about building a neuromorphic application from the device level. We used these fundamental questions to organize the workshop program and to direct the workshop panels and discussions. From the white papers, presentations, panels, and discussions, there emerged several recommendations on how to proceed.

  10. T and D-Bench--Innovative Combined Support for Education and Research in Computer Architecture and Embedded Systems

    Science.gov (United States)

    Soares, S. N.; Wagner, F. R.

    2011-01-01

    Teaching and Design Workbench (T&D-Bench) is a framework aimed at education and research in the areas of computer architecture and embedded systems. It includes a set of features not found in other educational environments. This set of features is the result of an original combination of design requirements for T&D-Bench: that the…

  11. Multithreaded Computing Model, Architecture and Compiling Technique

    Institute of Scientific and Technical Information of China (English)

    林海波; 汤志忠

    2003-01-01

    Multithreading has been proposed as an efficient computing model for improving parallelism. It combines advantages of both the dataflow architecture and the von Neumann architecture, leading to high performance and efficiency. State-of-the-art multithreaded computing models include blocking-thread and non-blocking-thread models, and the corresponding multithreaded architectures can be classified as Multiple Context Processors and Hybrid Architectures. Thread partitioning is one of the most important compiling issues in multithreaded computing. The idea of multithreading will be developed further alongside advances in architecture, compiling techniques, and operating systems.

  12. ANL/Star project: a new architecture for large scale theoretical physics computations

    Energy Technology Data Exchange (ETDEWEB)

    Rushton, A.M.

    1985-01-01

    The project reported consists of two phases, each of which has substantial physics goals of its own. In Phase I, we have selected Star Technologies' ST-100 as the array processor for the prototype coupled system and have installed one on a Vax 11/750 host. Our goals with this system are to institute a substantial program in computational physics at Argonne based on the power provided by this system and thereby to gain experience with both the hardware and software architecture of the ST-100. In Phase II, we propose to build a prototype consisting of two coupled array processors with shared memory to prove that this design can achieve high speed and efficiency in a readily extensible and cost-effective manner. This will implement all of the hardware and software modifications necessary to extend this design to as many as 64 (or more) nodes. In our design, we seek to minimize the changes made in the standard system hardware and software; this drastically reduces the effort required by our group to implement such a design and enables us to more readily incorporate the company's upgrades to the array processor. It should be emphasized that our design is intended as a special-purpose system for theoretical calculations; however, it can be efficiently applied to a surprisingly broad class of problems. I shall discuss first the architecture of the ST-100 and then the physics program currently being implemented on a single system. Finally the proposed design of the coupled system is presented.

  13. Apolux : an innovative computer code for daylight design and analysis in architecture and urbanism

    Energy Technology Data Exchange (ETDEWEB)

    Claro, A.; Pereira, F.O.R.; Ledo, R.Z. [Santa Catarina Federal Univ., Florianopolis, SC (Brazil)

    2005-07-01

    The main capabilities of a new computer program for calculating and analyzing daylighting in architectural spaces were discussed. Apolux 1.0 was designed to use three-dimensional files generated in graphic editors in the data exchange file (DXF) format and was developed to integrate an architect's design characteristics. An example of its use in a design context development was presented. The program offers fast and flexible manipulation of models on the video card under different visualization conditions. The algorithm for working with the physics of light is based on the radiosity method, representing the surfaces through finite elements divided into small triangular units of area, each of which is confronted with all the others. The form factors of each triangle are determined in relation to all others in the primary calculation. Visible directions of the sky are also included, according to the modular units of a subdivided globe. Following these primary calculations, different and successive daylighting solutions can be determined under different sky conditions. The program can also change the properties of the materials to quickly recalculate the solutions. The program has been applied to an office building in Florianopolis, Brazil. The four stages of the design include initial discussion with the architects about the conceptual possibilities; development of a comparative study based on two architectural designs with different conceptual elements regarding daylighting exploitation, in order to compare the internal daylighting levels and distribution of the two options exposed to the same external conditions; a study of the solar shading devices for specific facades; and simulations to test the performance of different designs. The program has proven to be very flexible, with reliable results, and allows real-sky situations to be incorporated through the input of spherical-model luminance values for the real sky. 3 refs., 14 figs.
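
    Once the form factors are precomputed, the radiosity formulation mentioned above reduces to iterating the classical update B = E + ρ(F·B). The sketch below uses toy form factors and reflectances purely for illustration; it is not Apolux's implementation.

```python
# Hedged sketch of the classical radiosity iteration B = E + rho * (F @ B),
# with a toy form-factor matrix F between patches.
import numpy as np

def solve_radiosity(E, rho, F, iters=200):
    B = E.copy()                      # start from the direct (emitted) light
    for _ in range(iters):
        B = E + rho * (F @ B)         # gather reflected light from all patches
    return B

n = 4
F = np.full((n, n), 1.0 / (n - 1))    # toy form factors; each row sums to 1
np.fill_diagonal(F, 0.0)              # a planar patch does not see itself
E = np.array([1.0, 0.0, 0.0, 0.0])    # only patch 0 receives direct light
rho = np.array([0.2, 0.5, 0.5, 0.5])  # per-patch reflectances (assumed)

print(solve_radiosity(E, rho, F).round(3))
```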

  14. Canine tarsal architecture as revealed by high-resolution computed tomography.

    Science.gov (United States)

    Galateanu, G; Apelt, D; Aizenberg, I; Saragusty, J; Hildebrandt, T B

    2013-06-01

    Central tarsal bone (CTB) fractures are well documented and are a subject of increasing importance in human, equine and canine athletes although the mechanism of these fractures in dogs is not fully understood and an extrapolation from human medicine may not be accurate. This study reports the use of high-resolution computed tomography (CT) of 91 tarsal joints from 47 dogs to generate a more detailed in situ anatomical description of the CTB architecture in order to obtain a better understanding of the pathogenesis of CTB fractures in this species. The dogs studied represented a wide range of ages, breeds and levels of habitual physical activity and the angles of the tarsal joints studied ranged between maximal flexion (16.4°) and maximal extension (159.1°). Regardless of tarsal angle, the CTB articulated with the calcaneus exclusively at the level of its plantar process (PPCTB) in all dogs. The PPCTB presented two distinct parts in all dogs, a head and a neck. The calcaneus tended to rely on the PPCTB neck during flexion and on the PPCTB head during extension. This study describes new tarsal elements for the first time, including the calcaneal articular process, the fourth tarsal bone plantar articular process and the talar plantar prominence of the CTB. Based on calcaneo-PPCTB architecture, it is postulated that the PPCTB is a keystone structure and that at least some of CTB fractures in dogs could either commence at or are induced at this level due to the impingement forces exercised by the calcaneus.

  15. Quantitative Computed Tomography and image analysis for advanced muscle assessment

    Directory of Open Access Journals (Sweden)

    Kyle Joseph Edmunds

    2016-06-01

    Full Text Available Medical imaging is of particular interest in the field of translational myology, as extant literature describes the utilization of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies recapitulate changes in tissue composition within muscles, as visualized by the association of tissue types to specified Hounsfield Unit (HU values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented as both average HU values and compositions with respect to total muscle volumes, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration.
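
    The association of tissue types with Hounsfield Unit ranges described above amounts to thresholding the CT voxels. The sketch below classifies a slice with illustrative cut-offs; exact HU boundaries vary between studies, and the values here are assumptions for demonstration, not the paper's.

```python
# Illustrative voxel classification by Hounsfield Unit range, in the spirit
# of the analysis above. The cut-offs are assumed values, not the paper's.
import numpy as np

HU_RANGES = {                    # label: (lower bound, upper bound), in HU
    "fat":                 (-200, -10),
    "connective/atrophic": (-10,   40),
    "normal muscle":       ( 40,  200),
}

def composition(hu_slice):
    """Return each tissue label's share of the voxels in a CT slice."""
    total = hu_slice.size
    return {name: float(((hu_slice >= lo) & (hu_slice < hi)).sum()) / total
            for name, (lo, hi) in HU_RANGES.items()}

demo = np.random.default_rng(2).integers(-250, 250, size=(256, 256))
print(composition(demo))
```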

  16. The Optimization of Algorithms in the Process of Temporal Data Mining Using the Compute Unified Device Architecture

    Directory of Open Access Journals (Sweden)

    Alexandru PIRJAN

    2010-09-01

    Full Text Available Considering the importance and usefulness of real time data mining, in recent years the concern of researchers to discover new hardware architectures that can manage and process large volumes of data has increased significantly. In this paper the performance of algorithms for temporal data mining that are implemented in the new Compute Unified Device Architecture (CUDA) from the latest generation of graphics processing units (GPU) will be analyzed and reviewed. The performance will be evaluated taking into account the type of algorithm, data access, the problems' size, the GPU's processor generation, and the number of threads processed.

  17. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    CERN Document Server

    Buyya, Rajkumar; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., the hardware, power units, cooling and software), and holistically work to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of ...

  18. Efficient rendering of digitally reconstructed radiographs on heterogeneous computing architectures using central slice theorem.

    Science.gov (United States)

    Abdellah, Marwan; Abdallah, Mohamed; Alzanati, Mohamed; Eldeib, Ayman

    2016-08-01

    Digitally reconstructed radiographs (DRRs) play a significant role in modern clinical radiation therapy. They are used to verify patient alignments during image guided therapies with 2D-3D image registration. The generation of DRRs can be implemented intuitively in O(N³) relying on direct volume rendering (DVR) methods, such as ray marching. This complexity imposes certain limitations on the rendering performance if high quality DRR images are needed. Those DRRs can be alternatively generated in the k-space using the central slice theorem in O(N² log N). Several rendering pipelines have been designed to create the DRRs in the k-space, but they were either limited to a specific vendor or entailed particular software requirements. We present a high performance implementation of a k-space-based DRR generation pipeline that is executable on various heterogeneous computing architectures using OpenCL. Our implementation generates a DRR for a 512³ CT volume in 6, 2.7 and 0.68 milliseconds on a commodity CPU, mid-range and high-end GPUs, respectively.
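
    The O(N² log N) route rests on the central (Fourier) slice theorem, which for an axis-aligned view can be verified numerically in a few lines: the parallel projection of a volume equals the inverse 2D FFT of the k_z = 0 plane of its 3D FFT. A real DRR pipeline additionally resamples oblique k-space slices and applies attenuation weighting, which this sketch omits.

```python
# Numerical check of the central slice theorem behind the k-space pipeline:
# for an axis-aligned view, the parallel projection of the volume equals the
# inverse 2D FFT of the k_z = 0 plane of the volume's 3D FFT.
import numpy as np

vol = np.random.rand(64, 64, 64)

proj_direct = vol.sum(axis=2)                 # brute-force projection along z
proj_kspace = np.fft.ifft2(np.fft.fftn(vol)[:, :, 0]).real

assert np.allclose(proj_direct, proj_kspace)  # identical up to round-off
```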

  19. The genetic architecture of heterochrony as a quantitative trait: lessons from a computational model.

    Science.gov (United States)

    Sun, Lidan; Sang, Mengmeng; Zheng, Chenfei; Wang, Dongyang; Shi, Hexin; Liu, Kaiyue; Guo, Yanfang; Cheng, Tangren; Zhang, Qixiang; Wu, Rongling

    2017-05-30

    Heterochrony is known as a developmental change in the timing or rate of ontogenetic events across phylogenetic lineages. It is a key concept synthesizing development into ecology and evolution to explore the mechanisms of how developmental processes impact on phenotypic novelties. A number of molecular experiments using contrasting organisms in developmental timing have identified specific genes involved in heterochronic variation. Beyond these classic approaches that can only identify single genes or pathways, quantitative models derived from current next-generation sequencing data serve as a more powerful tool to precisely capture heterochronic variation and systematically map a complete set of genes that contribute to heterochronic processes. In this opinion note, we discuss a computational framework of genetic mapping that can characterize heterochronic quantitative trait loci that determine the pattern and process of development. We propose a unifying model that charts the genetic architecture of heterochrony that perceives and responds to environmental perturbations and evolves over geologic time. The new model may potentially enhance our understanding of the adaptive value of heterochrony and its evolutionary origins, providing a useful context for designing new organisms that can best use future resources.

  20. Study of human performance in computer-aided architectural design: methods and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cuomo, D.L.

    1988-01-01

    The goal of this study was to develop a performance methodology useful for evaluating human performance across different types of tasks on a given system and across different levels of complexity within a single task. To meet these goals, performance measures that reflect meaningful changes in human behavior during CAAD tasks were developed. These measures were based on models of human information processing. The two cognitively different architectural tasks that were formulated differed in terms of stimulus-central processing-response compatibility and the structuredness of their problem spaces. Methods of varying task complexity within each of these tasks were also developed, to test the sensitivity of the performance measures across levels of complexity and to introduce variability into the human's design behavior. The developed performance measures revealed the effects of task complexity, task type, and individual subjects on performance. It was also shown that some measures more directly reflected the computer-interaction aspects of the task, while other measures reflected the cognitive design activity of the human.

  1. Fault Tolerant Architecture For A Fly-By-Light Flight Control Computer

    Science.gov (United States)

    Thompson, Kevin; Stipanovich, John; Smith, Brian; Reddy, Mahesh C.

    1990-02-01

    The next generation of flight control computers will utilize fiber optic technology to produce a fly-by-light flight control system. Optical transducers and optical fibers will take the place of electrical position transducers and wires, torsion bars, bell cranks, and cables. Applications for this fly-by-light technology include space launch vehicles, upper stages, spacecraft, and commercial/military aircraft. Optical fibers are lighter than mechanical transmission media and, unlike conventional wire transmissions, are not susceptible to electromagnetic interference (EMI) and high energy emission sources. This paper will give an overview of a fault tolerant In-Line Monitored optical flight control system being developed at Boeing Aerospace & Electronics in Seattle, Washington. This system uses passive transducers with fiber optic interconnections, which promise to virtually eliminate EMI threats to flight control system performance and flight safety while also providing significant weight savings. The main emphasis of this paper will be the In-Line Monitored architecture of the optical transducer system required for use in a fault tolerant flight control system.

  2. On learning navigation behaviors for small mobile robots with reservoir computing architectures.

    Science.gov (United States)

    Antonelo, Eric Aislan; Schrauwen, Benjamin

    2015-04-01

    This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal.
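
    The RC recipe summarized above - a fixed random recurrent reservoir with only a linear readout trained - fits in a few lines. The sketch below trains an echo state network on a toy delay task; the reservoir size, scaling constants, and ridge parameter are illustrative assumptions, not values from the paper.

```python
# Minimal echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius below 1

def run_reservoir(u_seq):
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)   # fixed, untrained dynamics
        states.append(x.copy())
    return np.array(states)

# Toy task: reconstruct the input delayed by 5 steps from reservoir states.
u = rng.uniform(-1, 1, 1000)
X, y = run_reservoir(u)[5:], u[:-5]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # readout only
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```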

  3. Open Computer Forensic Architecture a Way to Process Terabytes of Forensic Disk Images

    Science.gov (United States)

    Vermaas, Oscar; Simons, Joep; Meijer, Rob

    This chapter describes the Open Computer Forensics Architecture (OCFA), an automated system that dissects complex file types, extracts metadata from files and ultimately creates indexes on forensic images of seized computers. It consists of a set of collaborating processes, called modules. Each module is specialized in processing a certain file type. When it receives a so-called 'evidence' - the information that has been extracted so far about the file together with the actual data - it either adds new information about the file or uses the file to derive a new 'evidence'. All evidence, original and derived, is sent to a router after being processed by a particular module. The router decides which module should process the evidence next, based upon the metadata associated with the evidence. Thus the OCFA system can recursively process images until the embedded files, if any, have been extracted from every compound file, all information that the system can derive has been derived, and all extracted text is indexed. Compound files include, but are not limited to, archive and zip files, disk images, text documents of various formats and, for example, mailboxes. The output of an OCFA run is a repository full of derived files, a database containing all extracted information about the files, and an index which can be used when searching. This is presented in a web interface. Moreover, processed data is easily fed to third-party software for further analysis or for use in data-mining or text-mining tools. The main advantage of the OCFA system is scalability: it is able to process large amounts of data.

  4. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.

    Science.gov (United States)

    Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M

    2016-05-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
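
    The transfer-learning strategy evaluated in the paper - fine-tuning an ImageNet-pre-trained CNN on a medical target task - looks roughly as follows in PyTorch. The backbone choice, class count, frozen layers, and hyperparameters below are placeholders, not the configurations studied.

```python
# Hedged sketch of fine-tuning an ImageNet-pre-trained CNN for a CADe task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # e.g., lymph node: pos/neg

# Optionally freeze early layers and fine-tune only the later ones.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                 # stand-in for a CT patch batch
labels = torch.randint(0, 2, (8,))
loss = criterion(model(x), labels)              # one illustrative training step
loss.backward()
optimizer.step()
```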

  5. Recent advances in compartmentalized synthetic architectures as drug carriers, cell mimics and artificial organelles

    DEFF Research Database (Denmark)

    York-Durán, María José; Gallardo, Maria Godoy; Labay, Cédric Pierre

    2017-01-01

    significant research attention and these assemblies are proposed as candidate materials for a range of biomedical applications. In this Review article, the recent successes of multicompartment architectures as carriers for the delivery of therapeutic cargo or the creation of micro- and nanoreactors that mimic...

  6. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.

  7. CLOUD COMPUTING ARCHITECTURE FOR HIGHER EDUCATION IN THE THIRD WORLD COUNTRIES (REPUBLIC OF THE SUDAN AS MODEL

    Directory of Open Access Journals (Sweden)

    Mohmed Sirelkhtem Adrees

    2015-06-01

    Full Text Available The exponential growth in the volume of data and information leads to problems in management, control, and the high cost of storage operations, where organizations face difficulties with data retrieval, preparation, backups, and other data operations. Companies and business organizations therefore currently seek to achieve the highest return on their technology investments through the planning and implementation of virtualization and cloud computing technologies, in order to protect data and manage it more effectively and efficiently. At the same time, government funding for higher education is decreasing continuously in third world countries, and education management faces a set of challenges. Cloud computing can help provide solutions for these challenges, offering capabilities that regular IT models cannot. This paper aims to discuss and analyze the concepts of cloud computing, cloud computing models, cloud computing services, and cloud computing architecture; its main objective is to show how a cloud computing architecture can be used and applied in higher education in third world countries, with the Republic of the Sudan as a model.

  8. Advanced practice registered nurse usability testing of a tailored computer-mediated health communication program.

    Science.gov (United States)

    Lin, Carolyn A; Neafsey, Patricia J; Anderson, Elizabeth

    2010-01-01

    This study tested the usability of a touch-screen-enabled Personal Education Program with advanced practice RNs. The Personal Education Program is designed to enhance medication adherence and reduce adverse self-medication behaviors in older adults with hypertension. An iterative research process was used, which involved (1) pretrial focus groups to guide the design of the system information architecture, (2) two different cycles of think-aloud trials to test the software interface, and (3) post-trial focus groups to gather feedback on the think-aloud studies. Results from this iterative usability-testing process were used to systematically modify and improve the three Personal Education Program prototype versions: the pilot, prototype 1, and prototype 2. Findings contrasting the two think-aloud trials showed that APRN users rated Personal Education Program system usability, system information, and system-use satisfaction at a moderately high level in both trials. In addition, errors using the interface were reduced by 76%, and the interface time was reduced by 18.5% between the two trials. The usability-testing process used in this study ensured an interface design adapted to APRNs' needs and preferences, allowing them to use the computer-mediated health-communication technology effectively in a clinical setting.

  9. Feasibility of a Hybrid Brain-Computer Interface for Advanced Functional Electrical Therapy

    Directory of Open Access Journals (Sweden)

    Andrej M. Savić

    2014-01-01

    Full Text Available We present a feasibility study of a novel hybrid brain-computer interface (BCI) system for advanced functional electrical therapy (FET) of grasp. The FET procedure is improved with both automated stimulation-pattern selection and automated stimulation triggering. The proposed hybrid BCI comprises two BCI control signals: steady-state visual evoked potentials (SSVEP) and event-related desynchronization (ERD). The sequence of the two stages, SSVEP-BCI and ERD-BCI, runs in a closed-loop architecture. The first stage, SSVEP-BCI, acts as a selector of the electrical stimulation pattern that corresponds to one of the three basic types of grasp: palmar, lateral, or precision. In the second stage, ERD-BCI operates as a brain switch which activates the stimulation pattern selected in the previous stage. The system was tested in 6 healthy subjects, all of whom were able to control the device with accuracies in the range 0.64–0.96. The results provided the reference data needed for the planned clinical study. This novel BCI may promote further restoration of impaired motor function by closing the loop between the “will to move” and contingent, temporally synchronized sensory feedback.
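
    The two-stage control flow can be summarized in a few lines. A minimal Python sketch, in which the classifier scores, thresholds, and `stimulator` object are illustrative assumptions rather than the authors' implementation:

      # Hypothetical sketch of the two-stage closed loop described above.
      GRASPS = ("palmar", "lateral", "precision")

      def ssvep_select(scores, threshold=0.7):
          """Stage 1 (SSVEP-BCI): pick the grasp whose score clears threshold."""
          best = max(scores, key=scores.get)
          return best if scores[best] >= threshold else None

      def erd_switch(erd_power_drop, threshold=0.3):
          """Stage 2 (ERD-BCI): a brain switch detecting movement intention."""
          return erd_power_drop >= threshold

      def control_step(ssvep_scores, erd_power_drop, stimulator):
          grasp = ssvep_select(ssvep_scores)
          if grasp in GRASPS and erd_switch(erd_power_drop):
              stimulator.trigger(grasp)  # fire the pre-selected pattern
              return grasp
          return None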

  10. Computer-aided tissue engineering: benefiting from the control over scaffold micro-architecture.

    Science.gov (United States)

    Tarawneh, Ahmad M; Wettergreen, Matthew; Liebschner, Michael A K

    2012-01-01

    Minimization schema in nature affects the material arrangements of most objects, independent of scale. The field of cellular solids has focused on the generalization of these natural architectures (bone, wood, coral, cork, honeycombs) for material improvement and for elucidation of natural growth mechanisms. We applied this approach to the comparison of a set of complex three-dimensional (3D) architectures containing the same material volume but dissimilar architectural arrangements. Ball-and-stick representations of these architectures at varied material volumes were characterized according to geometric properties such as beam length, beam diameter, surface area, space-filling efficiency, and pore volume. The modulus, deformation properties, and stress distributions contributed solely by the architectural arrangements were revealed through finite element simulations. We demonstrated that while density is the greatest factor in controlling modulus, optimal material arrangement can result in equal modulus values even with volumetric discrepancies of up to 10%. We showed that at low porosities, loss of architectural complexity allows these architectures to be modeled as closed-cell solids. At these lower porosities, the smaller pores do not greatly contribute to the overall modulus of the architectures; instead, a stress backbone is responsible for the modulus. Our results further indicated that when considering a deposition-based growth pattern, such as occurs in nature, surface area plays a large role in the resulting strength of these architectures, specifically for systems like bone. This study represents the first step towards the development of mathematical algorithms to describe the mechanical properties of regular and symmetric architectures used for tissue-regeneration applications. The eventual goal is to create a logical set of rules that can explain the structural properties of an architecture based solely upon its geometry. The information could
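
    The density-modulus link echoes the textbook cellular-solids scaling laws. A hedged Python sketch, assuming the classic Gibson-Ashby relation E/E_s ≈ C(ρ/ρ_s)^n (a standard model from the cellular-solids literature, not the finite element procedure used in this study):

      import math

      def strut_volume(length, diameter):
          """Material volume of one cylindrical beam in a ball-and-stick model."""
          return math.pi * (diameter / 2.0) ** 2 * length

      def relative_density(n_beams, length, diameter, cell_volume):
          return n_beams * strut_volume(length, diameter) / cell_volume

      def apparent_modulus(e_solid, rel_density, c=1.0, n=2.0):
          """E/E_s ~ C * (rho/rho_s)**n; n ~ 2 for open-cell foams."""
          return e_solid * c * rel_density ** n

      # Example: 24 beams of 1 mm x 0.2 mm in a 2 mm cube, solid E = 18 GPa.
      rho = relative_density(24, 1.0, 0.2, 8.0)
      print(rho, apparent_modulus(18.0, rho))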

  11. The Jupyter/IPython architecture: a unified view of computational research, from interactive exploration to communication and publication.

    Science.gov (United States)

    Ragan-Kelley, M.; Perez, F.; Granger, B.; Kluyver, T.; Ivanov, P.; Frederic, J.; Bussonnier, M.

    2014-12-01

    IPython has provided terminal-based tools for interactive computing in Python since 2001. The notebook document format and multi-process architecture introduced in 2011 have expanded the applicable scope of IPython into teaching, presenting, and sharing computational work, in addition to interactive exploration. The new architecture also allows users to work in any language, with implementations in Python, R, Julia, Haskell, and several other languages. The language-agnostic parts of IPython have been renamed Jupyter, to better capture the notion that a cross-language design can encapsulate commonalities present in computational research regardless of the programming language being used. This architecture offers components like the web-based Notebook interface, which supports rich documents that combine code and computational results with text narratives, mathematics, images, video, and any media that a modern browser can display. This interface can be used not only in research, but also for publication and education, as notebooks can be converted to a variety of output formats, including HTML and PDF. Recent developments in the Jupyter project include a multi-user environment for hosting notebooks for a class or research group, live collaboration on notebooks via Google Docs, and better support for languages other than Python.
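
    The notebook-to-publication path mentioned above is exposed through the public nbformat and nbconvert packages. A short sketch (the filename "analysis.ipynb" is a made-up example):

      import nbformat
      from nbconvert import HTMLExporter

      nb = nbformat.read("analysis.ipynb", as_version=4)   # load the JSON document
      body, resources = HTMLExporter().from_notebook_node(nb)

      with open("analysis.html", "w", encoding="utf-8") as f:
          f.write(body)                                    # standalone HTML page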

  12. HONEI: A collection of libraries for numerical computations targeting multiple processor architectures

    Science.gov (United States)

    van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten

    2009-12-01

    We present HONEI, an open-source collection of libraries offering a hardware-oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and an additional 3-4 and 4-16 times faster execution on the Cell and a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development. Program summary: Program title: HONEI. Catalogue identifier: AEDW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPLv2. No. of lines in distributed program, including test data, etc.: 216 180. No. of bytes in distributed program, including test data, etc.: 1 270 140. Distribution format: tar.gz. Programming language: C++. Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3. Operating system: Linux. RAM: at least 500 MB free. Classification: 4.8, 4.3, 6.1. External routines: SSE: none; [1] for GPU, [2] for Cell backend. Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the
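
    The hardware-abstraction idea, one operation with interchangeable backends selected by a tag, can be sketched in a few lines. A conceptual Python stand-in (HONEI itself is templated C++ and its real API differs):

      import numpy as np

      class CPU:
          @staticmethod
          def axpy(alpha, x, y):
              return alpha * x + y          # reference CPU implementation

      class GPU:                            # placeholder for a CUDA backend
          @staticmethod
          def axpy(alpha, x, y):
              return alpha * x + y          # same semantics, different target

      def axpy(alpha, x, y, backend=CPU):
          """Application-facing operation; `backend` selects the hardware."""
          return backend.axpy(alpha, x, y)

      x, y = np.ones(4), np.arange(4.0)
      print(axpy(2.0, x, y, backend=GPU))   # [2. 3. 4. 5.]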

  13. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar

    2012-01-01

    in a vast range of parameters. The new hardware architecture allows us to verify the existing theoretical models for the complexity estimation in linear cryptanalysis. The designed hardware architecture is realized on two Xilinx Virtex-6 XC6VLX240T FPGAs for smaller block lengths, and on the RIVYERA platform... with 128 Xilinx Spartan-3 XC3S5000 FPGAs for larger block lengths....
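
    For readers without FPGA hardware, the quantity being measured can be approximated in software. A hedged Monte Carlo sketch of rank-1 success probability under the usual binomial model for a single linear approximation (an illustration of the statistic, not the paper's architecture):

      import numpy as np

      # Model: the right key's parity counter ~ Binomial(N, 1/2 + eps);
      # each of the wrong key candidates ~ Binomial(N, 1/2).
      def success_probability(eps, n_texts, n_wrong_keys, trials=2000, seed=0):
          rng = np.random.default_rng(seed)
          wins = 0
          for _ in range(trials):
              right = rng.binomial(n_texts, 0.5 + eps)
              wrong = rng.binomial(n_texts, 0.5, size=n_wrong_keys)
              # Matsui-style ranking: distance of each counter from N/2
              if abs(right - n_texts / 2) > np.abs(wrong - n_texts / 2).max():
                  wins += 1
          return wins / trials

      # Rule of thumb: data complexity N on the order of 1/eps**2 is needed.
      print(success_probability(eps=2**-6, n_texts=32768, n_wrong_keys=255))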

  14. Resolving ten MVNO issues with EPS architecture, VoLTE and advanced policy server

    OpenAIRE

    COPELAND, Rebecca; Crespi, Noel

    2011-01-01

    International audience; The number of MVNOs (Mobile Virtual Network Operators) is growing globally, but so are their operational and business issues. This paper identifies these issues and looks for remedies via the new 4G architecture and interfaces. The paper examines the "Full" MVNO model as a "Home" network in a pseudo-roaming scenario (National Roaming), allowing the MVNO to connect to multiple MNOs through the discovery and selection process, and to benefit from the access-agnostic nature o...

  15. Some Hail 'Computational Science' as Biggest Advance Since Newton, Galileo.

    Science.gov (United States)

    Turner, Judith Axler

    1987-01-01

    Computational science is defined as science done on a computer. A computer can serve as a laboratory for researchers who cannot experiment with their subjects, and as a calculator for those who otherwise might need centuries to solve some problems mathematically. The National Science Foundation's support of supercomputers is discussed. (MLW)

  16. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    Science.gov (United States)

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  18. Oracle joins CERN Openlab to advance grid computing

    CERN Multimedia

    2003-01-01

    "CERN and Oracle Corporation today announced that Oracle is joining the CERN openlab for DataGrid applications to collaborate in creating new grid computing technologies and exploring new computing and data management solutions far beyond today's Internet-based computing" (1 page).

  19. An AmI-Based Software Architecture Enabling Evolutionary Computation in Blended Commerce: The Shopping Plan Application

    OpenAIRE

    Giuseppe D’Aniello; Matteo Gaeta; Vincenzo Loia; Francesco Orciuoli

    2015-01-01

    This work describes an approach to synergistically exploit ambient intelligence technologies, mobile devices, and evolutionary computation in order to support blended commerce or ubiquitous commerce scenarios. The work proposes a software architecture consisting of three main components: linked data for e-commerce, cloud-based services, and mobile apps. The three components implement a scenario where a shopping mall is presented as an intelligent environment in which customers use NFC capabil...
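
    As a toy illustration of the evolutionary-computation component (hypothetical, not the paper's algorithm), here is a minimal mutation-only genetic algorithm that orders shop visits to minimize total walking distance:

      import random

      def plan_length(plan, dist):
          return sum(dist[a][b] for a, b in zip(plan, plan[1:]))

      def evolve_plan(shops, dist, pop_size=50, generations=200, seed=1):
          rng = random.Random(seed)
          pop = [rng.sample(shops, len(shops)) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda p: plan_length(p, dist))    # fitness = distance
              survivors = pop[: pop_size // 2]
              children = []
              for parent in survivors:
                  child = parent[:]
                  i, j = rng.sample(range(len(shops)), 2)     # swap mutation
                  child[i], child[j] = child[j], child[i]
                  children.append(child)
              pop = survivors + children
          return min(pop, key=lambda p: plan_length(p, dist))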

  20. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

    Science.gov (United States)

    Shin, Hoo-Chang; Roth, Holger R.; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel

    2016-01-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e., ImageNet) and the revival of deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., supervised fine-tuning of CNN models pre-trained on a natural image dataset for medical image tasks (although domain transfer between two medical image datasets is also possible). In this paper, we exploit three important, but previously understudied, factors of employing deep convolutional neural networks for computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, with 85% sensitivity at 3 false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance
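
    The fine-tuning strategy evaluated in the paper is easy to sketch with a modern framework. A hedged Keras example, in which the two-class head, the 224x224 input, and the two-phase learning-rate schedule are illustrative assumptions:

      import tensorflow as tf

      base = tf.keras.applications.ResNet50(
          weights="imagenet", include_top=False,
          input_shape=(224, 224, 3), pooling="avg")
      base.trainable = False                      # phase 1: freeze the backbone

      model = tf.keras.Sequential([
          base,
          tf.keras.layers.Dense(2, activation="softmax"),  # e.g. LN vs. non-LN
      ])
      model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                    loss="sparse_categorical_crossentropy", metrics=["accuracy"])
      # ... model.fit(train_ds, ...) on the CADe data, then:

      base.trainable = True                       # phase 2: fine-tune end to end
      model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                    loss="sparse_categorical_crossentropy", metrics=["accuracy"])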