WorldWideScience

Sample records for split computer architecture

  1. Computer architecture technology trends

    CERN Document Server

    1991-01-01

    Please note this is a Short Discount publication. This year's edition of Computer Architecture Technology Trends analyses the trends which are taking place in the architecture of computing systems today. Due to the sheer number of different applications to which computers are being applied, there seems no end to the different adoptions which proliferate. There are, however, some underlying trends which appear. Decision makers should be aware of these trends when specifying architectures, particularly for future applications. This report is fully revised and updated and provides insight in

  2. Computer architecture a quantitative approach

    CERN Document Server

    Hennessy, John L

    2019-01-01

    Computer Architecture: A Quantitative Approach, Sixth Edition has been considered essential reading by instructors, students and practitioners of computer design for over 20 years. The sixth edition of this classic textbook is fully revised with the latest developments in processor and system architecture. It now features examples from the RISC-V (RISC Five) instruction set architecture, a modern RISC instruction set developed and designed to be a free and openly adoptable standard. It also includes a new chapter on domain-specific architectures and an updated chapter on warehouse-scale computing that features the first public information on Google's newest WSC. True to its original mission of demystifying computer architecture, this edition continues the longstanding tradition of focusing on areas where the most exciting computing innovation is happening, while always keeping an emphasis on good engineering design.

  3. Digital design and computer architecture

    CERN Document Server

    Harris, David

    2010-01-01

    Digital Design and Computer Architecture is designed for courses that combine digital logic design with computer organization/architecture or that teach these subjects as a two-course sequence. Digital Design and Computer Architecture begins with a modern approach by rigorously covering the fundamentals of digital logic design and then introducing Hardware Description Languages (HDLs). Featuring examples of the two most widely-used HDLs, VHDL and Verilog, the first half of the text prepares the reader for what follows in the second: the design of a MIPS Processor. By the end of D

  4. A split accumulation gate architecture for silicon MOS quantum dots

    Science.gov (United States)

    Rochette, Sophie; Rudolph, Martin; Roy, Anne-Marie; Curry, Matthew; Ten Eyck, Gregory; Dominguez, Jason; Manginell, Ronald; Pluym, Tammy; King Gamble, John; Lilly, Michael; Bureau-Oxton, Chloé; Carroll, Malcolm S.; Pioro-Ladrière, Michel

    We investigate tunnel barrier modulation without barrier electrodes in a split accumulation gate architecture for silicon metal-oxide-semiconductor quantum dots (QD). The layout consists of two independent accumulation gates, one gate forming a reservoir and the other the QD. The devices are fabricated with a foundry-compatible, etched, poly-silicon gate stack. We demonstrate 4 orders of magnitude of tunnel-rate control between the QD and the reservoir by modulating the reservoir gate voltage. Last-electron charging energies of approximately 10 meV and tuning of the ST splitting in the range 100-200 μeV are observed in two different split gate layouts and labs. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a Lockheed-Martin Company, for the U.S. Department of Energy under Contract No. DE-AC04-94AL85000.

  5. A Heterogeneous Quantum Computer Architecture

    NARCIS (Netherlands)

    Fu, X.; Riesebos, L.; Lao, L.; Garcia Almudever, C.; Sebastiano, F.; Versluis, R.; Charbon, E.; Bertels, K.

    2016-01-01

    In this paper, we present a high-level view of the heterogeneous quantum computer architecture, as any future quantum computer will consist of both a classical and a quantum computing part. The classical part is needed for error correction as well as for the execution of algorithms that contain both

  6. Computing architecture for autonomous microgrids

    Science.gov (United States)

    Goldsmith, Steven Y.

    2015-09-29

    A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

  7. Programmable architecture for quantum computing

    NARCIS (Netherlands)

    Chen, J.; Wang, L.; Charbon, E.; Wang, B.

    2013-01-01

    A programmable architecture called “quantum FPGA (field-programmable gate array)” (QFPGA) is presented for quantum computing, which is a hybrid model combining the advantages of the qubus system and the measurement-based quantum computation. There are two kinds of buses in QFPGA, the local bus and

  8. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries, such as Los Alamos, CERN, and the Rutherford Laboratory, but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area.

  9. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relative high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  10. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. A system of finite arithmetic over residue Fermat number systems enables calculation of the discrete Fourier transform (DFT) of a series of complex numbers with a reduced number of multiplications. Computer architectures based on this approach are suitable for the design of very-large-scale integrated (VLSI) circuits for computing DFTs. The general approach is not limited to DFTs; it is applicable to decoding of error-correcting codes and other transform calculations. The system is readily implemented in VLSI.
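
    As a hedged illustration of the residue-arithmetic idea summarized above (the authors' VLSI design itself is not reproduced), the following Python sketch computes a transform over the Fermat prime F4 = 2^16 + 1; the transform length, the sample data, and the naive O(N^2) evaluation are assumptions chosen for brevity.

    ```python
    # Hypothetical sketch: a number-theoretic transform (NTT) over the Fermat
    # prime F4 = 2**16 + 1. Arithmetic modulo a Fermat number is what lets many
    # multiplications reduce to shifts and adds in hardware.

    P = 2**16 + 1          # Fermat prime F4 = 65537
    G = 3                  # a primitive root modulo F4 (multiplicative order 2**16)

    def ntt(x, invert=False):
        """Naive O(N^2) NTT over GF(F4); the length of x must divide 2**16."""
        n = len(x)
        assert (2**16) % n == 0
        w = pow(G, (2**16) // n, P)      # principal n-th root of unity mod P
        if invert:
            w = pow(w, P - 2, P)         # modular inverse (Fermat's little theorem)
        out = [sum(x[j] * pow(w, i * j, P) for j in range(n)) % P for i in range(n)]
        if invert:
            n_inv = pow(n, P - 2, P)
            out = [(v * n_inv) % P for v in out]
        return out

    data = [1, 2, 3, 4, 0, 0, 0, 0]
    assert ntt(ntt(data), invert=True) == data   # the round trip recovers the input
    ```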

  11. Fault Tolerant Computer Architecture

    CERN Document Server

    Sorin, Daniel

    2009-01-01

    For many years, most computer architects have pursued one primary goal: performance. Architects have translated the ever-increasing abundance of ever-faster transistors provided by Moore's law into remarkable increases in performance. Recently, however, the bounty provided by Moore's law has been accompanied by several challenges that have arisen as devices have become smaller, including a decrease in dependability due to physical faults. In this book, we focus on the dependability challenge and the fault tolerance solutions that architects are developing to overcome it. The two main purposes

  12. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelis

  13. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described. Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  14. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  15. Computer Architecture Performance Evaluation Methods

    CERN Document Server

    Eeckhout, Lieven

    2010-01-01

    Performance evaluation is at the foundation of computer architecture research and development. Contemporary microprocessors are so complex that architects cannot design systems based on intuition and simple models only. Adequate performance evaluation methods are absolutely crucial to steer the research and development process in the right direction. However, rigorous performance evaluation is non-trivial as there are multiple aspects to performance evaluation, such as picking workloads, selecting an appropriate modeling or simulation approach, running the model and interpreting the results usi

  16. Geometric Computing for Freeform Architecture

    KAUST Repository

    Wallner, J.

    2011-06-03

    Geometric computing has recently found a new field of applications, namely the various geometric problems which lie at the heart of rationalization and construction-aware design processes of freeform architecture. We report on our work in this area, dealing with meshes with planar faces and meshes which allow multilayer constructions (which is related to discrete surfaces and their curvatures), triangle meshes with circle-packing properties (which is related to conformal uniformization), and with the paneling problem. We emphasize the combination of numerical optimization and geometric knowledge.

  17. Analysis of mobile fronthaul bandwidth and wireless transmission performance in split-PHY processing architecture.

    Science.gov (United States)

    Miyamoto, Kenji; Kuwano, Shigeru; Terada, Jun; Otaka, Akihiro

    2016-01-25

    We analyze the mobile fronthaul (MFH) bandwidth and the wireless transmission performance in the split-PHY processing (SPP) architecture, which redefines the functional split of centralized/cloud RAN (C-RAN) while preserving high wireless coordinated multi-point (CoMP) transmission/reception performance. The SPP architecture splits the base station (BS) functions between wireless channel coding/decoding and wireless modulation/demodulation, and employs its own CoMP joint transmission and reception schemes. Simulation results show that the SPP architecture reduces the MFH bandwidth by up to 97% from conventional C-RAN while matching the wireless bit error rate (BER) performance of conventional C-RAN in uplink joint reception with only a 2-dB signal-to-noise ratio (SNR) penalty.
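
    A rough, back-of-the-envelope comparison of the two functional splits can make the bandwidth argument concrete. The sketch below contrasts a conventional C-RAN fronthaul carrying quantized IQ samples with a link carrying coded bits, as in an SPP-style split; the carrier bandwidth, antenna count, quantization width, and user throughput are illustrative assumptions, not figures from the paper.

    ```python
    # Illustrative fronthaul bandwidth comparison (all parameters are assumptions).
    sample_rate_hz = 30.72e6        # complex samples/s for an LTE-like 20 MHz carrier
    iq_bits        = 2 * 15         # bits per complex sample (I and Q, 15 bits each)
    antennas       = 2

    iq_fronthaul_bps    = sample_rate_hz * iq_bits * antennas   # conventional C-RAN split
    coded_fronthaul_bps = 150e6                                  # coded bits for an assumed 150 Mb/s flow

    reduction = 1 - coded_fronthaul_bps / iq_fronthaul_bps
    print(f"IQ-sample fronthaul: {iq_fronthaul_bps / 1e9:.2f} Gb/s")
    print(f"Coded-bit fronthaul: {coded_fronthaul_bps / 1e6:.0f} Mb/s")
    print(f"Bandwidth reduction: {reduction:.0%}")
    ```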

  18. Brain architecture: a design for natural computation.

    Science.gov (United States)

    Kaiser, Marcus

    2007-12-15

    Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.

  19. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  20. Computer programming and architecture the VAX

    CERN Document Server

    Levy, Henry

    2014-01-01

    Takes a unique systems approach to programming and architecture of the VAX. Using the VAX as a detailed example, the first half of this book offers a complete course in assembly language programming. The second half describes higher-level systems issues in computer architecture. Highlights include the VAX assembler and debugger, other modern architectures such as RISCs, multiprocessing and parallel computing, microprogramming, caches and translation buffers, and an appendix on the Berkeley UNIX assembler.

  1. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    Energy Technology Data Exchange (ETDEWEB)

    None

    2007-06-27

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  2. Split-based computation of majority-rule supertrees.

    Science.gov (United States)

    Kupczok, Anne

    2011-07-13

    Supertree methods combine overlapping input trees into a larger supertree. Here, I consider split-based supertree methods that first extract the split information of the input trees and subsequently combine this split information into a phylogeny. Well known split-based supertree methods are matrix representation with parsimony and matrix representation with compatibility. Combining input trees on the same taxon set, as in the consensus setting, is a well-studied task and it is thus desirable to generalize consensus methods to supertree methods. Here, three variants of majority-rule (MR) supertrees that generalize majority-rule consensus trees are investigated. I provide simple formulas for computing the respective score for bifurcating input trees and supertrees. These score computations, together with a heuristic tree search minimizing the scores, were implemented in the python program PluMiST (Plus- and Minus SuperTrees) available from http://www.cibiv.at/software/plumist. The different MR methods were tested by simulation and on real data sets. The search heuristic was successful in combining compatible input trees. When combining incompatible input trees, especially one variant, MR(-) supertrees, performed well. The presented framework allows for an efficient score computation of three majority-rule supertree variants and input trees. I combined the score computation with a heuristic search over the supertree space. The implementation was tested by simulation and on real data sets and showed promising results. Especially the MR(-) variant seems to be a reasonable score for supertree reconstruction. Generalizing these computations to multifurcating trees is an open problem, which may be tackled using this framework.
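
    To make the notion of "split information" concrete, the sketch below extracts the bipartitions (splits) induced by small nested-tuple trees and keeps those present in a majority of the input trees; the toy taxa and the simple frequency count are assumptions for illustration and are not the scoring implemented in PluMiST.

    ```python
    # Toy split extraction and majority-rule filtering (illustrative only).
    from collections import Counter

    def splits(tree, all_taxa):
        """Return the non-trivial bipartitions induced by the clades of a nested-tuple tree."""
        found = set()

        def leaf_set(node):
            if isinstance(node, tuple):
                s = frozenset().union(*(leaf_set(child) for child in node))
            else:
                s = frozenset([node])
            if 1 < len(s) < len(all_taxa):                  # ignore trivial splits
                found.add(frozenset([s, all_taxa - s]))
            return s

        leaf_set(tree)
        return found

    taxa = frozenset("ABCDE")
    input_trees = [
        ((("A", "B"), "C"), ("D", "E")),
        ((("A", "B"), "D"), ("C", "E")),
        ((("A", "B"), "C"), ("D", "E")),
    ]

    counts = Counter(s for t in input_trees for s in splits(t, taxa))
    majority = [s for s, c in counts.items() if c > len(input_trees) / 2]
    for s in majority:
        side_a, side_b = sorted(sorted(side) for side in s)
        print(side_a, "|", side_b)
    ```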

  3. Brain architecture: A design for natural computation

    OpenAIRE

    Kaiser, Marcus

    2008-01-01

    Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organisation of computers was based on concepts of brain organisation. Here, we give an update on current results on the global organisation of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and ...

  4. A computer architecture for intelligent machines

    Science.gov (United States)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  5. Fundamentals of computer architecture and design

    CERN Document Server

    Bindal, Ahmet

    2017-01-01

    This textbook provides semester-length coverage of computer architecture and design, providing a strong foundation for students to understand modern computer system architecture and to apply these insights and principles to future computer designs. It is based on the author’s decades of industrial experience with computer architecture and design, as well as with teaching students focused on pursuing careers in computer engineering. Unlike a number of existing textbooks for this course, this one focuses not only on CPU architecture, but also covers in great detail system buses, peripherals and memories. This book teaches every element in a computing system in two steps. First, it introduces the functionality of each topic (and subtopics) and then goes into “from-scratch design” of a particular digital block from its architectural specifications using timing diagrams. The author describes how the data-path of a certain digital block is generated using timing diagrams, a method which most textbo...

  6. Quantum computation architecture using optical tweezers

    DEFF Research Database (Denmark)

    Weitenberg, Christof; Kuhr, Stefan; Mølmer, Klaus

    2011-01-01

    We present a complete architecture for scalable quantum computation with ultracold atoms in optical lattices using optical tweezers focused to the size of a lattice spacing. We discuss three different two-qubit gates based on local collisional interactions. The gates between arbitrary qubits … quantum computing …

  7. CAAD as Computer-Activated Architectural Design

    DEFF Research Database (Denmark)

    Galle, Per

    1998-01-01

    … On this background two alternative roles of computers in computer-aided architectural design (CAAD) are distinguished: a passive and a more active role, where in the latter case, the computer’s capacity for symbol manipulation is utilized to influence design thinking actively. The analysis offered in this paper may...

  8. Switching from Computer to Microcomputer Architecture Education

    Science.gov (United States)

    Bolanakis, Dimosthenis E.; Kotsis, Konstantinos T.; Laopoulos, Theodore

    2010-01-01

    In the last decades, the technological and scientific evolution of the computing discipline has been widely affecting research in software engineering education, which nowadays advocates more enlightened and liberal ideas. This article reviews cross-disciplinary research on a computer architecture class in consideration of its switching to…

  9. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.

  10. Architecture, systems research and computational sciences

    CERN Document Server

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics”, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  11. CITAstudio: Computation in Architecture 2015

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ayres, Phil

    2016-01-01

    … representational and material cultures. Through hands-on experimentation and production the programme emphasises learning-through-doing as a principal method for exploring computation as a means to pursue speculative design, experimental fabrication, material actuation and complex modelling....

  12. Splitting method for computing coupled hydrodynamic and structural response

    International Nuclear Information System (INIS)

    Ash, J.E.

    1977-01-01

    A numerical method is developed for application to unsteady fluid dynamics problems, in particular to the mechanics following a sudden release of high energy. Solution of the initial compressible flow phase provides input to a power-series method for the incompressible fluid motions. The system is split into spatial and time domains leading to the convergent computation of a sequence of elliptic equations. Two sample problems are solved, the first involving an underwater explosion and the second the response of a nuclear reactor containment shell structure to a hypothetical core accident. The solutions are correlated with experimental data
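
    As a generic illustration of what a splitting step looks like (this is a standard first-order Lie splitting written for a two-operator problem, not necessarily the exact scheme used in the paper), the evolution governed by two operators can be advanced by treating each operator in a separate substep:

    ```latex
    \frac{\partial u}{\partial t} = (\mathcal{A} + \mathcal{B})\, u,
    \qquad
    u^{n+1} \approx \mathcal{S}_{\mathcal{B}}(\Delta t)\, \mathcal{S}_{\mathcal{A}}(\Delta t)\, u^{n},
    ```

    where S_A(Δt) and S_B(Δt) each advance the solution by one time step using only one part of the operator, so each substep (for example, the elliptic spatial problem) can be solved with a method suited to it.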

  13. Architectural Implications of Cloud Computing

    Science.gov (United States)

    2011-10-24

    Slide excerpt listing example cloud platforms:
    • Google App Engine – Platform to develop and run applications on Google’s infrastructure
    • Microsoft Azure Services Platform – On-demand compute and storage services as well as a development platform based on Windows Azure
    • Yahoo! Open Strategy (Y!OS) – Platform to develop and web …
    • IBM Computing On Demand: http://www-03.ibm.com/systems/deepcomputing/cod/

  14. The new landscape of parallel computer architecture

    Science.gov (United States)

    Shalf, John

    2007-07-01

    The past few years have seen a sea change in computer architecture that will impact every facet of our society, as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing. In this paper we examine the reasons behind the movement to exponentially increasing parallelism, and its ramifications for system design, applications and programming models.

  15. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  16. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  17. Computing on Knights and Kepler Architectures

    International Nuclear Information System (INIS)

    Bortolotti, G; Caberletti, M; Ferraro, A; Giacomini, F; Manzali, M; Maron, G; Salomoni, D; Crimi, G; Zanella, M

    2014-01-01

    A recent trend in scientific computing is the increasingly important role of co-processors, originally built to accelerate graphics rendering, and now used for general high-performance computing. The INFN Computing On Knights and Kepler Architectures (COKA) project focuses on assessing the suitability of co-processor boards for scientific computing in a wide range of physics applications, and on studying the best programming methodologies for these systems. Here we present in a comparative way our results in porting a Lattice Boltzmann code on two state-of-the-art accelerators: the NVIDIA K20X, and the Intel Xeon-Phi. We describe our implementations, analyze results and compare with a baseline architecture adopting Intel Sandy Bridge CPUs.

  18. Digital architecture, wearable computers and providing affinity

    DEFF Research Database (Denmark)

    Guglielmi, Michel; Johannesen, Hanne Louise

    2005-01-01

    will, through research, a workshop and participation in a cumulus competition, focus on the exploration of boundaries between digital architecture, performative space and wearable computers. Our design method in general focuses on the interplay between the performing body and the environment – between...

  19. Large computer systems and new architectures

    International Nuclear Information System (INIS)

    Bloch, T.

    1978-01-01

    The super-computers of today are becoming quite specialized and one can no longer expect to get all the state-of-the-art software and hardware facilities in one package. In order to achieve faster and faster computing it is necessary to experiment with new architectures, and the cost of developing each experimental architecture into a general-purpose computer system is too high when one considers the relatively small market for these computers. The result is that such computers are becoming 'back-ends' either to special systems (BSP, DAP) or to anything (CRAY-1). Architecturally the CRAY-1 is the most attractive today since it guarantees a speed gain of a factor of two over a CDC 7600 thus allowing us to regard any speed up resulting from vectorization as a bonus. It looks, however, as if it will be very difficult to make substantially faster computers using only pipe-lining techniques and that it will be necessary to explore multiple processors working on the same problem. The experience which will be gained with the BSP and the DAP over the next few years will certainly be most valuable in this respect. (Auth.)

  20. Evaluation of Visual Computer Simulator for Computer Architecture Education

    Science.gov (United States)

    Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio

    2013-01-01

    This paper presents trial evaluation of a visual computer simulator in 2009-2011, which has been developed to play some roles of both instruction facility and learning tool simultaneously. And it illustrates an example of Computer Architecture education for University students and usage of e-Learning tool for Assembly Programming in order to…

  1. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
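
    As a toy, hedged illustration of how such an MILP is posed (the four case studies themselves are not reproduced here), the sketch below selects a subset of candidate accelerators to maximize an estimated speedup under an area budget; the PuLP library, the candidate list, and all coefficients are assumptions made for this example.

    ```python
    # Toy MILP: choose accelerators to maximize estimated speedup within an area budget.
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    candidates = {            # name: (estimated speedup contribution, area cost) -- made-up numbers
        "fft_unit":    (1.8, 4.0),
        "crypto_unit": (1.2, 2.5),
        "simd_widen":  (1.5, 3.0),
        "extra_cache": (1.1, 5.0),
    }
    AREA_BUDGET = 7.0

    prob = LpProblem("accelerator_selection", LpMaximize)
    pick = {name: LpVariable(name, cat=LpBinary) for name in candidates}

    prob += lpSum(candidates[n][0] * pick[n] for n in candidates)                 # objective: total speedup
    prob += lpSum(candidates[n][1] * pick[n] for n in candidates) <= AREA_BUDGET  # area constraint

    prob.solve()
    print("selected:", [n for n in candidates if pick[n].value() == 1])
    ```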

  2. ATCA for Machines-- Advanced Telecommunications Computing Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, R.S.; /SLAC

    2008-04-22

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

  3. Smart SOA platforms in cloud computing architectures

    CERN Document Server

    Exposito , Ernesto

    2014-01-01

    This book is intended to introduce the principles of the Event-Driven and Service-Oriented Architecture (SOA 2.0) and its role in the new interconnected world based on the cloud computing architecture paradigm. In this new context, the concept of "service" is widely applied to the hardware and software resources available in the new generation of the Internet. The authors focus on how current and future SOA technologies provide the basis for the smart management of the service model provided by the Platform as a Service (PaaS) layer.

  4. ATCA for Machines-- Advanced Telecommunications Computing Architecture

    International Nuclear Information System (INIS)

    Larsen, R

    2008-01-01

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D, including application of HA principles to power electronics systems.

  5. Computer graphics in architecture and engineering

    Science.gov (United States)

    Greenberg, D. P.

    1975-01-01

    The present status of the application of computer graphics to the building profession or architecture and its relationship to other scientific and technical areas were discussed. It was explained that, due to the fragmented nature of architecture and building activities (in contrast to the aerospace industry), a comprehensive, economic utilization of computer graphics in this area is not practical and its true potential cannot now be realized due to the present inability of architects and structural, mechanical, and site engineers to rely on a common data base. Future emphasis will therefore have to be placed on a vertical integration of the construction process and effective use of a three-dimensional data base, rather than on waiting for any technological breakthrough in interactive computing.

  6. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balance of the loads. The system comprises a plurality of computers each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces for providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces providing each computer with the ability to establish a communications link with another of the computers bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective ones of the computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers and wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message whereby collisions between messages are detected and avoided.
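
    A minimal data-structure sketch may help visualize the split-token idea described above: a small moving portion travels from computer to computer while a larger resident portion stays in one computer's memory and is located through the moving portion. The class and field names below are illustrative assumptions, not the design claimed in the patent.

    ```python
    # Illustrative split-token structures (names and fields are assumptions).
    from dataclasses import dataclass

    @dataclass
    class ResidentPortion:
        data: bytes                 # bulk data used when the token's function executes

    @dataclass
    class MovingPortion:
        function: str               # function the receiving computer should execute
        home_node: int              # computer whose memory holds the resident portion
        resident_key: str           # lookup key for the resident portion on that node

    class Node:
        def __init__(self, node_id):
            self.node_id = node_id
            self.memory = {}        # resident portions stored on this computer

        def store(self, key, data):
            self.memory[key] = ResidentPortion(data)

        def receive(self, token, cluster):
            # Locate the resident half via the moving half, then execute the function.
            resident = cluster[token.home_node].memory[token.resident_key]
            print(f"node {self.node_id}: run {token.function} on {len(resident.data)} bytes")

    cluster = {i: Node(i) for i in range(3)}
    cluster[0].store("matrix_block", b"\x00" * 1024)
    token = MovingPortion("multiply", home_node=0, resident_key="matrix_block")
    cluster[2].receive(token, cluster)      # the token moved to node 2; the data stayed on node 0
    ```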

  7. Roadmap to the SRS computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A.

    1994-07-05

    This document outlines the major steps that must be taken by the Savannah River Site (SRS) to migrate the SRS information technology (IT) environment to the new architecture described in the Savannah River Site Computing Architecture. This document proposes an IT environment that is "...standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship." Achieving this vision will require many substantial changes in the computing applications, systems, and supporting infrastructure at the site. This document consists of a set of roadmaps which provide explanations of the necessary changes for IT at the site and describes the milestones that must be completed to finish the migration.

  8. Use of computed tomography assessed kidney length to predict split renal GFR in living kidney donors

    International Nuclear Information System (INIS)

    Gaillard, Francois; Fournier, Catherine; Leon, Carine; Legendre, Christophe; Pavlov, Patrik; Tissier, Anne-Marie; Correas, Jean-Michel; Harache, Benoit; Hignette, Chantal; Weinmann, Pierre; Eladari, Dominique; Timsit, Marc-Olivier; Mejean, Arnaud; Friedlander, Gerard; Courbebaisse, Marie; Houillier, Pascal

    2017-01-01

    Screening of living kidney donors may require scintigraphy to split glomerular filtration rate (GFR). To determine the usefulness of computed tomography (CT) to split GFR, we compared scintigraphy-split GFR to CT-split GFR. We evaluated CT-split GFR as a screening test to detect scintigraphy-split GFR lower than 40 mL/min/1.73 m²/kidney. This was a monocentric retrospective study on 346 potential living donors who had GFR measurement, renal scintigraphy, and CT. We predicted GFR for each kidney by splitting GFR using the following formula: Volume-split GFR for a given kidney = measured GFR*[volume of this kidney/(volume of this kidney + volume of the opposite kidney)]. The same formula was used for length-split GFR. We compared length- and volume-split GFR to scintigraphy-split GFR at donation and with a 4-year follow-up. A better correlation was observed between length-split GFR and scintigraphy-split GFR (r = 0.92) than between volume-split GFR and scintigraphy-split GFR (r = 0.89). A length-split GFR threshold of 45 mL/min/1.73 m²/kidney had a sensitivity of 100 % and a specificity of 75 % to detect scintigraphy-split GFR less than 40 mL/min/1.73 m²/kidney. Both techniques with their respective thresholds detected living donors with similar eGFR evolution during follow-up. Length-split GFR can be used to detect patients requiring scintigraphy. (orig.)
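
    A worked example of the splitting formula quoted above, applied to kidney length rather than volume, may be useful; the measured GFR and the two lengths below are made-up numbers, while the 40 and 45 mL/min/1.73 m²/kidney thresholds are those reported in the abstract.

    ```python
    # Worked example of length-split GFR (input numbers are illustrative).
    def split_gfr(measured_gfr, this_kidney, other_kidney):
        """Attribute a share of the measured GFR to one kidney by its relative size."""
        return measured_gfr * this_kidney / (this_kidney + other_kidney)

    measured_gfr = 95.0                        # mL/min/1.73 m2, assumed
    left_length, right_length = 11.8, 10.6     # cm, assumed CT-measured kidney lengths

    left_gfr  = split_gfr(measured_gfr, left_length, right_length)
    right_gfr = split_gfr(measured_gfr, right_length, left_length)
    print(f"left: {left_gfr:.1f}  right: {right_gfr:.1f}  (mL/min/1.73 m2)")

    # Screening rule from the abstract: a length-split GFR below 45 flags the donor
    # for confirmatory scintigraphy (whose own threshold is 40).
    print("refer to scintigraphy:", min(left_gfr, right_gfr) < 45)
    ```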

  9. Use of computed tomography assessed kidney length to predict split renal GFR in living kidney donors

    Energy Technology Data Exchange (ETDEWEB)

    Gaillard, Francois; Fournier, Catherine; Leon, Carine; Legendre, Christophe [Paris Descartes University, AP-HP, Hopital Necker-Enfants Malades, Renal Transplantation Department, Paris (France); Pavlov, Patrik [Linkoeping University, Linkoeping (Sweden); Tissier, Anne-Marie; Correas, Jean-Michel [Paris Descartes University, AP-HP, Hopital Necker-Enfants Malades, Radiology Department, Paris (France); Harache, Benoit; Hignette, Chantal; Weinmann, Pierre [Paris Descartes University, AP-HP, Hopital Europeen Georges Pompidou, Nuclear Medicine Department, Paris (France); Eladari, Dominique [Paris Descartes University, and INSERM, Unit 970, AP-HP, Hopital Europeen Georges Pompidou, Physiology Department, Paris (France); Timsit, Marc-Olivier; Mejean, Arnaud [Paris Descartes University, AP-HP, Hopital Europeen Georges Pompidou, Urology Department, Paris (France); Friedlander, Gerard; Courbebaisse, Marie [Paris Descartes University, and INSERM, Unit 1151, AP-HP, Hopital Europeen Georges Pompidou, Physiology Department, Paris (France); Houillier, Pascal [Paris Descartes University, INSERM, Unit umrs1138, and CNRS Unit erl8228, AP-HP, Hopital Europeen Georges Pompidou, Physiology Department, Paris (France)

    2017-02-15

    Screening of living kidney donors may require scintigraphy to split glomerular filtration rate (GFR). To determine the usefulness of computed tomography (CT) to split GFR, we compared scintigraphy-split GFR to CT-split GFR. We evaluated CT-split GFR as a screening test to detect scintigraphy-split GFR lower than 40 mL/min/1.73 m²/kidney. This was a monocentric retrospective study on 346 potential living donors who had GFR measurement, renal scintigraphy, and CT. We predicted GFR for each kidney by splitting GFR using the following formula: Volume-split GFR for a given kidney = measured GFR*[volume of this kidney/(volume of this kidney + volume of the opposite kidney)]. The same formula was used for length-split GFR. We compared length- and volume-split GFR to scintigraphy-split GFR at donation and with a 4-year follow-up. A better correlation was observed between length-split GFR and scintigraphy-split GFR (r = 0.92) than between volume-split GFR and scintigraphy-split GFR (r = 0.89). A length-split GFR threshold of 45 mL/min/1.73 m²/kidney had a sensitivity of 100 % and a specificity of 75 % to detect scintigraphy-split GFR less than 40 mL/min/1.73 m²/kidney. Both techniques with their respective thresholds detected living donors with similar eGFR evolution during follow-up. Length-split GFR can be used to detect patients requiring scintigraphy. (orig.)

  10. Compact, open-architecture computed radiography system

    International Nuclear Information System (INIS)

    Huang, H.K.; Lim, A.; Kangarloo, H.; Eldredge, S.; Loloyan, M.; Chuang, K.S.

    1990-01-01

    Computed radiography (CR) was introduced in 1982, and its basic system design has not changed. Current CR systems have certain limitations: spatial resolution and signal-to-noise ratios are lower than those of screen-film systems, they are complicated and expensive to build, and they have a closed architecture. The authors of this paper designed and implemented a simpler, lower-cost, compact, open-architecture CR system to overcome some of these limitations. The open-architecture system is a manual-load-single-plate reader that can fit on a desk top. Phosphor images are stored on a local disk and can be sent to any other computer through standard interfaces. Any manufacturer's plate can be read with a scanning time of 90 seconds for a 35 x 43-cm plate. The standard pixel size is 174 μm and can be adjusted for higher spatial resolution. The data resolution is 12 bits/pixel over an x-ray exposure range of 0.01-100 mR.

  11. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  12. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  13. Developing a Distributed Computing Architecture at Arizona State University.

    Science.gov (United States)

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  14. A Computational Architecture for Programmable Automation Research

    Science.gov (United States)

    Taylor, Russell H.; Korein, James U.; Maier, Georg E.; Durfee, Lawrence F.

    1987-03-01

    This short paper describes recent work at the IBM T. J. Watson Research Center directed at developing a highly flexible computational architecture for research on sensor-based programmable automation. The system described here has been designed with a focus on dynamic configurability, layered user interfaces and incorporation of sensor-based real time operations into new commands. It is these features which distinguish it from earlier work. The system is currently being implemented at IBM for research purposes and internal use and is an outgrowth of programmable automation research which has been ongoing since 1972 [e.g., 1, 2, 3, 4, 5, 6].

  15. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    Science.gov (United States)

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  16. Computer architecture evaluation for structural dynamics computations: Project summary

    Science.gov (United States)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  17. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  18. Unraveling the hydrodynamics of split root water uptake experiments using CT scanned root architectures and three dimensional flow simulations

    Directory of Open Access Journals (Sweden)

    Nicolai Koebernick

    2015-05-01

    Split root experiments have the potential to disentangle water transport in roots and soil, enabling the investigation of the water uptake pattern of a root system. Interpretation of the experimental data assumes that water flow between the split soil compartments does not occur. Another approach to investigate root water uptake is by numerical simulations combining soil and root water flow depending on the parameterization and description of the root system. Our aim is to demonstrate the synergisms that emerge from combining split root experiments with simulations. We show how growing root architectures derived from temporally repeated X-ray CT scanning can be implemented in numerical soil-plant models. Faba beans were grown with and without split layers and exposed to a single drought period during which plant and soil water status were measured. Root architectures were reconstructed from CT scans and used in the model R-SWMS (root-soil water movement and solute transport) to simulate water potentials in soil and roots in 3D as well as water uptake by growing roots at different depths. CT scans revealed that root development was considerably lower with split layers compared to without. This coincided with a reduction of transpiration, stomatal conductance and shoot growth. Simulated predawn water potentials were lower in the presence of split layers. Simulations showed that this was caused by an increased resistance to vertical water flow in the soil by the split layers. Comparison between measured and simulated soil water potentials proved that the split layers were not perfectly isolating and that redistribution of water from the lower, wetter compartments to the drier upper compartments took place, thus water losses were not equal to the root water uptake from those compartments. Still, the layers increased the resistance to vertical flow which resulted in lower simulated collar water potentials that led to reduced stomatal conductance and

  19. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long standing idea. Indeed the difference in processing performance and energy efficiency between radiation hardened components and COTS components is so important that COTS components are very attractive for use in mass and power constrained systems. However using COTS components in space is not straightforward as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in a first section we will start by recapitulating the interests and constraints of using COTS components for space applications; then we will briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we will describe the prototyping activities executed during the HiP CBC project.

  20. Teaching Computer Organization and Architecture Using Simulation and FPGA Applications

    OpenAIRE

    D. K.M. Al-Aubidy

    2007-01-01

    This paper presents the design concepts and realization of incorporating micro-operation simulation and FPGA implementation into a teaching tool for computer organization and architecture. This teaching tool helps computer engineering and computer science students to be familiarized practically with computer organization and architecture through the development of their own instruction set, computer programming and interfacing experiments. A two-pass assembler has been designed and implemente...

  1. Causes and prevention of splitting/bursting failure of concrete crossties: a computational study

    Science.gov (United States)

    2017-09-17

    Concrete splitting/bursting is a well-known failure mode of concrete crossties that can compromise the crosstie integrity and raise railroad maintenance and track safety concerns. This paper presents a computational study aimed at better understandin...

  2. Computer-Assisted Traffic Engineering Using Assignment, Optimal Signal Setting, and Modal Split

    Science.gov (United States)

    1978-05-01

    Methods of traffic assignment, traffic signal setting, and modal split analysis are combined in a set of computer-assisted traffic engineering programs. The system optimization and user optimization traffic assignments are described. Travel time func...

  3. On Architectural Acoustics Design using Computer Simulation

    DEFF Research Database (Denmark)

    Schmidt, Anne Marie Due; Kirkegaard, Poul Henning

    2004-01-01

    The acoustical quality of a given building, or space within the building, is highly dependent on the architectural design. Architectural acoustics design has in the past been based on simple design rules. However, with a growing complexity in the architectural acoustic and the emergence of potent...

  4. Computation, architectural design and fabrication logic

    DEFF Research Database (Denmark)

    Larsen, Niels Martin

    2016-01-01

    Digital fabrication and digital form generation can change the way different professions interact in relation to the development and construction of architecture. The technologies can provide a more integrated design process and expand the architectural vocabulary. At Aarhus School of Architecture...

  5. An analytical study of a new high performance computer architecture

    Science.gov (United States)

    Tung, Cheng-Hsien

    1993-08-01

    The need for large-scale parallel computer systems has been intensified by the number of applications which require the manipulation of large amounts of data. Many scientific and engineering applications require high-performance parallel computers to accomplish their computational tasks. This paper presents a multiple-bus architecture capable of being scaled to massive dimensions to handle large amounts of data. The memory bandwidth of this architecture is analyzed. The architecture has low latency and high bus bandwidth, and it can enforce cache consistency through bus-snooping cache coherence protocols.

  6. Polymorphous Computing Architecture (PCA) Application Benchmark 1: Three-Dimensional Radar Data Processing

    National Research Council Canada - National Science Library

    Lebak, J

    2001-01-01

    The DARPA Polymorphous Computing Architecture (PCA) program is building advanced computer architectures that can reorganize their computation and communication structures to achieve better overall application performance...

  7. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
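
    The record above describes a two-part discrete event simulation model of service demand against resource constraints. As a rough illustration of the idea, and not the authors' framework, the following Python sketch uses made-up arrival and service rates with a fixed server pool to compare mean request waiting time under two provisioning levels; every parameter value is hypothetical.

```python
import heapq, random

def simulate(num_servers, arrival_rate, service_rate, horizon=10_000.0, seed=1):
    """Minimal discrete-event queueing sketch: requests arrive as a Poisson
    process and compete for a fixed pool of servers (the resource constraint)."""
    random.seed(seed)
    events = [(random.expovariate(arrival_rate), "arrival")]  # (time, kind) min-heap
    busy, queue, waits, now = 0, [], [], 0.0
    while events:
        now, kind = heapq.heappop(events)
        if now > horizon:
            break
        if kind == "arrival":
            queue.append(now)   # remember when the request arrived
            heapq.heappush(events, (now + random.expovariate(arrival_rate), "arrival"))
        else:
            busy -= 1           # a server finished and becomes free
        # start service for queued requests while servers are free
        while queue and busy < num_servers:
            arrived = queue.pop(0)
            waits.append(now - arrived)
            busy += 1
            heapq.heappush(events, (now + random.expovariate(service_rate), "departure"))
    return sum(waits) / len(waits) if waits else 0.0

# toy comparison: a small fixed pool vs. a larger (elastically provisioned) pool
print("mean wait, 4 servers:", simulate(4, arrival_rate=3.0, service_rate=1.0))
print("mean wait, 8 servers:", simulate(8, arrival_rate=3.0, service_rate=1.0))
```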

  8. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design

    International Nuclear Information System (INIS)

    Menges, Achim

    2012-01-01

    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies. (paper)

  9. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design.

    Science.gov (United States)

    Menges, Achim

    2012-03-01

    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies.

  10. A heterogeneous hierarchical architecture for real-time computing

    Energy Technology Data Exchange (ETDEWEB)

    Skroch, D.A.; Fornaro, R.J.

    1988-12-01

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.

  11. Memristor-based nanoelectronic computing circuits and architectures

    CERN Document Server

    Vourkas, Ioannis

    2016-01-01

    This book considers the design and development of nanoelectronic computing circuits, systems and architectures focusing particularly on memristors, which represent one of today’s latest technology breakthroughs in nanoelectronics. The book studies, explores, and addresses the related challenges and proposes solutions for the smooth transition from conventional circuit technologies to emerging computing memristive nanotechnologies. Its content spans from fundamental device modeling to emerging storage system architectures and novel circuit design methodologies, targeting advanced non-conventional analog/digital massively parallel computational structures. Several new results on memristor modeling, memristive interconnections, logic circuit design, memory circuit architectures, computer arithmetic systems, simulation software tools, and applications of memristors in computing are presented. High-density memristive data storage combined with memristive circuit-design paradigms and computational tools applied t...

  12. CAAD: Computer Architecture for Autonomous Driving

    OpenAIRE

    Liu, Shaoshan; Tang, Jie; Zhang, Zhe; Gaudiot, Jean-Luc

    2017-01-01

    We describe the computing tasks involved in autonomous driving, examine existing autonomous driving computing platform implementations. To enable autonomous driving, the computing stack needs to simultaneously provide high performance, low power consumption, and low thermal dissipation, at low cost. We discuss possible approaches to design computing platforms that will meet these needs.

  13. Integrated computer control system architectural overview

    Energy Technology Data Exchange (ETDEWEB)

    Van Arsdall, P.

    1997-06-18

    This overview introduces the NIF Integrated Control System (ICCS) architecture. The design is abstract to allow the construction of many similar applications from a common framework. This summary lays the essential foundation for understanding the model-based engineering approach used to execute the design.

  14. Energy efficiency in Mobile Cloud Computing Architectures

    OpenAIRE

    Le Vinh, Thinh; Pallavali, Reddy; Houacine, Fatiha; Bouzefrane, Samia

    2016-01-01

    Mobile Cloud Computing (MCC) is an emerging and popular mobile technology which uses fully available Cloud Computing services and functionalities. This technology provides rich computational services to the users, network operators and Cloud service providers as well. However, due to users' mobility and high computational operations, consumption of energy is a major issue. Energy efficiency over MCC is needed since 57% of generated energy is used by ICT related devices...

  15. A computational architecture for social agents

    Energy Technology Data Exchange (ETDEWEB)

    Bond, A.H. [California Institute of Technology, Pasadena, CA (United States)

    1996-12-31

    This article describes a new class of information-processing models for social agents. They are derived from primate brain architecture, the processing in brain regions, the interactions among brain regions, and the social behavior of primates. In another paper, we have reviewed the neuroanatomical connections and functional involvements of cortical regions. We reviewed the evidence for a hierarchical architecture in the primate brain. By examining neuroanatomical evidence for connections among neural areas, we were able to establish anatomical regions and connections. We then examined evidence for specific functional involvements of the different neural areas and found some support for hierarchical functioning, not only for the perception hierarchies but also for the planning and action hierarchy in the frontal lobes.

  16. The visual simulators for architecture and computer organization learning

    OpenAIRE

    Nikolić Boško; Grbanović Nenad; Đorđević Jovan

    2009-01-01

    The paper proposes a method for effective distance learning of computer architecture and organization. The proposed method is based on a software system that can be applied to any course in this field. Within this system, students can observe simulations of already created computer systems. The system also provides for the creation and simulation of switch systems.

  17. Mobility-Aware Modeling and Analysis of Dense Cellular Networks With $C$ -Plane/ $U$ -Plane Split Architecture

    KAUST Repository

    Ibrahim, Hazem

    2016-09-19

    The unrelenting increase in the population of mobile users and their traffic demands drive cellular network operators to densify their network infrastructure. Network densification shrinks the footprint of base stations (BSs) and reduces the number of users associated with each BS, leading to an improved spatial frequency reuse and spectral efficiency, and thus, higher network capacity. However, the densification gain comes at the expense of higher handover rates and network control overhead. Hence, user’s mobility can diminish or even nullifies the foreseen densification gain. In this context, splitting the control plane ( C -plane) and user plane ( U -plane) is proposed as a potential solution to harvest densification gain with reduced cost in terms of handover rate and network control overhead. In this paper, we use stochastic geometry to develop a tractable mobility-aware model for a two-tier downlink cellular network with ultra-dense small cells and C -plane/ U -plane split architecture. The developed model is then used to quantify the effect of mobility on the foreseen densification gain with and without C -plane/ U -plane split. To this end, we shed light on the handover problem in dense cellular environments, show scenarios where the network fails to support certain mobility profiles, and obtain network design insights.
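
    The abstract above derives its results analytically with stochastic geometry; as a loose, hypothetical illustration of why densification raises handover rates, the Python sketch below drops base stations as a Poisson point process and counts nearest-cell changes along a straight user trajectory. The densities, path length, and simulation window are invented for the example and are not taken from the paper.

```python
import numpy as np

def handovers_along_path(bs_density, path_len=1000.0, step=1.0, area_half=2000.0, seed=0):
    """Monte-Carlo illustration: drop base stations as a Poisson point process
    and count nearest-BS changes (handovers) as a user walks a straight line."""
    rng = np.random.default_rng(seed)
    area = (2 * area_half) ** 2
    n_bs = rng.poisson(bs_density * area)              # number of BSs in the window
    bs = rng.uniform(-area_half, area_half, size=(n_bs, 2))
    handovers, last = 0, None
    for x in np.arange(0.0, path_len, step):            # user moves along the x-axis
        d = np.hypot(bs[:, 0] - x, bs[:, 1])
        serving = int(np.argmin(d))                      # nearest-BS association
        if last is not None and serving != last:
            handovers += 1
        last = serving
    return handovers

# made-up densities (BSs per square metre): sparse macro tier vs. dense small-cell tier
print("macro tier handovers:", handovers_along_path(2e-6))
print("dense tier handovers:", handovers_along_path(5e-5))
```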

  18. Computational and experimental studies of reassociating RNA/DNA hybrids containing split functionalities.

    Science.gov (United States)

    Afonin, Kirill A; Bindewald, Eckart; Kireeva, Maria; Shapiro, Bruce A

    2015-01-01

    Recently, we developed a novel technique based on RNA/DNA hybrid reassociation that allows conditional activation of different split functionalities inside diseased cells and in vivo. We further expanded this idea to permit simultaneous activation of multiple different functions in a fully controllable fashion. In this chapter, we discuss some novel computational approaches and experimental techniques aimed at the characterization, design, and production of reassociating RNA/DNA hybrids containing split functionalities. We also briefly describe several experimental techniques that can be used to test these hybrids in vitro and in vivo. 2015 Published by Elsevier Inc.

  19. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The approach taken here is first to design a computational structure well suited to a wide range of vision tasks and then to develop parallel algorithms that can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high-speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high-level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  20. Advanced Computing Architectures for Cognitive Processing

    Science.gov (United States)

    2009-07-01

    …for HPRC applications. Models of computation: Ptolemy is a software framework developed at the University of California, Berkeley, and is used for mixing different "models of computation"; a model of computation varies from another mainly in its notion of "time". Ptolemy II is a Java-based framework of concurrent or sequential components; it includes a suite of domains, each of which realizes a model of computation…

  1. Architecture independent environment for developing engineering software on MIMD computers

    Science.gov (United States)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  2. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has been developed. Hybris is a prototype rendering architecture which can be tailored to many specific 3D graphics applications and implemented in various ways. Parallel software implementations for both single- and multi-processor Windows 2000 systems have been demonstrated. Working hardware … as a case study and an application of the Hybris graphics architecture.

  3. Design of Carborane Molecular Architectures via Electronic Structure Computations

    International Nuclear Information System (INIS)

    Oliva, J.M.; Serrano-Andres, L.; Klein, D.J.; Schleyer, P.V.R.; Michl, J.

    2009-01-01

    Quantum-mechanical electronic structure computations were employed to explore initial steps towards a comprehensive design of polycarborane architectures through assembly of molecular units. Aspects considered were (i) the striking modification of geometrical parameters through substitution, (ii) endohedral carboranes and proposed ejection mechanisms for atom/ion/energy storage/transport, (iii) the excited state character in single and dimeric molecular units, and (iv) higher architectural constructs. A goal of this work is to find optimal architectures where atom/ion/energy/spin transport within carborane superclusters is feasible in order to modernize and improve future photo-energy processes.

  4. MOMCC: Market-Oriented Architecture for Mobile Cloud Computing Based on Service Oriented Architecture

    OpenAIRE

    Abolfazli, Saeid; Sanaei, Zohreh; Gani, Abdullah; Shiraz, Muhammad

    2012-01-01

    The vision of augmenting the computing capabilities of mobile devices, especially smartphones, at the least cost is likely to become reality by leveraging cloud computing. Cloud exploitation by mobile devices breeds a new research domain called Mobile Cloud Computing (MCC). However, issues like portability and interoperability should be addressed for mobile augmentation, which is a non-trivial task using component-based approaches. Service Oriented Architecture (SOA) is a promising design philosop...

  5. Endoleak detection using single-acquisition split-bolus dual-energy computer tomography (DECT)

    Energy Technology Data Exchange (ETDEWEB)

    Javor, D.; Wressnegger, A.; Unterhumer, S.; Kollndorfer, K.; Nolz, R.; Beitzke, D.; Loewe, C. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria)

    2017-04-15

    To assess a single-phase, dual-energy computed tomography (DECT) with a split-bolus technique and reconstruction of virtual non-enhanced images for the detection of endoleaks after endovascular aneurysm repair (EVAR). Fifty patients referred for routine follow-up post-EVAR CT and a history of at least one post-EVAR follow-up CT examination using our standard biphasic (arterial and venous phase) routine protocol (which was used as the reference standard) were included in this prospective trial. An in-patient comparison and an analysis of the split-bolus protocol and the previously used double-phase protocol were performed with regard to differences in diagnostic accuracy, radiation dose, and image quality. The analysis showed a significant reduction of radiation dose of up to 42 %, using the single-acquisition split-bolus protocol, while maintaining a comparable diagnostic accuracy (primary endoleak detection rate of 96 %). Image quality between the two protocols was comparable and only slightly inferior for the split-bolus scan (2.5 vs. 2.4). Using the single-acquisition, split-bolus approach allows for a significant dose reduction while maintaining high image quality, resulting in effective endoleak identification. (orig.)

  6. Memory architectures for exaflop computing systems

    OpenAIRE

    Pavlović, Milan

    2016-01-01

    Most computing systems are heavily dependent on their main memories, as their primary storage, or as an intermediate cache for slower storage systems (HDDs). The capacity of memory systems, as well as their performance, has a direct impact on the overall computing capabilities of the system, and is also a major contributor to its initial and operating costs. Dynamic Random Access Memory (DRAM) technology has been dominating the main memory landscape since its beginnings in the 1970s until today. ...

  7. Computer architecture for solving consistent labeling problems

    Energy Technology Data Exchange (ETDEWEB)

    Ullmann, J.R.; Haralick, R.M.; Shapiro, L.G.

    1982-01-01

    Consistent labeling problems are a family of NP-complete constraint satisfaction problems, such as school timetabling, for which a conventional computer may be too slow. There are a variety of techniques for reducing the elapsed time to find one or all solutions to a consistent labeling problem. The paper discusses and illustrates solutions consisting of special hardware to accomplish the required constraint propagation and an asynchronous network of intercommunicating computers to accomplish the tree search in parallel. 5 references.
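
    The record describes hardware-assisted constraint propagation plus a parallel tree search; as a purely sequential sketch of the underlying algorithmic idea (look-ahead propagation interleaved with backtracking), the following Python example solves a toy labeling problem. The problem instance and the `constraint` interface are invented for illustration.

```python
def consistent_labeling(units, labels, constraint):
    """Sequential sketch of the tree search that the paper proposes to run on
    special constraint-propagation hardware and a network of computers."""
    def propagate(domains):
        # discard labels with no compatible label in some other unit (look-ahead)
        changed = True
        while changed:
            changed = False
            for u in units:
                for lab in list(domains[u]):
                    for v in units:
                        if v != u and not any(constraint(u, lab, v, m) for m in domains[v]):
                            domains[u].discard(lab)
                            changed = True
                            break
        return all(domains[u] for u in units)

    def search(assignment, domains):
        if len(assignment) == len(units):
            return assignment
        u = next(x for x in units if x not in assignment)
        for lab in sorted(domains[u]):
            new_domains = {v: set(d) for v, d in domains.items()}
            new_domains[u] = {lab}
            if propagate(new_domains):
                result = search({**assignment, u: lab}, new_domains)
                if result:
                    return result
        return None

    return search({}, {u: set(labels) for u in units})

# toy "timetabling": three classes, three slots, every pair of classes must differ
classes = ["c1", "c2", "c3"]
print(consistent_labeling(classes, [1, 2, 3],
                          lambda u, a, v, b: a != b or u == v))
```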

  8. Field-programmable custom computing technology architectures, tools, and applications

    CERN Document Server

    Luk, Wayne; Pocek, Ken

    2000-01-01

    Field-Programmable Custom Computing Technology: Architectures, Tools, and Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. In seven selected chapters, the book describes the latest advances in architectures, design methods, and applications of field-programmable devices for high-performance reconfigurable systems. The contributors to this work were selected from the leading researchers and practitioners in the field. It will be valuable to anyone working or researching in the field of custom computing technology. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.

  9. Layered Architectures for Quantum Computers and Quantum Repeaters

    Science.gov (United States)

    Jones, Nathan C.

    This chapter examines how to organize quantum computers and repeaters using a systematic framework known as layered architecture, where machine control is organized in layers associated with specialized tasks. The framework is flexible and could be used for analysis and comparison of quantum information systems. To demonstrate the design principles in practice, we develop architectures for quantum computers and quantum repeaters based on optically controlled quantum dots, showing how a myriad of technologies must operate synchronously to achieve fault-tolerance. Optical control makes information processing in this system very fast, scalable to large problem sizes, and extendable to quantum communication.

  10. Computational architecture for integrated controls and structures design

    Science.gov (United States)

    Belvin, W. Keith; Park, K. C.

    1989-01-01

    To facilitate the development of control structure interaction (CSI) design methodology, a computational architecture for interdisciplinary design of active structures is presented. The emphasis of the computational procedure is to exploit existing sparse matrix structural analysis techniques, in-core data transfer with control synthesis programs, and versatility in the optimization methodology to avoid unnecessary structural or control calculations. The architecture is designed such that all required structure, control and optimization analyses are performed within one program. Hence, the optimization strategy is not unduly constrained by cold starts of existing structural analysis and control synthesis packages.

  11. Centaure: an heterogeneous parallel architecture for computer vision

    International Nuclear Information System (INIS)

    Peythieux, Marc

    1997-01-01

    This dissertation deals with the architecture of parallel computers dedicated to computer vision. In the first chapter, the problem to be solved is presented, as well as the architecture of the Sympati and Symphonie computers, on which this work is based. The second chapter covers the state of the art of computers and integrated processors that can execute computer vision and image processing codes. The third chapter contains a description of the architecture of Centaure. It has a heterogeneous structure: it is composed of a multiprocessor system based on the Analog Devices ADSP21060 Sharc digital signal processor, and of a set of Symphonie computers working in a multi-SIMD fashion. Centaure also has a modular structure. Its basic node is composed of one Symphonie computer, tightly coupled to a Sharc thanks to a dual-ported memory. The nodes of Centaure are linked together by the Sharc communication links. The last chapter deals with a performance validation of Centaure. The execution times on Symphonie and on Centaure of a benchmark which is typical of industrial vision are presented and compared. In the first place, these results show that the basic node of Centaure allows a faster execution than Symphonie, and that increasing the size of the tested computer leads to a better speed-up with Centaure than with Symphonie. In the second place, these results validate the choice of running the low-level structure of Centaure in a multi-SIMD fashion. (author)

  12. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has allowed, until now, the development of continuous improvements, as predicted by Moore's law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like ("neuromorphic") computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS-based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: the development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully "neuromorphic" computer. To address this challenge, the following issues were considered: (1) the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; (2) the new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; (3) the device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and (4) comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  13. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  14. Experimental comparison of two quantum computing architectures.

    Science.gov (United States)

    Linke, Norbert M; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A; Wright, Kenneth; Monroe, Christopher

    2017-03-28

    We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programmed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future.

  15. Deep architectures for Human Computer Interaction

    NARCIS (Netherlands)

    Noulas, A.K.; Kröse, B.J.A.

    2008-01-01

    In this work we present the application of Conditional Restricted Boltzmann Machines in Human Computer Interaction. These provide a well suited framework to model the complex temporal patterns produced from humans in the audio and video modalities. They can be trained in a semisupervised fashion and

  16. Architecture and design frame work for on board computer control ...

    African Journals Online (AJOL)

    Architecture and design frame work for on board computer control and data management of satellite systems. ECN Okafor, CE Okoro, JI Ejimanya. Abstract. No Abstract. International Journal of Natural and Applied Sciences Vol. 4 (3) 2008: pp. 139-145.

  17. Know Your Personal Computer-The CPU Base-Architecture

    Indian Academy of Sciences (India)

    Know Your Personal Computer - The CPU Base-Architecture. Siddhartha Kumar Ghoshal. Series Article. Resonance – Journal of Science Education, Volume 1, Issue 7, July 1996, pp. 15-22.

  18. CMS on the GRID: Toward a fully distributed computing architecture

    International Nuclear Information System (INIS)

    Innocente, Vincenzo

    2003-01-01

    The computing systems required to collect, analyse and store the physics data at LHC would need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further test and deployment of a production grid are also described

  19. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus-architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  20. The Architectural Designs of a Nanoscale Computing Model

    Directory of Open Access Journals (Sweden)

    Mary M. Eshaghian-Wilner

    2004-08-01

    Full Text Available A generic nanoscale computing model is presented in this paper. The model consists of a collection of fully interconnected nanoscale computing modules, where each module is a cube of cells made out of quantum dots, spins, or molecules. The cells dynamically switch between two states by quantum interactions among their neighbors in all three dimensions. This paper includes a brief introduction to the field of nanotechnology from a computing point of view and presents a set of preliminary architectural designs for fabricating the nanoscale model studied.

  1. Computational Strategies for the Architectural Design of Bending Active Structures

    DEFF Research Database (Denmark)

    Tamke, Martin; Nicholas, Paul

    2013-01-01

    Active bending introduces a new level of integration into the design of architectural structures, and opens up new complexities for the architectural design process. In particular, the introduction of material variation reconfigures the design space. Through the precise specification of their stiffness, it is possible to control and pre-calibrate the bending behaviour of a composite element. This material capacity challenges architecture's existing methods for design, specification and prediction. In this paper, we demonstrate how architects might connect the designed nature of composites with the design of bending-active structures, through computational strategies. We report three built structures that develop architecturally oriented design methods for bending-active systems using composite materials. These projects demonstrate the application and limits of the introduction of advanced...

  2. Computational Strategies for the Architectural Design of Bending Active Structures

    DEFF Research Database (Denmark)

    Tamke, Martin; Nicholas, Paul

    2013-01-01

    Active bending introduces a new level of integration into the design of architectural structures, and opens up new complexities for the architectural design process. In particular, the introduction of material variation reconfigures the design space. Through the precise specification of their stiffness, it is possible to control and pre-calibrate the bending behaviour of a composite element. This material capacity challenges architecture's existing methods for design, specification and prediction. In this paper, we demonstrate how architects might connect the designed nature of composites with the design of bending-active structures, through computational strategies. We report three built structures that develop architecturally oriented design methods for bending-active systems using composite materials. These projects demonstrate the application and limits of the introduction of advanced...

  3. Using EDUCache Simulator for the Computer Architecture and Organization Course

    Directory of Open Access Journals (Sweden)

    Sasko Ristov

    2013-07-01

    Full Text Available The computer architecture and organization course is essential in all computer science and engineering programs, and the most selected and liked elective course for related engineering disciplines. However, the attractiveness brings a new challenge: it requires a lot of effort by the instructor to explain rather complicated concepts to beginners or to those who study related disciplines. The usage of visual simulators can improve both the teaching and learning processes. The overall goal is twofold: 1) to enable a visual environment to explain the basic concepts and 2) to increase the students' willingness and ability to learn the material. A lot of visual simulators have been used for the computer architecture and organization course. However, due to the lack of visual simulators for simulation of cache memory concepts, we have developed a new visual simulator, the EDUCache simulator. In this paper we show that it can be effectively and efficiently used as a supporting tool in the learning process of modern multi-layer, multi-cache and multi-core multi-processors. EDUCache's features enable an environment for performance evaluation and engineering of software systems, i.e. the students will also understand the importance of computer architecture building parts and, hopefully, will increase their curiosity for hardware courses in general.
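
    To give a flavour of what such a teaching simulator computes, here is a minimal Python sketch of a direct-mapped cache model (not EDUCache itself); the line count, block size, and the two access patterns are arbitrary choices for the example.

```python
def simulate_cache(addresses, num_lines=4, block_size=16):
    """Toy direct-mapped cache model in the spirit of a teaching simulator:
    report hits and misses for a sequence of byte addresses."""
    tags = [None] * num_lines          # one tag per cache line, initially empty
    hits = misses = 0
    for addr in addresses:
        block = addr // block_size     # which memory block the address falls in
        index = block % num_lines      # direct mapping: block -> line
        tag = block // num_lines
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag          # fill the line on a miss
    return hits, misses

# a sequential walk reuses each block; a large stride conflicts on a single line
sequential = list(range(0, 256, 4))
strided    = list(range(0, 4096, 64))
print("sequential (hits, misses):", simulate_cache(sequential))
print("strided    (hits, misses):", simulate_cache(strided))
```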

  4. OS friendly microprocessor architecture: Hardware level computer security

    Science.gov (United States)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time. Conventional microprocessors have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high performance and secure microprocessor and OS system. We are interested in cyber security, information technology (IT), and SCADA control professionals reviewing the hardware level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline provides for background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permissions bits to each cache memory bank and memory address, the OSFA provides hardware level computer security.
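
    As a software analogy of the idea of attaching permission bits to cache banks (the OSFA itself is a hardware design, and this sketch is not taken from the patent), the following Python fragment models banks that check an access against their own read/write/execute bits before serving it.

```python
# permission bits, loosely mirroring Unix rwx semantics extended to cache banks
READ, WRITE, EXECUTE = 0b100, 0b010, 0b001

class CacheBank:
    """Toy model of a cache bank that carries its own permission bits,
    so an access is checked before any data is returned or written."""
    def __init__(self, bank_id, permissions):
        self.bank_id = bank_id
        self.permissions = permissions
        self.lines = {}

    def access(self, address, kind):
        needed = {"read": READ, "write": WRITE, "execute": EXECUTE}[kind]
        if not (self.permissions & needed):
            raise PermissionError(f"{kind} denied on bank {self.bank_id}")
        if kind == "write":
            self.lines[address] = "dirty"
        return self.lines.get(address)

code_bank = CacheBank(0, READ | EXECUTE)   # instruction bank: writes not allowed
data_bank = CacheBank(1, READ | WRITE)     # data bank: execution not allowed

data_bank.access(0x40, "write")            # permitted
try:
    code_bank.access(0x40, "write")        # blocked by the bank's permission bits
except PermissionError as err:
    print(err)
```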

  5. Lightgrid-an agile distributed computing architecture for Geant4

    International Nuclear Information System (INIS)

    Young, Jason; Perry, John O.; Jevremovic, Tatjana

    2010-01-01

    A light weight grid based computing architecture has been developed to accelerate Geant4 computations on a variety of network architectures. This new software is called LightGrid. LightGrid has a variety of features designed to overcome current limitations on other grid based computing platforms, more specifically, smaller network architectures. By focusing on smaller, local grids, LightGrid is able to simplify the grid computing process with minimal changes to existing Geant4 code. LightGrid allows for integration between Geant4 and MySQL, which both increases flexibility in the grid as well as provides a faster, reliable, and more portable method for accessing results than traditional data storage systems. This unique method of data acquisition allows for more fault tolerant runs as well as instant results from simulations as they occur. The performance increases brought along by using LightGrid allow simulation times to be decreased linearly. LightGrid also allows for pseudo-parallelization with minimal Geant4 code changes.

  6. Improving Software Performance in the Compute Unified Device Architecture

    Directory of Open Access Journals (Sweden)

    Alexandru PIRJAN

    2010-01-01

    Full Text Available This paper analyzes several aspects regarding the improvement of software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management through transpose kernels. We also benchmark and evaluate the performance of progressively optimizing a matrix-transposing application in CUDA. One particular interest was to research how well the optimization techniques, applied to software applications written in CUDA, scale to the latest generation of general-purpose graphics processing units (GPGPUs), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately, there has been a lot of interest in the literature for this type of optimization analysis, but none of the works so far (to the best of our knowledge) tried to validate whether the optimizations can apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales to these software performance improving techniques.
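
    The kernels discussed in the record are CUDA code; the short Python/NumPy sketch below only illustrates the tiling idea behind an optimized transpose (on a GPU the tile would sit in shared memory so that both reads and writes stay coalesced). The tile size and matrix shape are arbitrary.

```python
import numpy as np

def transpose_tiled(a, tile=32):
    """CPU illustration of the blocking idea behind an optimized GPU transpose:
    copy the matrix tile by tile so that reads and writes both stay local."""
    rows, cols = a.shape
    out = np.empty((cols, rows), dtype=a.dtype)
    for i in range(0, rows, tile):
        for j in range(0, cols, tile):
            block = a[i:i + tile, j:j + tile]
            # write the transposed tile into the corresponding region of the output
            out[j:j + block.shape[1], i:i + block.shape[0]] = block.T
    return out

a = np.arange(1024 * 768, dtype=np.float32).reshape(1024, 768)
assert np.array_equal(transpose_tiled(a), a.T)
```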

  7. Nanotube devices based crossbar architecture: toward neuromorphic computing

    International Nuclear Information System (INIS)

    Zhao, W S; Gamrat, C; Agnus, G; Derycke, V; Filoramo, A; Bourgoin, J-P

    2010-01-01

    Nanoscale devices such as carbon nanotube- and nanowire-based transistors, memristors and molecular devices are expected to play an important role in the development of new computing architectures. While their size represents a decisive advantage in terms of integration density, it also raises the critical question of how to efficiently address large numbers of densely integrated nanodevices without the need for complex multi-layer interconnection topologies similar to those used in CMOS technology. Two-terminal programmable devices in crossbar geometry seem particularly attractive, but suffer from severe addressing difficulties due to cross-talk, which implies complex programming procedures. Three-terminal devices can be easily addressed individually, but with limited gain in terms of interconnect integration. We show how optically gated carbon nanotube devices enable efficient individual addressing when arranged in a crossbar geometry with shared gate electrodes. This topology is particularly well suited for parallel programming or learning in the context of neuromorphic computing architectures.

  8. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  9. Towards Energy-Centric Computing and Computer Architecture

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    Technology forecasts indicate that device scaling will continue well into the next decade.  Unfortunately, it is becoming extremely difficult to harness this increase in the number of transistors into performance due to a number of technological, circuit, architectural, methodological and  programming challenges.In this talk, I will argue that the key emerging showstopper is power.  Voltage scaling as a means to maintain a constant power envelope with an increase in transistor  numbers is hitting diminishing returns. As such, to continue riding the Moore's law we need to look  for drastic measures to cut power. This is definitely the case for server chips in future datacenters, where abundant server parallelism, redundancy and 3D chip integration are likely to remove  programming, reliability and bandwidth hurdles, leaving power as the only true limiter.I will present  results backing this argument based on validated models for f...

  10. Parallel algorithms and architecture for computation of manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), the O(n^2), and the O(n^3) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log^2 n) and O(n^4), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n^3) serial algorithms. Parallel computation of the O(n^3) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.

  11. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    National Research Council Canada - National Science Library

    Skjellum, Anthony

    2004-01-01

    Polymorphous Computing Architectures (PCA) rapidly "morph" (reorganize) software and hardware configurations in order to achieve high performance on computation styles ranging from specialized streaming to general threaded applications...

  12. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  13. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining the computers for High Energy Physics computing. The COMEX system requires an IBM AT personal computer, or compatible, with more than 640 Kb of RAM and a hard disk. 5 refs.; 9 figs.
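
    As a generic illustration of the blackboard pattern mentioned in the record (not the COMEX system itself), the following Python sketch lets independent knowledge sources post partial conclusions to a shared blackboard under a simple control loop; the facts and rules are invented for the example.

```python
class Blackboard:
    """Shared data store that knowledge sources read from and write to."""
    def __init__(self, facts):
        self.facts = dict(facts)

def ks_memory(bb):
    # toy knowledge source: derive a RAM recommendation from the workload size
    if "events_per_run" in bb.facts and "ram_mb" not in bb.facts:
        bb.facts["ram_mb"] = max(640, bb.facts["events_per_run"] // 1000)
        return True
    return False

def ks_cpu(bb):
    # toy knowledge source: pick a machine class once the RAM estimate exists
    if "ram_mb" in bb.facts and "cpu_class" not in bb.facts:
        bb.facts["cpu_class"] = "workstation" if bb.facts["ram_mb"] > 1024 else "PC"
        return True
    return False

def control_loop(bb, sources):
    """Simple control cycle: keep firing whichever knowledge source can act."""
    progress = True
    while progress:
        progress = any(ks(bb) for ks in sources)
    return bb.facts

bb = Blackboard({"events_per_run": 2_000_000})
print(control_loop(bb, [ks_cpu, ks_memory]))
```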

  14. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    Directory of Open Access Journals (Sweden)

    P. O. Umenne

    2012-12-01

    Full Text Available Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and, when the task terminates, send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. Swarm and HYDRA computer architectures for Agents' execution were developed at the University of Surrey, UK in the 90s. The objective of the research was to develop a software-based computer architecture on which Agents' execution could be explored. The combination of Intelligent Agents and the HYDRA computer architecture gave rise to a new computer concept: the NET-Computer, in which the computing resources reside on the Internet. The Internet computers form the hardware and software resources, and the user is provided with a simple interface to access the Internet and run user tasks. The Agents autonomously roam the Internet (the NET-Computer) executing the tasks. A growing segment of the Internet is E-Commerce for online shopping for products and services. The Internet computing resources provide a marketplace for product suppliers and consumers alike. Consumers are looking for suppliers selling products and services, while suppliers are looking for buyers. Searching the vast amount of information available on the Internet causes a great deal of problems for both consumers and suppliers. Intelligent Agents executing on the NET-Computer can surf through the Internet and select specific information of interest to the user. The simulation results show that Intelligent Agents executing on the HYDRA computer architecture could be applied in E-Commerce.

  15. Contagious architecture: computation, aesthetics, and space (technologies of lived abstraction)

    CERN Document Server

    Parisi, Luciana

    2013-01-01

    In Contagious Architecture, Luciana Parisi offers a philosophical inquiry into the status of the algorithm in architectural and interaction design. Her thesis is that algorithmic computation is not simply an abstract mathematical tool but constitutes a mode of thought in its own right, in that its operation extends into forms of abstraction that lie beyond direct human cognition and control. These include modes of infinity, contingency, and indeterminacy, as well as incomputable quantities underlying the iterative process of algorithmic processing. The main philosophical source for the project is Alfred North Whitehead, whose process philosophy is specifically designed to provide a vocabulary for "modes of thought" exhibiting various degrees of autonomy from human agency even as they are mobilized by it. Because algorithmic processing lies at the heart of the design practices now reshaping our world -- from the physical spaces of our built environment to the networked spaces of digital culture -- the nature o...

  16. Computational Swarming: A Cultural Technique for Generative Architecture

    Directory of Open Access Journals (Sweden)

    Sebastian Vehlken

    2014-11-01

    Full Text Available After a first wave of digital architecture in the 1990s, the last decade saw some approaches where agent-based modelling and simulation (ABM) was used for generative strategies in architectural design. By taking advantage of the self-organisational capabilities of computational agent collectives, whose global behaviour emerges from the local interaction of a large number of relatively simple individuals (as it does, for instance, in animal swarms), architects are able to understand buildings and urbanscapes in a novel way: as complex spaces that are constituted by the movement of multiple material and informational elements. As a major, zoo-technological branch of ABM, Computational Swarm Intelligence (SI) coalesces all kinds of architectural elements – materials, people, environmental forces, traffic dynamics, etc. – into a collective population. Thereby, SI and ABM initiate a shift from geometric or parametric planning to time-based and less prescriptive software tools. Agent-based applications of this sort are used to model solution strategies in a number of areas where opaque and complex problems present themselves – from epidemiology to logistics, and from market simulations to crowd control. This article seeks to conceptualise SI and ABM as a fundamental and novel cultural technique for governing dynamic processes, taking their employment in generative architectural design as a concrete example. In order to avoid a rather conventional application of philosophical theories to this field, the paper explores how the procedures of such technologies can be understood in relation to the media-historical concept of Cultural Techniques.
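
    As a minimal, hypothetical example of the kind of agent collective described above, the Python sketch below moves point agents under simple cohesion and separation rules until a clustered pattern emerges; the rule weights and agent count are arbitrary, and the output is only a list of positions that a designer might then interpret as a layout seed.

```python
import random

def flock(n_agents=40, steps=200, cohesion=0.01, separation=0.5, seed=7):
    """Minimal agent-based sketch: points drift toward the swarm centroid and
    repel one another when too close; the emergent pattern could seed a layout."""
    random.seed(seed)
    pts = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(n_agents)]
    for _ in range(steps):
        cx = sum(p[0] for p in pts) / n_agents
        cy = sum(p[1] for p in pts) / n_agents
        for p in pts:
            # cohesion: drift toward the swarm centroid
            p[0] += cohesion * (cx - p[0])
            p[1] += cohesion * (cy - p[1])
            # separation: push away from any agent closer than 5 units
            for q in pts:
                if q is not p:
                    dx, dy = p[0] - q[0], p[1] - q[1]
                    d2 = dx * dx + dy * dy
                    if 0 < d2 < 25.0:
                        p[0] += separation * dx / d2
                        p[1] += separation * dy / d2
    return pts

positions = flock()
print(f"{len(positions)} agent positions, e.g. {positions[0]}")
```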

  17. DIRAC: A Scalable Lightweight Architecture for High Throughput Computing

    CERN Document Server

    Garonne, V; Stokes-Rees, I

    2004-01-01

    DIRAC (Distributed Infrastructure with Remote Agent Control) has been developed by the CERN LHCb physics experiment to facilitate large scale simulation and user analysis tasks spread across both grid and non-grid computing resources. It consists of a small set of distributed stateless Core Services, which are centrally managed, and Agents which are managed by each computing site. DIRAC utilizes concepts from existing distributed computing models to provide a lightweight, robust, and flexible system. This paper will discuss the architecture, performance, and implementation of the DIRAC system which has recently been used for an intensive physics simulation involving more than forty sites, 90 TB of data, and in excess of one thousand 1 GHz processor-years.

  18. Client-server computer architecture saves costs and eliminates bottlenecks

    International Nuclear Information System (INIS)

    Darukhanavala, P.P.; Davidson, M.C.; Tyler, T.N.; Blaskovich, F.T.; Smith, C.

    1992-01-01

    This paper reports that a workstation-based, client-server architecture saved costs and eliminated bottlenecks that BP Exploration (Alaska) Inc. experienced with mainframe computer systems. In 1991, BP embarked on an ambitious project to change technical computing for its Prudhoe Bay, Endicott, and Kuparuk operations on Alaska's North Slope. This project promised substantial rewards, but also involved considerable risk. The project plan called for reservoir simulations (which historically had run on a Cray Research Inc. X-MP supercomputer in the company's Houston data center) to be run on small computer workstations. Additionally, large Prudhoe Bay, Endicott, and Kuparuk production and reservoir engineering databases and related applications would be moved to workstations, replacing a Digital Equipment Corp. VAX cluster in Anchorage.

  19. Biomorphic Multi-Agent Architecture for Persistent Computing

    Science.gov (United States)

    Lodding, Kenneth N.; Brewster, Paul

    2009-01-01

    A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more component(s) of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.

  20. ARCHITECTURE OF WEB BASED COMPUTER-AIDED MANUFACTURING SYSTEM

    Directory of Open Access Journals (Sweden)

    N. E. Filyukov

    2014-09-01

    Full Text Available The paper deals with the design of a web-based system for Computer-Aided Manufacturing (CAM). Remote applications and databases located in the "private cloud" are proposed as the basis of such a system. The suggested approach includes a service-oriented architecture, the use of web applications and web services as modules, multi-agent technologies for implementing the information exchange functions between the components of the system, and the use of a PDM system for managing technology projects within the CAM. The proposed architecture involves converting the CAM into a corporate information system that will provide coordinated functioning of subsystems based on a common information space, parallelize collective work on technology projects, and provide effective control of production planning. A system has been developed within this architecture that makes it relatively simple to connect technological subsystems and to implement their interaction. The system makes it possible to produce a CAM configuration for a particular company from the set of developed subsystems and databases, specifying appropriate access rights for employees of the company. The proposed approach simplifies maintenance of software and information support for CAM subsystems due to their central location in the data center. The results can be used as a basis for CAM design and testing within the learning process for development and modernization of the system algorithms, and then can be tested in the extended enterprise.

  1. Exploring Hardware-Based Primitives to Enhance Parallel Security Monitoring in a Novel Computing Architecture

    National Research Council Canada - National Science Library

    Mott, Stephen

    2007-01-01

    .... In doing this, we propose a novel computing architecture, derived from a contemporary shared memory architecture, that facilitates efficient security-related monitoring in real-time, while keeping...

  2. Investigation of Various Mesh Architectures With Broadcast Buses for High-Performance Computing

    OpenAIRE

    Sotirios G. Ziavras

    1999-01-01

    Extensive comparative analysis is carried out of various mesh-connected architectures that contain sparse broadcast buses for low-cost, high-performance parallel computing. The two basic architectures differ in the implementation of bus intersections. The first architecture simply allows row/column bus crossovers, whereas the second architecture implements such intersections with switches that introduce further flexibility. Both architectures have lower cost than the mesh with multiple broadc...

  3. Computing Architecture of the ALICE Detector Control System

    CERN Document Server

    Augustinus, A; Moreno, A; Kurepin, A N; De Cataldo, G; Pinazza, O; Rosinský, P; Lechman, M; Jirdén, L S

    2011-01-01

    The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, mechanisms for handling the large data amounts and information exchange with external systems. One of the key operational requirements is an intuitive, error proof and robust user interface allowing for simple operation of the experiment. At the same time the typical operator task, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.

  4. An ATLAS distributed computing architecture for HL-LHC

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2017-01-01

    The ATLAS collaboration started a process to understand the computing needs for the High Luminosity LHC era. Based on our best understanding of the computing model input parameters for the HL-LHC data taking conditions, results indicate the need for a larger amount of computational and storage resources with respect to the projection of a constant yearly budget for computing in 2026. Filling the gap between the projection and the needs will be one of the challenges in preparation for LHC Run-4. While the gains from improvements in offline software will play a crucial role in this process, a different model for data processing, management, access and bookkeeping should also be envisaged to optimise resource usage. In this contribution we will describe a straw man of this model, founded on basic principles such as single event level granularity for data processing and virtual data. We will explain how the current architecture will evolve adiabatically into the future distributed computing system, through the prot...

  5. Silicon CMOS architecture for a spin-based quantum computer.

    Science.gov (United States)

    Veldhorst, M; Eenink, H G J; Yang, C H; Dzurak, A S

    2017-12-15

    Recent advances in quantum error correction codes for fault-tolerant quantum computing and physical realizations of high-fidelity qubits in multiple platforms give promise for the construction of a quantum computer based on millions of interacting qubits. However, the classical-quantum interface remains a nascent field of exploration. Here, we propose an architecture for a silicon-based quantum computer processor based on complementary metal-oxide-semiconductor (CMOS) technology. We show how a transistor-based control circuit together with charge-storage electrodes can be used to operate a dense and scalable two-dimensional qubit system. The qubits are defined by the spin state of a single electron confined in quantum dots, coupled via exchange interactions, controlled using a microwave cavity, and measured via gate-based dispersive readout. We implement a spin qubit surface code, showing the prospects for universal quantum computation. We discuss the challenges and focus areas that need to be addressed, providing a path for large-scale quantum computing.

  6. Computational fluid dynamics study of the variable-pitch split-blade fan concept

    Science.gov (United States)

    Kepler, C. E.; Elmquist, A. R.; Davis, R. L.

    1992-01-01

    A computational fluid dynamics study was conducted to evaluate the feasibility of the variable-pitch split-blade supersonic fan concept. This fan configuration was conceived as a means to enable a supersonic fan to switch from the supersonic through-flow type of operation at high speeds to a conventional fan with subsonic inflow and outflow at low speeds. During this off-design, low-speed mode of operation, the fan would operate with a substantial static pressure rise across the blade row like a conventional transonic fan; the front (variable-pitch) blade would be aligned with the incoming flow, and the aft blade would remain fixed in the position set by the supersonic design conditions. Because of these geometrical features, this low speed configuration would inherently have a large amount of turning and, thereby, would have the potential for a large total pressure increase in a single stage. Such a high-turning blade configuration is prone to flow separation; it was hoped that the channeling of the flow between the blades would act like a slotted wing and help alleviate this problem. A total of 20 blade configurations representing various supersonic and transonic configurations were evaluated using a Navier Stokes CFD program called ADAPTNS because of its adaptive grid features. The flow fields generated by this computational procedure were processed by another data reduction program which calculated average flow properties and simulated fan performance. These results were employed to make quantitative comparisons and evaluations of blade performance. The supersonic split-blade configurations generated performance comparable to a single-blade supersonic, through-flow fan configuration. Simulated rotor total pressure ratios of the order of 2.5 or better were achieved for Mach 2.0 inflow conditions. The corresponding fan efficiencies were approximately 75 percent or better. The transonic split-blade configurations having large amounts of turning were able to

  7. Using a software-defined computer in teaching the basics of computer architecture and operation

    Science.gov (United States)

    Kosowska, Julia; Mazur, Grzegorz

    2017-08-01

    The paper describes the concept and implementation of the SDC_One software-defined computer designed for experimental and didactic purposes. Equipped with extensive hardware monitoring mechanisms, the device enables the students to monitor the computer's operation on a bus-transfer-cycle or instruction-cycle basis, providing a practical illustration of the basic aspects of a computer's operation. In the paper, we describe the hardware monitoring capabilities of SDC_One and some scenarios of using it in teaching the basics of computer architecture and microprocessor operation.
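
    The same kind of cycle-level visibility can be illustrated in software. The sketch below (Python) is a toy accumulator machine with an invented instruction set and trace format, not the SDC_One hardware; it simply prints one line per instruction cycle and per bus transfer, which is the style of monitoring the device provides:

    # Toy accumulator machine: prints a trace line for every instruction cycle
    # and every memory (bus) transfer. Invented ISA, not the SDC_One design.
    PROGRAM = [("LOAD", 10), ("ADD", 11), ("STORE", 12), ("HALT", 0)]
    MEMORY = {10: 7, 11: 35, 12: 0}

    def run(program, mem):
        acc, pc = 0, 0
        while True:
            op, arg = program[pc]                          # instruction fetch
            print(f"cycle: pc={pc} fetch {op} {arg}")      # monitoring hook
            pc += 1
            if op == "LOAD":
                acc = mem[arg]
                print(f"  bus read  mem[{arg}] -> acc={acc}")
            elif op == "ADD":
                acc += mem[arg]
                print(f"  bus read  mem[{arg}],   acc={acc}")
            elif op == "STORE":
                mem[arg] = acc
                print(f"  bus write mem[{arg}] <- acc={acc}")
            elif op == "HALT":
                print("  halted")
                break
        return acc, mem

    run(PROGRAM, MEMORY)   # final accumulator value: 42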

  8. Computer aided design of architecture of degradable tissue engineering scaffolds.

    Science.gov (United States)

    Heljak, M K; Kurzydlowski, K J; Swieszkowski, W

    2017-11-01

    One important factor affecting the process of tissue regeneration is scaffold stiffness loss, which should be properly balanced with the rate of tissue regeneration. The aim of the research reported here was to develop a computer tool for designing the architecture of biodegradable scaffolds fabricated by melt-dissolution deposition systems (e.g. Fused Deposition Modeling) to provide the required scaffold stiffness at each stage of degradation/regeneration. The original idea presented in the paper is that the stiffness of a tissue engineering scaffold can be controlled during degradation by means of a proper selection of the diameter of the constituent fibers and the distances between them. This idea is based on the size-effect on degradation of aliphatic polyesters. The presented computer tool combines a genetic algorithm and a diffusion-reaction model of polymer hydrolytic degradation. In particular, we show how to design the architecture of scaffolds made of poly(DL-lactide-co-glycolide) with the required Young's modulus change during hydrolytic degradation.

  9. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Sleep architecture of consolidated and split sleep due to the dawn (Fajr prayer among Muslims and its impact on daytime sleepiness

    Directory of Open Access Journals (Sweden)

    Ahmed S BaHammam

    2012-01-01

    Full Text Available Background: Muslims are required to wake up early to pray (Fajr) at dawn (approximately one and one-half hours before sunrise). Some Muslims wake up to pray Fajr and then sleep until it is time to work (split sleep), whereas others sleep continuously (consolidated sleep) until work time and pray Fajr upon awakening. Aim: To objectively assess sleep architecture and daytime sleepiness in consolidated and split sleep due to the Fajr prayer. Setting and Design: A cross-sectional, single-center observational study in eight healthy male subjects with a mean age of 32.0 ± 2.4 years. Methods: The participants spent three nights in the Sleep Disorders Center (SDC) at King Khalid University Hospital, where they participated in the study, which included (1) a medical checkup and an adaptation night, (2) a consolidated sleep night, and (3) a split-sleep night. Polysomnography (PSG) was conducted in the SDC following the standard protocol. Participants went to bed at 11:30 PM and woke up at 7:00 AM in the consolidated sleep protocol. In the split-sleep protocol, participants went to bed at 11:30 PM, woke up at 3:30 AM for 45 minutes, went back to bed at 4:15 AM, and finally woke up at 7:45 AM. PSG was followed by a multiple sleep latency test to assess the daytime sleepiness of the participants. Results: There were no differences in sleep efficiency, the distribution of sleep stages, or daytime sleepiness between the two protocols. Conclusion: No differences were detected in sleep architecture or daytime sleepiness in the consolidated and split-sleep schedules when the total sleep duration was maintained.

  11. An Overview of the Most Important Reference Architectures for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Razvan Daniel ZOTA

    2014-01-01

    Full Text Available In this paper we have presented the main characteristics of the most important reference architectures designed for the cloud computing environment. Specifically, we have introduced the proposed architectures of worldwide cloud computing companies like Cisco, IBM and VMware, and we have also had a look at the National Institute of Standards and Technology (NIST) reference architecture, which is the starting point for all proposed architectures in the field. As one would expect, the provider-dependent reference architectures are written in such a way as to suit the services and products of the company, while NIST’s architecture is a more general model with more comprehensive architectural details, which we highlight in this article. At the end of the article we draw some conclusions regarding the existing reference architectures for cloud computing.

  12. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    Science.gov (United States)

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
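
    A schematic sketch (Python) of the load-balancing benefit: when whole cells cannot be spread evenly across processors, splitting the largest cells into two pieces lets a greedy assignment come much closer to ideal balance. The per-cell costs and the halve-into-two rule are illustrative assumptions only; the actual method splits the tree topology equations themselves, which this sketch does not model:

    # Greedy load balance with optional cell splitting (illustrative only).
    import heapq

    def imbalance(loads, nprocs, split_largest=False):
        """Assign per-cell compute loads to processors; optionally split every cell
        into two half-load pieces, mimicking a two-subtree split. Returns the ratio
        of the most loaded processor to the ideal average (1.0 is perfect balance)."""
        loads = sorted(loads, reverse=True)
        if split_largest:
            loads = sorted([l / 2 for l in loads for _ in range(2)], reverse=True)
        heap = [(0.0, p) for p in range(nprocs)]       # (current load, processor id)
        heapq.heapify(heap)
        for l in loads:
            total, p = heapq.heappop(heap)             # always fill the lightest processor
            heapq.heappush(heap, (total + l, p))
        heaviest = max(total for total, _ in heap)
        return heaviest / (sum(loads) / nprocs)

    cells = [9.0, 7.0, 5.0, 3.0, 2.0, 1.0, 1.0, 1.0]   # hypothetical per-cell costs
    print(imbalance(cells, 4))                         # whole cells only
    print(imbalance(cells, 4, split_largest=True))     # split cells: closer to 1.0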

  13. Optimizations of Unstructured Aerodynamics Computations for Many-core Architectures

    KAUST Repository

    Al Farhan, Mohammed Ahmed

    2018-04-13

    We investigate several state-of-the-practice shared-memory optimization techniques applied to key routines of an unstructured computational aerodynamics application with irregular memory accesses. We illustrate for the Intel KNL processor, as a representative of the processors in contemporary leading supercomputers, identifying and addressing performance challenges without compromising the floating point numerics of the original code. We employ low and high-level architecture-specific code optimizations involving thread and data-level parallelism. Our approach is based upon a multi-level hierarchical distribution of work and data across both the threads and the SIMD units within every hardware core. On a 64-core KNL chip, we achieve nearly 2.9x speedup of the dominant routines relative to the baseline. These exhibit almost linear strong scalability up to 64 threads, and thereafter some improvement with hyperthreading. At substantially fewer Watts, we achieve up to 1.7x speedup relative to the performance of 72 threads of a 36-core Haswell CPU and roughly equivalent performance to 112 threads of a 56-core Skylake scalable processor. These optimizations are expected to be of value for many other unstructured mesh PDE-based scientific applications as multi and many-core architecture evolves.

  14. On Computational Fluid Dynamics Tools in Architectural Design

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Hougaard, Mads; Stærdahl, Jesper Winther

    engineering computational fluid dynamics (CFD) simulation program ANSYS CFX and a CFD based representative program RealFlow are investigated. These two programs represent two types of CFD based tools available for use during phases of an architectural design process. However, as outlined in two case studies...... the durability of the two program types for simulation of flow is strongly depended of the purpose. One case presents results obtained with the programs with respect to the accuracy and physical behaviour of the flow. Another case deals with wind flow around a complex building design, the roof of the new Utzon...... Centre in Aalborg, Denmark. The obtained results show that detailed and accurate flow predictions can be obtained using a simulation tool like ANSYS CFX. On the other hand RealFlow provides satisfactory flow results for evaluation of a proposed building shape in an early phase of a design process...

  15. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.

    2016-12-08

    Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.
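
    A bit-level software model (Python) of the compare / tag / masked-write sequence described above. The word width, stored rows, key and masks are arbitrary example values, and no memristor device behaviour is modelled:

    # Software model of the CAM compare / tag / masked-write sequence.
    WIDTH = 8
    cam  = [0b10110010, 0b01101100, 0b10110010, 0b11110000]   # CAM rows
    key  = 0b10110010                                          # key register
    mask = 0b11111111                                          # activated (compared) bits

    # 1) compare: a row's tag bit is set when all activated key bits match the row
    tags = [int(((row ^ key) & mask) == 0) for row in cam]
    print("tags:", tags)                                       # -> [1, 0, 1, 0]

    # 2) masked write: write the masked key bits into every tagged row
    write_mask = 0b00001111                                    # bit locations to overwrite
    new_bits   = 0b00000101
    cam = [(row & ~write_mask) | (new_bits & write_mask) if t else row
           for row, t in zip(cam, tags)]
    print("cam :", [format(r, f"0{WIDTH}b") for r in cam])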

  16. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  17. Program partitioning and scheduling for NUMA computer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Wolski, R.M.

    1994-03-01

    To effect the parallel execution of a program on a multiprocessor, each of the program's constituent computations must be assigned to a processing resource within the multiprocessor. The problem of making this assignment so that execution time is minimized (known as the mapping problem) has been shown to be NP-complete. However, heuristics based on the performance characteristics of the target multiprocessor can yield execution times that approach the minimum possible. The mapping problem can be divided into the problem of partitioning the computations into sequential threads, and the problem of scheduling those threads on the processors of the target system. This dissertation presents a logical framework and a set of heuristics that operate within the framework for the automatic partitioning and scheduling of programs at compile-time. The framework is based on the memory-node execution model which correctly captures the interaction between computations, processors, and the communication resources within a multiprocessor. The CP and HEF heuristics manipulate the features of the memory-node model to produce efficient program mappings. The effectiveness of the partitioning and scheduling techniques is investigated for Non-uniform Memory Access (NUMA) architecture types. To test the versatility of the approach, results are presented both for processors implementing strict execution semantics, and non-strict load/store semantics popular with RISC systems. The partitioner and scheduler are also used to investigate the possible advantages of multithreading (using either hardware or software), and the effectiveness of massively parallel systems, within a scientific programming context.

  18. Locating abnormalities in brain blood vessels using parallel computing architecture.

    Science.gov (United States)

    Adeshina, A M; Hashim, R; Khalid, N E A; Abidin, S Z Z

    2012-09-01

    CT and MRI scans are widely used in medical diagnosis procedures, but they only produce 2-D images, whereas the human anatomical structure, abnormalities, tumors, tissues and organs are 3-D. 2-D images from these devices are difficult to interpret because they only show cross-sectional views of the human structure. Consequently, doctors must rely on their expert experience to interpret the possible location, size or shape of abnormalities, even for large datasets with an enormous number of slices. Previously, the concept of reconstructing 2-D images into 3-D was introduced. However, such a reconstruction model requires high-performance computation and may be either time-consuming or costly. Furthermore, detecting the internal features of the human anatomical structure, such as the imaging of the blood vessels, is still an open topic in the computer-aided diagnosis of disorders and pathologies. This paper proposes a volume visualization framework using the Compute Unified Device Architecture (CUDA) to accelerate the widely proven ray casting technique, which yields superior image quality but is traditionally slow. Considering the rapid development of technology in the medical community, our framework is implemented in the Microsoft .NET environment for easy interoperability with other emerging revolutionary tools. The framework was evaluated with brain datasets from the Department of Surgery, University of North Carolina, United States, containing around 109 MRA datasets. Uniquely, at a reasonably low cost, our framework achieves immediate reconstruction and clear mappings of the internal features of the human brain, reliable enough for instantaneous location of possible blockages in the brain blood vessels.
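
    The framework itself is CUDA-based; the sketch below is only a serial Python/NumPy illustration of the underlying ray-casting idea (here a maximum-intensity projection along one axis), with a synthetic volume standing in for MRA data. Every pixel's ray is independent, which is exactly the parallelism a GPU kernel exploits:

    # Serial sketch of ray casting as maximum-intensity projection (MIP).
    import numpy as np

    def make_volume(n=64):
        """Synthetic volume: a bright curved tube inside a dark noisy background."""
        z, y, x = np.mgrid[0:n, 0:n, 0:n]
        centre_y = n / 2 + 10 * np.sin(2 * np.pi * z / n)
        vessel = ((y - centre_y) ** 2 + (x - n / 2) ** 2) < 16
        return np.where(vessel, 1.0, 0.05 * np.random.rand(n, n, n))

    def mip(volume):
        """One ray per (y, x) pixel, marching along z and keeping the brightest sample."""
        height, width = volume.shape[1:]
        image = np.zeros((height, width))
        for yy in range(height):          # each iteration is an independent ray row --
            for xx in range(width):       # a GPU would run these rays in parallel
                image[yy, xx] = volume[:, yy, xx].max()
        return image

    print(mip(make_volume()).shape)       # (64, 64) projected image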

  19. A Client-Server Architecture for an Instructional Environment Based on Computer Networks and the Internet.

    Science.gov (United States)

    Guidon, Jacques; Pierre, Samuel

    1996-01-01

    Discusses the use of computers in education and training and proposes a client-server architecture for an experimental computer environment as an approach to a virtual classroom. Highlights include the World Wide Web and client software, document delivery, hardware architecture, and Internet resources and services. (Author/LRW)

  20. PHENIX On-Line Distributed Computing System Architecture

    International Nuclear Information System (INIS)

    Desmond, Edmond; Haggerty, John; Kehayias, Hyon Joo; Purschke, Martin L.; Witzig, Chris; Kozlowski, Thomas

    1997-01-01

    PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) currently under construction at Brookhaven National Laboratory. The detector consists of 11 sub-detectors that are further subdivided into 29 units ("granules") that can be operated independently, which includes simultaneous data taking with independent data streams and independent triggers. The detector has 250,000 channels and is read out by front end modules, where the data is buffered in a pipeline while awaiting the level trigger decision. Zero suppression and calibration is done after the level accept in custom built data collection modules (DCMs) with DSPs before the data is sent to an event builder (design throughput of 2 Gb/sec) and higher level triggers. The On-line Computing Systems Group (ONCS) has two responsibilities. Firstly it is responsible for receiving the data from the event builder, routing it through a network of workstations to consumer processes and archiving it at a data rate of 20 MB/sec. Secondly it is also responsible for the overall configuration, control and operation of the detector and data acquisition chain, which comprises the software integration for several thousand custom built hardware modules. The software must furthermore support the independent operation of the above mentioned granules, which includes the coordination of processes that run in 60-100 VME processors and workstations. ONCS has adapted the Shlaer-Mellor Object Oriented Methodology for the design of the top layer software. CORBA is used as the communication layer between the distributed objects, which are implemented as asynchronous finite state machines. We will give an overview of the PHENIX online system with the main focus on the system architecture, software components and integration tasks of the On-line Computing group ONCS and report on the status of the current prototypes.

  1. Distributed or Monolithic? A Computational Architecture Decision Framework

    OpenAIRE

    Mosleh, Mohsen; Dalili, Kia; Heydari, Babak

    2016-01-01

    Distributed architectures have become ubiquitous in many complex technical and socio-technical systems because of their role in improving uncertainty management, accommodating multiple stakeholders, and increasing scalability and evolvability. This departure from monolithic architectures provides a system with more flexibility and robustness in response to uncertainties that it may confront during its lifetime. Distributed architecture does not provide benefits only, as it can increase cost a...

  2. RRAM-based parallel computing architecture using k-nearest neighbor classification for pattern recognition.

    Science.gov (United States)

    Jiang, Yuning; Kang, Jinfeng; Wang, Xinan

    2017-03-24

    Resistive switching memory (RRAM) is considered as one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today's electronic systems. However, the existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behaviors is chosen as both the storage and computing components. The proposed architecture is tested by the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
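
    A plain software sketch (Python) of the k-nearest-neighbour classification that the proposed crossbar carries out in the analogue domain. The toy 4-element patterns, distance metric and value of k are placeholders, not the MNIST setup or device model from the paper:

    # Software k-nearest-neighbour classification on toy stored patterns.
    from collections import Counter

    train = [([1, 1, 0, 0], "A"), ([1, 0, 1, 0], "B"),
             ([0, 1, 0, 1], "B"), ([1, 1, 1, 0], "A")]

    def knn(query, k=3):
        # the distance computation is the step mapped onto the crossbar array
        dists = sorted((sum((q - t) ** 2 for q, t in zip(query, vec)), label)
                       for vec, label in train)
        votes = Counter(label for _, label in dists[:k])
        return votes.most_common(1)[0][0]

    print(knn([1, 1, 0, 1]))   # majority vote among the 3 closest stored patterns -> "A"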

  3. Architectures, Concepts and Technologies for Service Oriented Computing: proceedings of the 1st International Workshop - ACT4SOC 2007

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Unknown, [Unknown

    2007-01-01

    This volume contains the proceedings of the First International Workshop on Architectures, Concepts and Technologies for Service Oriented Computing (ACT4SOC 2007), held on July 22 in Barcelona, Spain, in conjunction with the Second International Conference on Software and Data Technologies (ICSOFT

  4. Could running experience on SPMD computers contribute to the architectural choices for future dedicated computers for high energy physics simulation?

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Silva, J.; Auguin, M.; Boeri, F.

    1989-01-01

    Results obtained on a strongly coupled parallel computer are reported. They concern Monte-Carlo simulation and pattern recognition. Though the calculations were made on an experimental computer of rather low processing power, it is believed that the quoted figures could give useful indications on architectural choices for dedicated computers. (orig.)

  5. Could running experience on SPMD computers contribute to the architectural choices for future dedicated computers for high energy physics simulation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Silva, J.; Auguin, M.; Boeri, F.

    1989-01-01

    Results obtained on strongly coupled parallel computer are reported. They concern Monte-Carlo simulation and pattern recognition. Though the calculations were made on an experimental computer of rather low processing power, it is believed that the quoted figures could give useful indications on architectural choices for dedicated computers

  6. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as the sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific, and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput, a newly introduced vector computing capability in general-purpose processors, and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application

  7. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    Science.gov (United States)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.

  8. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  9. The Architecture for Computer Game‘s Engine

    OpenAIRE

    Kaulakis, Jonas

    2006-01-01

    Game engine is a set of supporting tools and services for game development. It is a component designed for reuse in different games. Therefore it is very important for game engine to be designed properly as for any successfully used reusable component. The main objective in this research is to present flexible and easily extensible architectural solution suitable for the game engine, based on the analysis of today’s game engine context and existing software architecture design. During the ana...

  10. Open Architecture in Naval Combat System Computing of the 21st Century: Network-Centric Applications

    Science.gov (United States)

    2003-06-01

    also take advantage of other Navy, as well as other-service, open architecture initiatives. In addition to HiPer-D, several Navy Department programs...February 2003, pp. 42-46, at 43. 5Michael W. Masters, Chief Scientist, Advanced Computing Programs, NSWCDD, “HiPer-D Open Architecture: Advanced

  11. Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.

    Science.gov (United States)

    Beltrametti, Monica; English, Will

    1994-01-01

    Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…

  12. Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.

    Science.gov (United States)

    Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein

    2015-12-01

    Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems. The products of such technologies might include Smartphones and wearable devices, which have attracted the attention of medical applications. These products are used less in critical medical applications because of their resource constraints and failure sensitivity. This is due to the fact that without safety considerations, small-integrated hardware will endanger patients' lives. Therefore, proposing some principles is required to construct wearable systems in healthcare so that the existing concerns are dealt with. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to cloud. The tiers of this architecture include wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is its high possible fault tolerance due to the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed.

  13. New cubic perovskites for one- and two-photon water splitting using the computational materials repository

    DEFF Research Database (Denmark)

    Castelli, Ivano Eligio; Landis, David; Thygesen, Kristian Sommer

    2012-01-01

    screening of around 19 000 oxides, oxynitrides, oxysulfides, oxyfluorides, and oxyfluoronitrides in the cubic perovskite structure with PEC applications in mind. We address three main applications: light absorbers for one- and two-photon water splitting and high-stability transparent shields to protect...

  14. Evolution of the Milieu Approach for Software Development for the Polymorphous Computing Architecture Program

    National Research Council Canada - National Science Library

    Dandass, Yoginder

    2004-01-01

    A key goal of the DARPA Polymorphous Computing Architectures (PCA) program is to develop reactive closed-loop systems that are capable of being dynamically reconfigured in order to respond to changing mission scenarios...

  15. Unstructured Computational Aerodynamics on Many Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.

    2016-06-08

    Shared memory parallelization of the flux kernel of PETSc-FUN3D, an unstructured tetrahedral mesh Euler flow code previously studied for distributed memory and multi-core shared memory, is evaluated on up to 61 cores per node and up to 4 threads per core. We explore several thread-level optimizations to improve flux kernel performance on the state-of-the-art many integrated core (MIC) Intel processor Xeon Phi “Knights Corner,” with a focus on strong thread scaling. While the linear algebraic kernel is bottlenecked by memory bandwidth for even modest numbers of cores sharing a common memory, the flux kernel, which arises in the control volume discretization of the conservation law residuals and in the formation of the preconditioner for the Jacobian by finite-differencing the conservation law residuals, is compute-intensive and is known to exploit effectively contemporary multi-core hardware. We extend study of the performance of the flux kernel to the Xeon Phi in three thread affinity modes, namely scatter, compact, and balanced, in both offload and native mode, with and without various code optimizations to improve alignment and reduce cache coherency penalties. Relative to baseline “out-of-the-box” optimized compilation, code restructuring optimizations provide about 3.8x speedup using the offload mode and about 5x speedup using the native mode. Even with these gains for the flux kernel, with respect to execution time the MIC simply achieves par with optimized compilation on a contemporary multi-core Intel CPU, the 16-core Sandy Bridge E5 2670. Nevertheless, the optimizations employed to reduce the data motion and cache coherency protocol penalties of the MIC are expected to be of value for CFD and many other unstructured applications as many-core architecture evolves. We explore large-scale distributed-shared memory performance on the Cray XC40 supercomputer, to demonstrate that optimizations employed on Phi hybridize to this context, where each of

  16. Applications of parallel computer architectures to the real-time simulation of nuclear power systems

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1988-01-01

    In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems and current research efforts toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers improved control and protection capabilities. Research efforts are currently under way in this area.

  17. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    Science.gov (United States)

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A Power Saving Architecture for Web Access from Mobile Computers

    OpenAIRE

    Conti, Marco; Gregori, Enrico; Passarella, Andrea

    2002-01-01

    This work proposes new power-saving strategies for mobile access to the Web. User mobility is a key factor in the evolution of Web services. Unfortunately, the legacy approach for Web access is very inefficient when applied to mobile users. One of the critical issues is the inefficient usage of energetic resources when adopting the legacy TCP/IP architecture for Web access from mobile devices. In this paper we address this problem by proposing a new architecture,namely PS-Web, which works at ...

  19. Architecture of 32 bit CISC (Complex Instruction Set Computer) microprocessors

    International Nuclear Information System (INIS)

    Jove, T.M.; Ayguade, E.; Valero, M.

    1988-01-01

    In this paper we describe the main topics about the architecture of the best known 32-bit CISC microprocessors; i80386, MC68000 family, NS32000 series and Z80000. We focus on the high level languages support, operating system design facilities, memory management, techniques to speed up the overall performance and program debugging facilities. (Author)

  20. An efficient FPGA architecture for integer Nth root computation

    Science.gov (United States)

    Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose

    2015-10-01

    In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation for the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N, using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture performance varies from several thousand to seven million root operations per second.
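
    For reference, a bit-by-bit integer Nth root in plain Python is sketched below: the result is built one bit at a time, most significant bit first. This software sketch tests each candidate bit with a full power computation, so it is only functionally equivalent to, not a model of, the adder/subtractor-only datapath described in the article:

    # Functional reference for the integer Nth root (largest r with r**n <= x).
    def int_nth_root(x, n):
        if x < 0 or n < 1:
            raise ValueError("x must be non-negative and n >= 1")
        r = 0
        for bit in reversed(range(x.bit_length() // n + 1)):
            candidate = r | (1 << bit)      # tentatively set this result bit
            if candidate ** n <= x:         # keep it only if the power still fits
                r = candidate
        return r

    assert int_nth_root(10 ** 18, 3) == 10 ** 6          # exact cube
    assert int_nth_root(2 ** 64 - 1, 2) == 2 ** 32 - 1   # 64-bit input, square root
    print(int_nth_root(123456789, 4))                    # -> 105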

  1. Active Sites Intercalated Ultrathin Carbon Sheath on Nanowire Arrays as Integrated Core-Shell Architecture: Highly Efficient and Durable Electrocatalysts for Overall Water Splitting.

    Science.gov (United States)

    Hou, Jungang; Wu, Yunzhen; Cao, Shuyan; Sun, Yiqing; Sun, Licheng

    2017-12-01

    The development of active bifunctional electrocatalysts with low cost and earth-abundance toward oxygen evolution reaction (OER) and hydrogen evolution reaction (HER) remains a great challenge for overall water splitting. Herein, metallic Ni₄Mo nanoalloys are firstly implanted on the surface of a NiMoOₓ nanowire array (NiMo/NiMoOₓ) as a metal/metal oxides hybrid. Inspired by the superiority of carbon conductivity, ultrathin nitrogen-doped carbon sheath intercalated NiMo/NiMoOₓ (NC/NiMo/NiMoOₓ) nanowires are constructed as an integrated core-shell architecture. The integrated NC/NiMo/NiMoOₓ array exhibits an overpotential of 29 mV at 10 mA cm⁻² and a low Tafel slope of 46 mV dec⁻¹ for HER due to the abundant active sites, fast electron transport, low charge-transfer resistance, unique architectural structure and synergistic effect of carbon sheath, nanoalloys, and oxides. Moreover, as OER catalysts, the NC/NiMo/NiMoOₓ hybrids require an overpotential of 284 mV at 10 mA cm⁻². More importantly, the NC/NiMo/NiMoOₓ array as a highly active and stable electrocatalyst approaches ≈10 mA cm⁻² at a voltage of 1.57 V, opening an avenue to the rational design and fabrication of promising electrode materials with architectural structures toward electrochemical energy storage and conversion. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Memory intensive functional architecture for distributed computer control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures, illustrated by one system implementation: a system for control and data acquisition of a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  3. Reference Architecture for High Dependability On-Board Computers

    Science.gov (United States)

    Silva, Nuno; Esper, Alexandre; Zandin, Johan; Barbosa, Ricardo; Monteleone, Claudio

    2014-08-01

    The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g. computer equipment unit, set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed and verified. In the context of ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers with a focus on well-defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting and assessing the dependability of on-board computer hardware and software throughout their life cycle. It also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete lifecycle, including an assessment of feasibility aspects of the dependability assurance process and how the use of a computer-aided environment can contribute to the on-board computer dependability assurance.

  4. Computer simulation of sphenopsid architecture. I. Principles and methodology.

    Science.gov (United States)

    Daviero; Meyer-Berthaud; Lecoustre

    2000-04-01

    The modelling system AMAP 1 provides morphological models that reproduce the series of shapes developed in a plant structure during its growth. It is applicable to plants that have architectural features consistent with the principles introduced by Hallé et al. (Hallé, F., Oldeman, R.A.A., Tomlinson, P.B., 1978. Tropical Trees and Forest. Springer, Berlin, 441 pp.). We present the main principles of the methodology including the use of an architectural template and the statistical processing of the data collected on sample plants and a description of its components and parameters. We use models of Equisetum telmateia aerial shoots as examples of adaptations of this methodology to plants represented by a limited number of specimens. The main features of this approach that make it especially relevant for modelling incomplete and fragmented fossil plants include the use of architectural templates constructed by adding discrete morphological entities limited to a number of axial components as follows: as many branch orders as are identified in the sample plants, a single extension unit per branch order, and its internodes. This approach is viewed as a means to provide visual representations of plants at different ontogenetical stages, expressing our current knowledge of their growth and branching strategies, and of the parameters that control their geometries.

  5. From Archi Torture to Architecture: Undergraduate Students Design and Implement Computers Using the Multimedia Logic Emulator

    Science.gov (United States)

    Stanley, Timothy D.; Wong, Lap Kei; Prigmore, Daniel; Benson, Justin; Fishler, Nathan; Fife, Leslie; Colton, Don

    2007-01-01

    Students learn better when they both hear and do. In computer architecture courses "doing" can be difficult in small schools without hardware laboratories hosted by computer engineering, electrical engineering, or similar departments. Software solutions exist. Our success with George Mills' Multimedia Logic (MML) is the focus of this paper. MML…

  6. A Survey and Evaluation of Simulators Suitable for Teaching Courses in Computer Architecture and Organization

    Science.gov (United States)

    Nikolic, B.; Radivojevic, Z.; Djordjevic, J.; Milutinovic, V.

    2009-01-01

    Courses in Computer Architecture and Organization are regularly included in Computer Engineering curricula. These courses are usually organized in such a way that students obtain not only a purely theoretical experience, but also a practical understanding of the topics lectured. This practical work is usually done in a laboratory using simulators…

  7. Analysis of Introducing Active Learning Methodologies in a Basic Computer Architecture Course

    Science.gov (United States)

    Arbelaitz, Olatz; José I. Martín; Muguerza, Javier

    2015-01-01

    This paper presents an analysis of introducing active methodologies in the Computer Architecture course taught in the second year of the Computer Engineering Bachelor's degree program at the University of the Basque Country (UPV/EHU), Spain. The paper reports the experience from three academic years, 2011-2012, 2012-2013, and 2013-2014, in which…

  8. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan Heangping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization: the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal numbers of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzmann schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost
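
    A minimal sketch of simulated annealing over a small discrete architecture space, in the spirit of the automated search described above. The parameter names and value ranges are hypothetical, and the cost function is a synthetic stand-in for 1 - Az; in the actual study the cost comes from training and testing each CNN.

import math
import random

# Hypothetical discrete search space: node groups and kernel sizes in the
# first and second hidden layers (not the exact values used in the paper).
SPACE = {
    "groups1": [4, 8, 12],
    "kernel1": [3, 5, 7, 9],
    "groups2": [4, 8, 12],
    "kernel2": [3, 5, 7, 9],
}

def cost(arch):
    """Synthetic cost standing in for 1 - Az of a trained CNN."""
    return (abs(arch["groups1"] - 8) * 0.01 + abs(arch["kernel1"] - 5) * 0.02
            + abs(arch["groups2"] - 4) * 0.01 + abs(arch["kernel2"] - 7) * 0.02)

def random_neighbor(arch):
    """Perturb one architecture parameter at a time."""
    key = random.choice(list(SPACE))
    new = dict(arch)
    new[key] = random.choice(SPACE[key])
    return new

def simulated_annealing(t0=1.0, cooling=0.95, steps=500):
    current = {k: random.choice(v) for k, v in SPACE.items()}
    best, t = current, t0
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = cost(candidate) - cost(current)
        # Accept better architectures always, worse ones with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling
    return best, cost(best)

print(simulated_annealing())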

  9. Combining discrete equations method and upwind downwind-controlled splitting for non-reacting and reacting two-fluid computations

    International Nuclear Information System (INIS)

    Tang, K.

    2012-01-01

    When numerically investigating multiphase phenomena during severe accidents in a reactor system, characteristic lengths of the multi-fluid zone (non-reactive and reactive) are found to be much smaller than the volume of the reactor containment, which makes the direct modeling of the configuration hardly achievable. Alternatively, we propose to consider the physical multiphase mixture zone as an infinitely thin interface. Then, the reactive Riemann solver is inserted into the Reactive Discrete Equations Method (RDEM) to compute high speed combustion waves represented by discontinuous interfaces. An anti-diffusive approach is also coupled with RDEM to accurately simulate reactive interfaces. Increased robustness and efficiency when computing both multiphase interfaces and reacting flows are achieved thanks to an original upwind downwind-controlled splitting method (UDCS). UDCS is capable of accurately solving interfaces on multi-dimensional unstructured meshes, including reacting fronts for both deflagration and detonation configurations. (author)

  10. Hybrid architecture for encoded measurement-based quantum computation.

    Science.gov (United States)

    Zwerger, M; Briegel, H J; Dür, W

    2014-06-20

    We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states; within the considered error model, a threshold of the order of 10% local noise per particle is obtained for fault-tolerant quantum computation and quantum communication.

  11. A learnable parallel processing architecture towards unity of memory and computing

    Science.gov (United States)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  12. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  13. Embryo splitting

    OpenAIRE

    Karl Illmensee; Mike Levanduski

    2010-01-01

    Mammalian embryo splitting has successfully been established in farm animals. Embryo splitting is safely and efficiently used for assisted reproduction in several livestock species. In the mouse, efficient embryo splitting as well as single blastomere cloning have been developed in this animal system. In nonhuman primates embryo splitting has resulted in several pregnancies. Human embryo splitting has been reported recently. Microsurgical embryo splitting under Institutional Review Board appr...

  14. Web Service Architecture for Computer-Adaptive Testing on e-Learning

    OpenAIRE

    M. Phankokkruad; K. Woraratpanya

    2008-01-01

    This paper proposes a Web service and service-oriented architecture (SOA) for a computer-adaptive testing (CAT) process on e-learning systems. The proposed architecture is developed to solve an interoperability problem of the CAT process by using Web services. The proposed SOA and Web service define all services needed for the interactions between systems in order to deliver items and essential data from the Web service to the CAT Web-based application. These services are implem...

  15. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    Science.gov (United States)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing the image-based approach. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). The development of these specialized computing architectures requires numerous two-dimensional Fourier Transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.

  16. Leveraging software architectures to guide and verify the development of sense/compute/control applications

    DEFF Research Database (Denmark)

    Cassou, Damien; Balland, Emilie; Consel, Charles

    2011-01-01

    A software architecture describes the structure of a computing system by specifying software components and their interactions. Mapping a software architecture to an implementation is a well known challenge. A key element of this mapping is the architecture’s description of the data and control...... interaction specifications. We introduce a notion of interaction contract that expresses allowed interactions between components, describing both data and control-flow constraints. This declaration is part of the architecture description, allows generation of extensive programming support, and enables various...

  17. A SECURE MESSAGE TRANSMISSION SYSTEM ARCHITECTURE FOR COMPUTER NETWORKS EMPLOYING SMART CARDS

    Directory of Open Access Journals (Sweden)

    Geylani KARDAŞ

    2008-01-01

    Full Text Available In this study, we introduce a mobile system architecture which employs smart cards for secure message transmission in computer networks. The use of smart cards provides two security services, authentication and confidentiality, in our design. The security of the system is provided by asymmetric encryption. Hence, smart cards are used to store personal account information as well as the private key of each user for encryption/decryption operations. This offers further security, authentication and mobility to the system architecture. A real implementation of the proposed architecture which utilizes the JavaCard technology is also discussed in this study.
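
    A hedged sketch of the asymmetric-encryption idea described above, written with the third-party Python "cryptography" package (an assumption; the paper's JavaCard implementation is not shown). In the architecture above the private key would reside on the smart card rather than in host memory as it does here.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair generation; in the proposed architecture the private key is stored
# on the user's smart card, not on the host computer.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"secure message over the network"
ciphertext = public_key.encrypt(message, oaep)      # sender encrypts with recipient's public key
plaintext = private_key.decrypt(ciphertext, oaep)   # recipient's card decrypts with the private key
assert plaintext == message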

  18. Gate errors in solid-state quantum-computer architectures

    International Nuclear Information System (INIS)

    Hu Xuedong; Das Sarma, S.

    2002-01-01

    We theoretically consider possible errors in solid-state quantum computation due to the interplay of the complex solid-state environment and gate imperfections. In particular, we study two examples of gate operations in the opposite ends of the gate speed spectrum, an adiabatic gate operation in electron-spin-based quantum dot quantum computation and a sudden gate operation in Cooper-pair-box superconducting quantum computation. We evaluate quantitatively the nonadiabatic operation of a two-qubit gate in a two-electron double quantum dot. We also analyze the nonsudden pulse gate in a Cooper-pair-box-based quantum-computer model. In both cases our numerical results show strong influences of the higher excited states of the system on the gate operation, clearly demonstrating the importance of a detailed understanding of the relevant Hilbert-space structure on the quantum-computer operations

  19. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Fumi, E-mail: fumikato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kamishima, Tamotsu, E-mail: ktamotamo2@yahoo.co.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Morita, Ken, E-mail: kenordic@carrot.ocn.ne.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Muto, Natalia S., E-mail: nataliamuto@gmail.com [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Okamoto, Syozou, E-mail: shozo@med.hokudai.ac.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Omatsu, Tokuhiko, E-mail: omatoku@nirs.go.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Oyama, Noriko, E-mail: ZAT04404@nifty.ne.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Terae, Satoshi, E-mail: saterae@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kanegae, Kakuko, E-mail: IZW00143@nifty.ne.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Nonomura, Katsuya, E-mail: k-nonno@med.hokudai.ac.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Shirato, Hiroki, E-mail: shirato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan)

    2011-07-15

    Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using newly developed software for computed tomographic (CT) volumetry, and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.
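
    A minimal illustration of the split renal volume ratio defined above: SRV is the unilateral renal volume divided by the total bilateral volume. The volumes used here are made-up example numbers, not data from the study.

def split_renal_volume(left_ml: float, right_ml: float) -> tuple[float, float]:
    """Return (left SRV, right SRV) as percentages of the bilateral volume."""
    total = left_ml + right_ml
    return left_ml / total * 100.0, right_ml / total * 100.0

left_srv, right_srv = split_renal_volume(152.0, 148.0)   # hypothetical volumes in mL
print(f"left SRV = {left_srv:.1f}%, right SRV = {right_srv:.1f}%")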

  20. Embryo splitting

    Directory of Open Access Journals (Sweden)

    Karl Illmensee

    2010-04-01

    Full Text Available Mammalian embryo splitting has successfully been established in farm animals. Embryo splitting is safely and efficiently used for assisted reproduction in several livestock species. In the mouse, efficient embryo splitting as well as single blastomere cloning have been developed in this animal system. In nonhuman primates embryo splitting has resulted in several pregnancies. Human embryo splitting has been reported recently. Microsurgical embryo splitting under Institutional Review Board approval has been carried out to determine its efficiency for blastocyst development. Embryo splitting at the 6–8 cell stage provided a much higher developmental efficiency compared to splitting at the 2–5 cell stage. Embryo splitting may be advantageous for providing additional embryos to be cryopreserved and for patients with low response to hormonal stimulation in assisted reproduction programs. Social and ethical issues concerning embryo splitting are included regarding ethics committee guidelines. Prognostic perspectives are presented for human embryo splitting in reproductive medicine.

  1. Morphable Computer Architectures for Highly Energy Aware Systems

    National Research Council Canada - National Science Library

    Kogge, Peter

    2004-01-01

    To achieve a revolutionary reduction in overall power consumption, computing systems must be constructed out of both inherently low-power structures and power-aware or energy-aware hardware and software subsystems...

  2. A Distributed Agent Architecture for a Computer Virus Immune System

    National Research Council Canada - National Science Library

    Harmer, Paul

    2000-01-01

    .... Information protection and information assurance are vital components required for achieving superiority in the Infosphere, but these goals are threatened by the exponential birth rate of new computer viruses...

  3. Implicit Unstructured Computational Aerodynamics on Many-Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.

    2014-05-04

    This research aims to understand the performance of PETSc-FUN3D, a fully nonlinear implicit unstructured grid incompressible or compressible Euler code with origins at NASA and the U.S. DOE, on many-integrated core architecture and how a hybrid programming paradigm (MPI+OpenMP) can exploit Intel Xeon Phi hardware with upwards of 60 cores per node and 4 threads per core. For the current contribution, we focus on strong scaling with many-integrated core hardware. In most implicit PDE-based codes, while the linear algebraic kernel is limited by the bottleneck of memory bandwidth, the flux kernel arising in control volume discretization of the conservation law residuals and the preconditioner for the Jacobian exploit the Phi hardware well.
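
    A small, generic strong-scaling calculation of the kind used when reporting results such as those above: speedup and parallel efficiency derived from measured run times. The timings below are invented placeholders, not PETSc-FUN3D data.

def strong_scaling(times_by_cores: dict[int, float]) -> None:
    """Print speedup and efficiency relative to the smallest core count measured."""
    base_cores = min(times_by_cores)
    t_base = times_by_cores[base_cores]
    for cores in sorted(times_by_cores):
        speedup = t_base / times_by_cores[cores]
        efficiency = speedup * base_cores / cores
        print(f"{cores:4d} cores: speedup {speedup:5.1f}x, efficiency {efficiency:5.1%}")

strong_scaling({60: 120.0, 120: 65.0, 240: 36.0})   # hypothetical wall-clock seconds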

  4. Interfacial Engineering of Nanoporous Architectures in Ga2O3 Film toward Self-Aligned Tubular Nanostructure with an Enhanced Photocatalytic Activity on Water Splitting.

    Science.gov (United States)

    Shrestha, Nabeen K; Bui, Hoa Thi; Lee, Taegweon; Noh, Yong-Young

    2018-04-17

    The present work demonstrates the formation of a self-aligned nanoporous architecture of gallium oxide by anodization of a gallium metal film controlled at -15 °C in an aqueous electrolyte consisting of phosphoric acid. SEM examination of the anodized film reveals that by adding ethylene glycol to the electrolyte and optimizing the ratio of phosphoric acid and water, chemical etching at the oxide/electrolyte interfaces can be controlled, leading to the formation of aligned nanotubular oxide structures with closed bottoms. XPS analysis confirms the chemical composition of the oxide film as Ga2O3. Further, XRD and SAED examination reveals that the as-synthesized nanotubular structure is amorphous and can be crystallized to the β-Ga2O3 phase by annealing the film at 600 °C. The nanotubular structured film, when used as a photoanode for photoelectrochemical splitting of water, achieved a photocurrent about twofold higher than that of the nanoporous film, demonstrating the rewarding effect of the nanotubular structure. In addition, the work also demonstrates the formation of a highly organized nonporous Ga2O3 structure on a nonconducting glass substrate coated with a thin film of Ga metal, highlighting that the current approach can be extended to the formation of self-organized nanoporous Ga2O3 thin films even on nonconducting flexible substrates.

  5. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    Science.gov (United States)

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the problem of analysing the reliability of distributed computer systems with client-server architecture. A distributed computer system is a set of hardware and software for implementing the following main functions: processing, storage, transmission and protection of data. This paper discusses the "client-server" distributed computer system architecture. The paper presents the scheme of distributed computer system functioning, represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. In the reliability analysis we consider such reliability indicators as the probability of the system transitioning into the stopping state and into accidents, as well as the intensities of these transitions. The proposed model allows us to obtain relations for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of random variables or the number of elements in the system.
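
    A hedged, self-contained illustration of the state-graph idea sketched above, using a small absorbing Markov chain with transient states (working, degraded) and absorbing states (stop, accident). The transition probabilities are invented for the example; the paper derives its own relations without assuming particular distribution laws.

import numpy as np

# Q: transitions among transient states, R: transient -> absorbing transitions.
Q = np.array([[0.90, 0.05],    # working  -> {working, degraded}
              [0.20, 0.60]])   # degraded -> {working, degraded}
R = np.array([[0.04, 0.01],    # working  -> {stop, accident}
              [0.15, 0.05]])   # degraded -> {stop, accident}

# Fundamental matrix N = (I - Q)^-1 gives expected visits to transient states;
# B = N R gives the probability of ending in each absorbing state.
N = np.linalg.inv(np.eye(2) - Q)
B = N @ R
print("absorption probabilities (rows: start state, cols: stop/accident):")
print(B)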

  6. Laboratory Works Designed for Developing Student Motivation in Computer Architecture

    Directory of Open Access Journals (Sweden)

    Petre Ogrutan

    2017-02-01

    Full Text Available In light of the current difficulties related to maintaining the students’ interest and to stimulate their motivation for learning, the authors have developed a range of new laboratory exercises intended for first-year students in Computer Science as well as for engineering students after completion of at least one course in computers. The educational goal of the herein proposed laboratory exercises is to enhance the students’ motivation and creative thinking by organizing a relaxed yet competitive learning environment. The authors have developed a device including LEDs and switches, which is connected to a computer. By using assembly language, commands can be issued to flash several LEDs and read the states of the switches. The effectiveness of this idea was confirmed by a statistical study.

  7. Final Report: Super Instruction Architecture for Scalable Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, Beverly Ann [Univ. of Florida, Gainesville, FL (United States)

    2013-12-02

    The most advanced methods for reliable and accurate computation of the electronic structure of molecular and nano systems are the coupled-cluster techniques. These high-accuracy methods help us to understand, for example, how biological enzymes operate and contribute to the design of new organic explosives. The ACES III software provides a modern, high-performance implementation of these methods optimized for high performance parallel computer systems, ranging from small clusters typical in individual research groups, through larger clusters available in campus and regional computer centers, all the way to high-end petascale systems at national labs, including exploiting GPUs if available. This project enhanced the ACESIII software package and used it to study interesting scientific problems.

  8. A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows

    Science.gov (United States)

    Lei, Xin; Li, Jiequan

    2018-04-01

    This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. Then we construct a conservative Godunov-based scheme with the high order accurate extension by using the generalized Riemann problem solver, through the detailed analysis of kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for the shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of schemes and demonstrate that nonphysical oscillations are suppressed around material interfaces substantially.

  9. CSP: A Multifaceted Hybrid Architecture for Space Computing

    Science.gov (United States)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  10. Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment

    Science.gov (United States)

    Duong, Tuan A.; Duong, Vu A.

    2012-01-01

    A novel cognitive computing architecture is conceptualized for processing multiple channels of multi-modal sensory data streams simultaneously, and fusing the information in real time to generate intelligent reaction sequences. This unique architecture is capable of assimilating parallel data streams that could be analog, digital, synchronous/asynchronous, and could be programmed to act as a knowledge synthesizer and/or an "intelligent perception" processor. In this architecture, the bio-inspired models of visual pathway and olfactory receptor processing are combined as processing components, to achieve the composite function of "searching for a source of food while avoiding the predator." The architecture is particularly suited for scene analysis from visual data and odorant.

  11. A spatially localized architecture for fast and modular DNA computing

    Science.gov (United States)

    Chatterjee, Gourab; Dalchau, Neil; Muscat, Richard A.; Phillips, Andrew; Seelig, Georg

    2017-09-01

    Cells use spatial constraints to control and accelerate the flow of information in enzyme cascades and signalling networks. Synthetic silicon-based circuitry similarly relies on spatial constraints to process information. Here, we show that spatial organization can be a similarly powerful design principle for overcoming limitations of speed and modularity in engineered molecular circuits. We create logic gates and signal transmission lines by spatially arranging reactive DNA hairpins on a DNA origami. Signal propagation is demonstrated across transmission lines of different lengths and orientations and logic gates are modularly combined into circuits that establish the universality of our approach. Because reactions preferentially occur between neighbours, identical DNA hairpins can be reused across circuits. Co-localization of circuit elements decreases computation time from hours to minutes compared to circuits with diffusible components. Detailed computational models enable predictive circuit design. We anticipate our approach will motivate using spatial constraints for future molecular control circuit designs.

  12. Characterization of the MCNPX computer code in micro processed architectures

    International Nuclear Information System (INIS)

    Almeida, Helder C.; Dominguez, Dany S.; Orellana, Esbel T.V.; Milian, Felix M.

    2009-01-01

    The MCNPX (Monte Carlo N-Particle eXtended) code can be used to simulate the transport of several types of nuclear particles using probabilistic methods. The technique used in MCNPX is to follow the history of each particle from its origin to its extinction, which can occur by absorption, escape or other reasons. To obtain accurate results in simulations performed with MCNPX it is necessary to process a large number of histories, which demands a high computational cost. Currently MCNPX can be installed on virtually all available computing platforms; however, there is virtually no information on the performance of the application on each of them. This paper studies the performance of MCNPX, working with electrons and photons in the phantom Faux, on the two platforms used by most researchers, Windows and Linux. Both platforms were tested on the same computer to ensure the reliability of the hardware in the performance measurements. The performance of MCNPX was measured by the time spent to run a simulation, making run time the main measure of comparison. During the tests the difference in MCNPX performance between the two platforms was evident. In some cases we were able to gain more than 10% in speed simply by changing platforms, without any specific optimization. This shows the relevance of the study in optimizing the use of this tool on the most appropriate platform. (author)

  13. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
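
    A compact one-dimensional multigrid V-cycle sketch, included only to make the algorithm analysed above concrete; the paper models the parallel cost of such cycles rather than providing code, so everything below is an illustrative assumption.

import numpy as np

def apply_A(u, h):
    """Matrix-free 1-D Laplacian (Dirichlet boundaries) on interior points."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def restrict(r):
    """Full weighting: each coarse point averages its three fine neighbours."""
    return 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e_c, n_fine):
    """Linear interpolation from the coarse grid back to the fine grid."""
    e = np.zeros(n_fine)
    e[1:-1:2] = e_c
    e[2:-1:2] = 0.5 * (e_c[:-1] + e_c[1:])
    e[0] = 0.5 * e_c[0]
    e[-1] = 0.5 * e_c[-1]
    return e

def v_cycle(u, f, h, levels):
    if levels == 1 or u.size <= 3:
        return jacobi(u, f, h, sweeps=50)      # "solve" the coarsest grid
    u = jacobi(u, f, h)                        # pre-smooth
    r = f - apply_A(u, h)                      # fine-grid residual
    e = v_cycle(np.zeros((u.size - 1) // 2), restrict(r), 2.0 * h, levels - 1)
    u = u + prolong(e, u.size)                 # coarse-grid correction
    return jacobi(u, f, h)                     # post-smooth

# Solve -u'' = pi^2 sin(pi x) on (0, 1); the exact solution is u = sin(pi x).
n = 2**7 - 1                                   # interior points (nested grids need 2^k - 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):                            # a few V-cycles reach discretization error
    u = v_cycle(u, f, h, levels=6)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())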

  14. Information management architecture for an integrated computing environment for the Environmental Restoration Program. Environmental Restoration Program, Volume 3, Interim technical architecture

    International Nuclear Information System (INIS)

    1994-09-01

    This third volume of the Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program--the Interim Technical Architecture (TA) (referred to throughout the remainder of this document as the ER TA)--represents a key milestone in establishing a coordinated information management environment in which information initiatives can be pursued with the confidence that redundancy and inconsistencies will be held to a minimum. This architecture is intended to be used as a reference by anyone whose responsibilities include the acquisition or development of information technology for use by the ER Program. The interim ER TA provides technical guidance at three levels. At the highest level, the technical architecture provides an overall computing philosophy or direction. At this level, the guidance does not address specific technologies or products but addresses more general concepts, such as the use of open systems, modular architectures, graphical user interfaces, and architecture-based development. At the next level, the technical architecture provides specific information technology recommendations regarding a wide variety of specific technologies. These technologies include computing hardware, operating systems, communications software, database management software, application development software, and personal productivity software, among others. These recommendations range from the adoption of specific industry or Martin Marietta Energy Systems, Inc. (Energy Systems) standards to the specification of individual products. At the third level, the architecture provides guidance regarding implementation strategies for the recommended technologies that can be applied to individual projects and to the ER Program as a whole

  15. Information management architecture for an integrated computing environment for the Environmental Restoration Program. Environmental Restoration Program, Volume 3, Interim technical architecture

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    This third volume of the Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program--the Interim Technical Architecture (TA) (referred to throughout the remainder of this document as the ER TA)--represents a key milestone in establishing a coordinated information management environment in which information initiatives can be pursued with the confidence that redundancy and inconsistencies will be held to a minimum. This architecture is intended to be used as a reference by anyone whose responsibilities include the acquisition or development of information technology for use by the ER Program. The interim ER TA provides technical guidance at three levels. At the highest level, the technical architecture provides an overall computing philosophy or direction. At this level, the guidance does not address specific technologies or products but addresses more general concepts, such as the use of open systems, modular architectures, graphical user interfaces, and architecture-based development. At the next level, the technical architecture provides specific information technology recommendations regarding a wide variety of specific technologies. These technologies include computing hardware, operating systems, communications software, database management software, application development software, and personal productivity software, among others. These recommendations range from the adoption of specific industry or Martin Marietta Energy Systems, Inc. (Energy Systems) standards to the specification of individual products. At the third level, the architecture provides guidance regarding implementation strategies for the recommended technologies that can be applied to individual projects and to the ER Program as a whole.

  16. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  17. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though its computational complexity is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
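
    A hedged sketch of the block-matching idea described above: full grid search (FS) with a summed-absolute-difference (SAD) criterion, written as plain CPU NumPy code. The paper's CUDA implementation is not shown, and the frame sizes and motion used below are made up for the example.

import numpy as np

def best_displacement(ref_block, target, top, left, search=8):
    """Return the integer (dy, dx) minimising SAD within +/- `search` pixels."""
    b = ref_block.shape[0]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + b > target.shape[0] or x + b > target.shape[1]:
                continue
            sad = np.abs(ref_block.astype(np.int32)
                         - target[y:y + b, x:x + b].astype(np.int32)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# Toy usage with random frames; real input would be consecutive video frames.
rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, size=(480, 720), dtype=np.uint8)
frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))   # known motion of (2, -3)
block = frame0[100:116, 200:216]                        # 16x16 reference block
print(best_displacement(block, frame1, 100, 200))       # expect ((2, -3), 0)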

  18. Usage of Thin-Client/Server Architecture in Computer Aided Education

    Science.gov (United States)

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  19. Computer Security Primer: Systems Architecture, Special Ontology and Cloud Virtual Machines

    Science.gov (United States)

    Waguespack, Leslie J.

    2014-01-01

    With the increasing proliferation of multitasking and Internet-connected devices, security has reemerged as a fundamental design concern in information systems. The shift of IS curricula toward a largely organizational perspective of security leaves little room for focus on its foundation in systems architecture, the computational underpinnings of…

  20. p88110: A Graphical Simulator for Computer Architecture and Organization Courses

    Science.gov (United States)

    Garcia, M. I.; Rodriguez, S.; Perez, A.; Garcia, A.

    2009-01-01

    Studying fundamental Computer Architecture and Organization topics requires a significant amount of practical work if students are to acquire a good grasp of the theoretical concepts presented in classroom lectures or textbooks. The use of simulators is commonly adopted in order to reach this objective. However, as most of the available…

  1. Combining Self-Explaining with Computer Architecture Diagrams to Enhance the Learning of Assembly Language Programming

    Science.gov (United States)

    Hung, Y.-C.

    2012-01-01

    This paper investigates the impact of combining self-explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to analysis of covariance (ANCOVA). Results indicate that the SE-plus-diagram…

  2. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    Science.gov (United States)

    Shi, X.

    2015-12-01

    As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent, because many research activities are constrained by software or tools that cannot complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as proved by our prior works, while the potential of such advanced

  3. MAINS: MULTI-AGENT INTELLIGENT SERVICE ARCHITECTURE FOR CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    T. Joshva Devadas

    2014-04-01

    Full Text Available Computing has been transformed into a model of commoditized services. These services are modeled similarly to utility services such as water and electricity. The Internet has been stunningly successful over the course of the past three decades in supporting a multitude of distributed applications and a wide variety of network technologies. However, its popularity has become the biggest impediment to its further growth with handheld devices such as mobiles and laptops. Agents are intelligent software systems that work on behalf of others. Agents are incorporated in many innovative applications in order to improve the performance of the system; an agent uses its knowledge to interact with the system and helps to improve its performance. Agents are introduced into cloud computing to minimize the response time when a similar request is raised by an end user anywhere in the globe. In this paper, we introduce a Multi Agent Intelligent System (MAINS) layer prior to the cloud service models and test it using a sample dataset. The performance of the MAINS layer was analyzed in three aspects, and the outcome of the analysis shows that the MAINS layer provides a flexible model for creating cloud applications and deploying them in a variety of applications.

  4. Novel photonic bandgap based architectures for quantum computers and networks

    Science.gov (United States)

    Guney, Durdu

    All of the approaches for quantum information processing have their own advantages, but unfortunately also their own drawbacks. Ideally, one would merge the most attractive features of those different approaches in a single technology. We envision that large-scale photonic crystal (PC) integrated circuits and fibers could be the basis for robust and compact quantum circuits and processors of the next generation quantum computers and networking devices. Cavity QED, solid-state, and (non)linear optical models for computing, and optical fiber approach for communications are the most promising candidates to be improved through this novel technology. In our work, we consider both digital and analog quantum computing. In the digital domain, we first perform gate-level analysis. To achieve this task, we solve the Jaynes-Cummings Hamiltonian with time-dependent coupling parameters under the dipole and rotating-wave approximations for a 3D PC single-mode cavity with a sufficiently high Q-factor. We then exploit the results to show how to create a maximally entangled state of two atoms and how to implement several quantum logic gates: a dual-rail Hadamard gate, a dual-rail NOT gate, and a SWAP gate. In all of these operations, we synchronize atoms, as opposed to previous studies with PCs. The method has the potential for extension to N-atom entanglement, universal quantum logic operations, and the implementation of other useful, cavity QED-based quantum information processing tasks. In the next part of the digital domain, we study circuit-level implementations. We design and simulate an integrated teleportation and readout circuit on a single PC chip. The readout part of our device can not only be used on its own but can also be integrated with other compatible optical circuits to achieve atomic state detection. Further improvement of the device in terms of compactness and robustness is possible by integrating with sources and detectors in the optical regime. In the analog

  5. Performance prediction of finite-difference solvers for different computer architectures

    Science.gov (United States)

    Louboutin, Mathias; Lange, Michael; Herrmann, Felix J.; Kukreja, Navjot; Gorman, Gerard

    2017-08-01

    The life-cycle of a partial differential equation (PDE) solver is often characterized by three development phases: the development of a stable numerical discretization; development of a correct (verified) implementation; and the optimization of the implementation for different computer architectures. Often it is only after significant time and effort has been invested that the performance bottlenecks of a PDE solver are fully understood, and the precise details vary between different computer architectures. One way to mitigate this issue is to establish a reliable performance model that allows a numerical analyst to make reliable predictions of how well a numerical method would perform on a given computer architecture, before embarking upon potentially long and expensive implementation and optimization phases. The availability of a reliable performance model also saves developer effort as it both informs the developer on what kind of optimisations are beneficial, and when the maximum expected performance has been reached and optimisation work should stop. We show how discretization of a wave-equation can be theoretically studied to understand the performance limitations of the method on modern computer architectures. We focus on the roofline model, now broadly used in the high-performance computing community, which considers the achievable performance in terms of the peak memory bandwidth and peak floating point performance of a computer with respect to algorithmic choices. A first principles analysis of operational intensity for key time-stepping finite-difference algorithms is presented. With this information available at the time of algorithm design, the expected performance on target computer systems can be used as a driver for algorithm design.
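
    A minimal roofline-model calculation of the kind described above: attainable performance is bounded by the lesser of the peak floating-point rate and the product of memory bandwidth and operational intensity. The machine numbers and intensities below are illustrative assumptions, not measurements from the paper.

def roofline(peak_gflops: float, bandwidth_gbs: float, intensity_flop_per_byte: float) -> float:
    """Attainable GFLOP/s for a kernel with the given operational intensity."""
    return min(peak_gflops, bandwidth_gbs * intensity_flop_per_byte)

peak, bw = 1000.0, 100.0            # hypothetical: 1 TFLOP/s peak, 100 GB/s DRAM bandwidth
for oi in (0.2, 1.0, 10.0, 50.0):   # operational intensity in FLOPs per byte moved
    bound = "memory" if bw * oi < peak else "compute"
    print(f"OI={oi:5.1f} flop/byte -> {roofline(peak, bw, oi):7.1f} GFLOP/s ({bound}-bound)")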

  6. Scalable quantum computer architecture with coupled donor-quantum dot qubits

    Science.gov (United States)

    Schenkel, Thomas; Lo, Cheuk Chi; Weis, Christoph; Lyon, Stephen; Tyryshkin, Alexei; Bokor, Jeffrey

    2014-08-26

    A quantum bit computing architecture includes a plurality of single spin memory donor atoms embedded in a semiconductor layer, a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, wherein a first voltage applied across at least one pair of the aligned quantum dot and donor atom controls a donor-quantum dot coupling. A method of performing quantum computing in a scalable architecture quantum computing apparatus includes arranging a pattern of single spin memory donor atoms in a semiconductor layer, forming a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, applying a first voltage across at least one aligned pair of a quantum dot and donor atom to control a donor-quantum dot coupling, and applying a second voltage between one or more quantum dots to control a Heisenberg exchange J coupling between quantum dots and to cause transport of a single spin polarized electron between quantum dots.

  7. Architecture and VHDL behavioural validation of a parallel processor dedicated to computer vision

    International Nuclear Information System (INIS)

    Collette, Thierry

    1992-01-01

    Speeding up image processing is mainly achieved using parallel computers; SIMD processors (single instruction stream, multiple data stream) have been developed and have proven highly efficient for low-level image processing operations. Nevertheless, their performance drops for most intermediate or high-level operations, mainly when random data reorganisations in processor memories are involved. The aim of this thesis was to extend the SIMD computer capabilities to allow it to perform more efficiently at the intermediate level of image processing. The study of some representative algorithms of this class points out the limits of this computer. Nevertheless, these limits can be erased by architectural modifications. This leads us to propose SYMPATIX, a new SIMD parallel computer. To validate its new concept, a behavioural model written in VHDL - Hardware Description Language - has been elaborated. With this model, the new computer's performance has been estimated by running image processing algorithm simulations. The VHDL modeling approach allows top-down electronic design of the system, giving an easy coupling between system architectural modifications and their electronic cost. The obtained results show SYMPATIX to be an efficient computer for low and intermediate level image processing. It can be connected to a high level computer, opening up the development of new computer vision applications. This thesis also presents a top-down design method, based on VHDL, intended for electronic system architects. (author) [fr

  8. Design of command, data and telemetry handling system for a distributed computing architecture CubeSat

    Science.gov (United States)

    Asundi, S. A.; Fitz-Coy, N. G.

    Among the size, weight and power constraints imposed by the CubeSat specification, the limitation associated with power can be addressed through a distributed computing architecture. This paper describes such a distributed computing architecture and its operational design, in the form of a command and data handling system and telemetry formulation, adapted for a CubeSat whose power requirements for proving the mission are significantly larger than the on-orbit average power generated. The 1U CubeSat, with the mission objective of precision three-axis attitude control, is composed of a low-power flight computer and a high-power, high-speed auxiliary processor (CMG controller), along with a high-capacity battery. The precision sensors, actuators and complex computing algorithms are interfaced to and implemented on the high-speed auxiliary processor, which is operated intermittently. Health monitoring sensors, the transceiver and other housekeeping tasks are interfaced to and implemented on the flight computer, which is in continuous operation. To facilitate effective operation and telemetry packaging, each computing unit is designed to host a storage device. The flight software, designed as operating modes, is distributed across the two computing platforms. Distributed operations are initiated through the flight computer and executed on the auxiliary processor. The paper describes in detail the distributed design of these operating modes as flowcharts and the associated telemetry budget as tables.

  9. Heterogeneous computing architecture for fast detection of SNP-SNP interactions.

    Science.gov (United States)

    Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros

    2014-06-25

    The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
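
    A sketch of the exhaustive pairwise SNP-SNP scan that the accelerated module above replaces: every pair of SNP columns is scored against the phenotype. The scoring function here is a deliberately simple toy, not the measure implemented in SNPsyn, and the data are randomly generated.

import itertools
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_snps = 200, 50
genotypes = rng.integers(0, 3, size=(n_samples, n_snps))   # 0/1/2 minor-allele counts
phenotype = rng.integers(0, 2, size=n_samples)              # case/control labels

def pair_score(a, b, y):
    """Toy association score: how well the joint genotype separates the classes."""
    joint = a * 3 + b                      # 9 possible two-SNP genotype combinations
    score = 0.0
    for g in range(9):
        mask = joint == g
        if mask.any():
            p = y[mask].mean()             # case fraction within this combination
            score += mask.mean() * abs(p - y.mean())
    return score

best = max(itertools.combinations(range(n_snps), 2),
           key=lambda ij: pair_score(genotypes[:, ij[0]], genotypes[:, ij[1]], phenotype))
print("top-ranked SNP pair:", best)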

  10. A cerebellar neuroprosthetic system: computational architecture and in vivo experiments

    Directory of Open Access Journals (Sweden)

    Ivan eHerreros Alonso

    2014-05-01

    Full Text Available Emulating the input-output functions performed by a brain structure opens the possibility for developing neuro-prosthetic systems that replace damaged neuronal circuits. Here, we demonstrate the feasibility of this approach by replacing the cerebellar circuit responsible for the acquisition and extinction of motor memories. Specifically, we show that a rat can undergo acquisition, retention and extinction of the eye-blink reflex even though the biological circuit responsible for this task has been chemically inactivated via anesthesia. This is achieved by first developing a computational model of the cerebellar microcircuit involved in the acquisition of conditioned reflexes and training it with synthetic data generated based on physiological recordings. Secondly, the cerebellar model is interfaced with the brain of an anesthetized rat, connecting the model's inputs and outputs to afferent and efferent cerebellar structures. As a result, we show that the anesthetized rat, equipped with our neuro-prosthetic system, can be classically conditioned to the acquisition of an eye-blink response. However, non-stationarities in the recorded biological signals limit the performance of the cerebellar model. Thus, we introduce an updated cerebellar model and validate it with physiological recordings showing that learning becomes stable and reliable. The resulting system represents an important step towards replacing lost functions of the central nervous system via neuro-prosthetics, obtained by integrating a synthetic circuit with the afferent and efferent pathways of a damaged brain region. These results also embody an early example of science-based medicine, where on the one hand the neuro-prosthetic system directly validates a theory of cerebellar learning that informed the design of the system, and on the other one it takes a step towards the development of neuro-prostheses that could recover lost learning functions in animals and, in the longer term, humans.

  11. A Cerebellar Neuroprosthetic System: Computational Architecture and in vivo Test

    International Nuclear Information System (INIS)

    Herreros, Ivan; Giovannucci, Andrea; Taub, Aryeh H.; Hogri, Roni; Magal, Ari; Bamford, Sim; Prueckl, Robert; Verschure, Paul F. M. J.

    2014-01-01

    Emulating the input–output functions performed by a brain structure opens the possibility for developing neuroprosthetic systems that replace damaged neuronal circuits. Here, we demonstrate the feasibility of this approach by replacing the cerebellar circuit responsible for the acquisition and extinction of motor memories. Specifically, we show that a rat can undergo acquisition, retention, and extinction of the eye-blink reflex even though the biological circuit responsible for this task has been chemically inactivated via anesthesia. This is achieved by first developing a computational model of the cerebellar microcircuit involved in the acquisition of conditioned reflexes and training it with synthetic data generated based on physiological recordings. Secondly, the cerebellar model is interfaced with the brain of an anesthetized rat, connecting the model’s inputs and outputs to afferent and efferent cerebellar structures. As a result, we show that the anesthetized rat, equipped with our neuroprosthetic system, can be classically conditioned to the acquisition of an eye-blink response. However, non-stationarities in the recorded biological signals limit the performance of the cerebellar model. Thus, we introduce an updated cerebellar model and validate it with physiological recordings showing that learning becomes stable and reliable. The resulting system represents an important step toward replacing lost functions of the central nervous system via neuroprosthetics, obtained by integrating a synthetic circuit with the afferent and efferent pathways of a damaged brain region. These results also embody an early example of science-based medicine, where on the one hand the neuroprosthetic system directly validates a theory of cerebellar learning that informed the design of the system, and on the other one it takes a step toward the development of neuro-prostheses that could recover lost learning functions in animals and, in the longer term, humans.

  12. A Cerebellar Neuroprosthetic System: Computational Architecture and in vivo Test

    Energy Technology Data Exchange (ETDEWEB)

    Herreros, Ivan; Giovannucci, Andrea [Synthetic Perceptive, Emotive and Cognitive Systems group (SPECS), Universitat Pompeu Fabra, Barcelona (Spain); Taub, Aryeh H.; Hogri, Roni; Magal, Ari [Psychobiology Research Unit, Tel Aviv University, Tel Aviv (Israel); Bamford, Sim [Physics Laboratory, Istituto Superiore di Sanità, Rome (Italy); Prueckl, Robert [Guger Technologies OG, Graz (Austria); Verschure, Paul F. M. J., E-mail: paul.verschure@upf.edu [Synthetic Perceptive, Emotive and Cognitive Systems group (SPECS), Universitat Pompeu Fabra, Barcelona (Spain); Institució Catalana de Recerca i Estudis Avançats, Barcelona (Spain)

    2014-05-21

    Emulating the input–output functions performed by a brain structure opens the possibility for developing neuroprosthetic systems that replace damaged neuronal circuits. Here, we demonstrate the feasibility of this approach by replacing the cerebellar circuit responsible for the acquisition and extinction of motor memories. Specifically, we show that a rat can undergo acquisition, retention, and extinction of the eye-blink reflex even though the biological circuit responsible for this task has been chemically inactivated via anesthesia. This is achieved by first developing a computational model of the cerebellar microcircuit involved in the acquisition of conditioned reflexes and training it with synthetic data generated based on physiological recordings. Secondly, the cerebellar model is interfaced with the brain of an anesthetized rat, connecting the model’s inputs and outputs to afferent and efferent cerebellar structures. As a result, we show that the anesthetized rat, equipped with our neuroprosthetic system, can be classically conditioned to the acquisition of an eye-blink response. However, non-stationarities in the recorded biological signals limit the performance of the cerebellar model. Thus, we introduce an updated cerebellar model and validate it with physiological recordings showing that learning becomes stable and reliable. The resulting system represents an important step toward replacing lost functions of the central nervous system via neuroprosthetics, obtained by integrating a synthetic circuit with the afferent and efferent pathways of a damaged brain region. These results also embody an early example of science-based medicine, where on the one hand the neuroprosthetic system directly validates a theory of cerebellar learning that informed the design of the system, and on the other one it takes a step toward the development of neuro-prostheses that could recover lost learning functions in animals and, in the longer term, humans.

  13. Performance evaluation for compressible flow calculations on five parallel computers of different architectures

    International Nuclear Information System (INIS)

    Kimura, Toshiya.

    1997-03-01

    A two-dimensional explicit Euler solver has been implemented for five MIMD parallel computers of different machine architectures at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. These parallel computers are the Fujitsu VPP300, NEC SX-4, CRAY T94, IBM SP2, and Hitachi SR2201. The code was parallelized by several parallelization methods, and a typical compressible flow problem has been calculated for different grid sizes while varying the number of processors. The effective performance for parallel calculations, such as calculation speed, speed-up ratio and parallel efficiency, has been investigated and evaluated. The communication time among processors has also been measured and evaluated. As a result, differences in performance and characteristics between vector-parallel and scalar-parallel computers can be pointed out, providing basic data for the efficient use of parallel computers and for large scale CFD simulations on parallel computers. (author)

  14. Integrating Computer Architectures into the Design of High-Performance Controllers

    Science.gov (United States)

    Jacklin, Stephen A.; Leyland, Jane A.; Warmbrodt, William

    1986-01-01

    Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, on-line graphics, and file management. This paper discusses five global design considerations that are useful to integrate array processor, multimicroprocessor, and host computer system architecture into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the non-real-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration will be briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind-tunnel environment, the control architecture can generally be applied to a wide range of automatic control applications.

  15. Micromechanisms of damage in 0 deg. splits in a [90/0]{sub s} composite material using synchrotron radiation computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Moffat, A.J. [School of Engineering Sciences, University of Southampton, University Road, Southampton, Hants SO17 1BJ (United Kingdom)], E-mail: ajmoffat@soton.ac.uk; Wright, P. [School of Engineering Sciences, University of Southampton, University Road, Southampton, Hants SO17 1BJ (United Kingdom); Buffiere, J.-Y. [MATEIS, INSA de Lyon, Universite de Lyon (France); Sinclair, I.; Spearing, S.M. [School of Engineering Sciences, University of Southampton, University Road, Southampton, Hants SO17 1BJ (United Kingdom)

    2008-11-15

    In situ synchrotron radiation computed tomography has been used to investigate 0 deg. ply splits in a [90/0]{sub s} carbon fibre-epoxy laminate. This technique allows for direct three-dimensional observations of damage. Micromechanisms such as pinning and bridging have been observed in rubber-toughened, resin-rich regions. Crack opening and shear displacements associated with 0 deg. splits have been quantified and this work demonstrates that this technique may be particularly useful for determining full-field strain maps around damage in composite materials.

  16. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    Science.gov (United States)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  17. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    Science.gov (United States)

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  18. A COMPUTER APPLICATION FOR THE ARCHITECTURAL PROGRAM DEVELOPMENT IN DESIGN EDUCATION

    Directory of Open Access Journals (Sweden)

    Daniel de Carvalho Moreira

    2012-02-01

    Full Text Available The development of the architectural program in the design studio faces several difficulties. The purpose of the program is to describe the conditions under which the building being designed will operate; this requires a great deal of information and organization. Due to this complexity, the architectural program definition in the design disciplines is often simplified. This article discusses this issue and proposes a computer application (SINFORMA) that gathers information about the building and the theme of the project in order to develop the architectural program based on structures proposed by bibliographic references. SINFORMA is composed of a framework that includes a database and modules that analyze and organize functional requirements according to the Problem Seeking method and the contemporary values of architecture enumerated by Hershberger. The article discusses how the application can be applied in design education and how it offers students a practical approach and comprehensive data analysis for the design of the built environment. Keywords: Architectural programming, Architectural design, Education.

  19. Design and Analysis of a Neuromemristive Reservoir Computing Architecture for Biosignal Processing.

    Science.gov (United States)

    Kudithipudi, Dhireesha; Saleh, Qutaiba; Merkel, Cory; Thesing, James; Wysocki, Bryant

    2015-01-01

    Reservoir computing (RC) is gaining traction in several signal processing domains, owing to its non-linear stateful computation, spatiotemporal encoding, and reduced training complexity over recurrent neural networks (RNNs). Previous studies have shown the effectiveness of software-based RCs for a wide spectrum of applications. A parallel body of work indicates that realizing RNN architectures using custom integrated circuits and reconfigurable hardware platforms yields significant improvements in power and latency. In this research, we propose a neuromemristive RC architecture, with a doubly twisted toroidal structure, that is validated for biosignal processing applications. We exploit the device mismatch to implement the random weight distributions within the reservoir and propose mixed-signal subthreshold circuits for energy efficiency. A comprehensive analysis is performed to compare the efficiency of the neuromemristive RC architecture in both digital (reconfigurable) and subthreshold mixed-signal realizations. Both Electroencephalogram (EEG) and Electromyogram (EMG) biosignal benchmarks are used for validating the RC designs. The proposed RC architecture demonstrated accuracies of 90% and 84% for epileptic seizure detection and EMG prosthetic finger control, respectively.
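
    For readers unfamiliar with reservoir computing, the sketch below shows the basic echo-state-network computation that such hardware accelerates: a fixed random recurrent reservoir whose states are read out by a trained linear layer. It is a generic software illustration, not the neuromemristive circuit described in the paper.

      # Generic echo state network sketch (not the paper's memristive design).
      import numpy as np

      rng = np.random.default_rng(1)
      n_in, n_res, n_steps = 1, 100, 500

      W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
      W = rng.uniform(-0.5, 0.5, (n_res, n_res))
      W *= 0.9 / max(abs(np.linalg.eigvals(W)))         # scale spectral radius below 1

      u = np.sin(np.linspace(0, 20, n_steps))[:, None]  # toy input signal
      target = np.roll(u, -1, axis=0)                   # task: predict the next sample

      # Drive the reservoir and collect its states
      x = np.zeros(n_res)
      states = np.zeros((n_steps, n_res))
      for t in range(n_steps):
          x = np.tanh(W_in @ u[t] + W @ x)
          states[t] = x

      # Train only the linear readout (ridge regression)
      ridge = 1e-6
      W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                              states.T @ target)
      pred = states @ W_out
      print("train MSE:", float(np.mean((pred - target) ** 2)))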

  20. Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Kaushik; Murphy, Mark; Volkov, Vasily; Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Patterson, David; Shalf, John; Yelick, Katherine

    2008-08-22

    Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations -- a class of algorithms at the heart of many structured grid codes, including PDE solvers. We develop a number of effective optimization strategies, and build an auto-tuning environment that searches over our optimizations and their parameters to minimize runtime, while maximizing performance portability. To evaluate the effectiveness of these strategies we explore the broadest set of multicore architectures in the current HPC literature, including the Intel Clovertown, AMD Barcelona, Sun Victoria Falls, IBM QS22 PowerXCell 8i, and NVIDIA GTX280. Overall, our auto-tuning optimization methodology results in the fastest multicore stencil performance to date. Finally, we present several key insights into the architectural trade-offs of emerging multicore designs and their implications on scientific algorithm development.
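
    For context, a stencil computation updates each grid point from its nearest neighbors, and the optimization strategies above target exactly this memory-bound pattern. A naive 3D 7-point stencil sweep is sketched below as a generic illustration; it is not the authors' tuned kernels, whose loops would be blocked, threaded and vectorized per architecture.

      # Naive 3D 7-point stencil sweep (the memory-bound kernel class studied above).
      import numpy as np

      def stencil_step(a, alpha=0.1):
          out = a.copy()
          out[1:-1, 1:-1, 1:-1] = a[1:-1, 1:-1, 1:-1] + alpha * (
              a[2:, 1:-1, 1:-1] + a[:-2, 1:-1, 1:-1] +
              a[1:-1, 2:, 1:-1] + a[1:-1, :-2, 1:-1] +
              a[1:-1, 1:-1, 2:] + a[1:-1, 1:-1, :-2] -
              6.0 * a[1:-1, 1:-1, 1:-1])
          return out

      grid = np.random.default_rng(0).random((64, 64, 64))
      for _ in range(10):
          grid = stencil_step(grid)      # auto-tuners block/vectorize this sweep
      print(grid.sum())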

  1. Stencil computation optimization and auto-tuning on state-of-the-art multicore architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Datta, K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Murphy, M. [Univ. of California, Berkeley, CA (United States); Volkov, V. [Univ. of California, Berkeley, CA (United States); Williams, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Carter, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Patterson, D. A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Shalf, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yelick, K. A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)

    2008-11-21

    Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations — a class of algorithms at the heart of many structured grid codes, including PDE solvers. We develop a number of effective optimization strategies, and build an auto-tuning environment that searches over our optimizations and their parameters to minimize runtime, while maximizing performance portability. To evaluate the effectiveness of these strategies we explore the broadest set of multicore architectures in the current HPC literature, including the Intel Clovertown, AMD Barcelona, Sun Victoria Falls, IBM QS22 PowerXCell 8i, and NVIDIA GTX280. Overall, our auto-tuning optimization methodology results in the fastest multicore stencil performance to date. Finally, we present several key insights into the architectural tradeoffs of emerging multicore designs and their implications on scientific algorithm development.

  2. The research of contamination regularities of historical buildings and architectural monuments by methods of computer modeling

    Directory of Open Access Journals (Sweden)

    Kuzmichev Andrey A.

    2017-01-01

    Full Text Available Due to rapid urbanization and industrial development, the external appearance of buildings and architectural monuments in the urban environment requires special attention from the standpoint of visual ecology. Dust deposition from polluted atmospheric air is one of the key drivers of facade degradation. With the help of modern computer modeling methods it is possible to evaluate the impact of polluted atmospheric air on the external facades of buildings in order to preserve them.

  3. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    Science.gov (United States)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current Petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving Exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for Multicore/Manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.
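
    The vectorization issue discussed above is easiest to see on the particle push: a per-particle loop either maps onto SIMD lanes or it does not. The sketch below contrasts a scalar loop with an array-at-a-time (SIMD-friendly) update for a simple electric-field push; it is a generic illustration, not PICSAR's or FBPIC's actual kernels.

      # Generic particle-push sketch: scalar loop vs. array-wide (SIMD-friendly) form.
      import numpy as np

      n, dt, qm = 100_000, 1e-3, 1.0
      rng = np.random.default_rng(0)
      x, v = rng.random(n), np.zeros(n)
      E = lambda pos: np.sin(2 * np.pi * pos)      # toy electric field

      # Scalar form: one particle per iteration (hard for compilers to vectorize
      # when the body contains data-dependent branches or gathers).
      def push_scalar(x, v):
          for i in range(len(x)):
              v[i] += qm * np.sin(2 * np.pi * x[i]) * dt
              x[i] += v[i] * dt
          return x, v

      # Array form: whole-array operations map naturally onto SIMD registers or GPUs.
      def push_vectorized(x, v):
          v += qm * E(x) * dt
          x += v * dt
          return x, v

      x, v = push_vectorized(x, v)
      print(x[:3], v[:3])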

  4. A Conceptual Architecture for Adaptive Human-Computer Interface of a PT Operation Platform Based on Context-Awareness

    Directory of Open Access Journals (Sweden)

    Qing Xue

    2014-01-01

    Full Text Available We present a conceptual architecture for an adaptive human-computer interface of a PT operation platform based on context-awareness. This architecture will form the basis of the design of such an interface. This paper describes the components, key technologies, and working principles of the architecture. The critical content covers context information modeling and processing, the establishment of relationships between contexts and interface design knowledge through adaptive knowledge reasoning, and the implementation of adaptive interface visualization with the aid of interface tools technology.

  5. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to utilize the Intel® Xeon Phi™ series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms and also offers tips and tricks for actual use, based on the authors' first-hand optimization experience. The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  6. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    Science.gov (United States)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  7. Fog Computing and Edge Computing Architectures for Processing Data From Diabetes Devices Connected to the Medical Internet of Things.

    Science.gov (United States)

    Klonoff, David C

    2017-07-01

    The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. The number of devices in IoT includes such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process it nearby the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries where laws may limit use or permit unwanted governmental access, and (5) lower costs because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.

  8. Molecular architectures based on π-conjugated block copolymers for global quantum computation

    International Nuclear Information System (INIS)

    Mujica Martinez, C A; Arce, J C; Reina, J H; Thorwart, M

    2009-01-01

    We propose a molecular setup for the physical implementation of a barrier global quantum computation scheme based on the electron-doped π-conjugated copolymer architecture of nine blocks PPP-PDA-PPP-PA-(CCH-acene)-PA-PPP-PDA-PPP (where each block is an oligomer). The physical carriers of information are electrons coupled through the Coulomb interaction, and the building block of the computing architecture is composed of three adjacent qubit systems in a quasi-linear arrangement, each of them allowing qubit storage, but with the central qubit exhibiting a third accessible state of electronic energy far away from that of the qubits' transition energy. The third state is reached from one of the computational states by means of an on-resonance coherent laser field, and acts as a barrier mechanism for the direct control of qubit entanglement. Initial estimations of the spontaneous emission decay rates associated with the energy level structure allow us to compute a damping rate of order 10^-7 s, which suggests a not so strong coupling to the environment. Our results offer an all-optical, scalable proposal for global quantum computing based on semiconducting π-conjugated polymers.

  9. Architecture Framework for Trapped-Ion Quantum Computer based on Performance Simulation Tool

    Science.gov (United States)

    Ahsan, Muhammad

    The challenge of building scalable quantum computer lies in striking appropriate balance between designing a reliable system architecture from large number of faulty computational resources and improving the physical quality of system components. The detailed investigation of performance variation with physics of the components and the system architecture requires adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling the quantum circuit on a realistic quantum hardware architecture with physical resource constraints, (2) evaluating the performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify system components which crucially define the overall performance. Using this versatile tool, we explore vast design space for modular quantum computer architecture based on trapped ions. We find that while success probability is uniformly determined by the fidelity of physical quantum operation, the execution time is a function of system resources invested at various layers of design hierarchy. At physical level, the number of lasers performing quantum gates, impact the latency of the fault-tolerant circuit blocks execution. When these blocks are used to construct meaningful arithmetic circuit such as quantum adders, the number of ancilla qubits for complicated non-clifford gates and entanglement resources to establish long-distance communication channels, become major performance limiting factors. Next, in order to factorize large integers, these adders are assembled into modular exponentiation circuit comprising bulk of Shor's algorithm. At this stage, the overall scaling of resource-constraint performance with the size of problem, describes the effectiveness of chosen design. By matching the resource investment with the pace of advancement in hardware technology

  10. VLSI architecture

    Energy Technology Data Exchange (ETDEWEB)

    Randell, B.; Treleaven, P.C.

    1983-01-01

    This book is a collection of course papers which discusses the latest (1982) milestone of electronic building blocks and its effect on computer architecture. Contributions range from selecting a VLSI process technology to Japan's Fifth Generation Computer Architecture. Contents, abridged: VLSI and machine architecture. Graphic design aids: HED and FATFREDDY. On the LUCIFER system. Clocking of VLSI circuits. Decentralised computer architectures for VLSI. Index.

  11. How computer science can help in understanding the 3D genome architecture.

    Science.gov (United States)

    Shavit, Yoli; Merelli, Ivan; Milanesi, Luciano; Lio', Pietro

    2016-09-01

    Chromosome conformation capture techniques are producing a huge amount of data about the architecture of our genome. These data can provide us with a better understanding of the events that induce critical regulations of the cellular function from small changes in the three-dimensional genome architecture. Generating a unified view of spatial, temporal, genetic and epigenetic properties poses various challenges of data analysis, visualization, integration and mining, as well as of high performance computing and big data management. Here, we describe the critical issues of this new branch of bioinformatics, oriented at the comprehension of the three-dimensional genome architecture, which we call 'Nucleome Bioinformatics', looking beyond the currently available tools and methods, and highlight yet unaddressed challenges and the potential approaches that could be applied for tackling them. Our review provides a map for researchers interested in using computer science for studying 'Nucleome Bioinformatics', to achieve a better understanding of the biological processes that occur inside the nucleus. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  12. An FPGA-Based Quantum Computing Emulation Framework Based on Serial-Parallel Architecture

    Directory of Open Access Journals (Sweden)

    Y. H. Lee

    2016-01-01

    Full Text Available Hardware emulation of quantum systems can mimic more efficiently the parallel behaviour of quantum computations, thus allowing higher processing speed-up than software simulations. In this paper, an efficient hardware emulation method that employs a serial-parallel hardware architecture targeted for field programmable gate array (FPGA is proposed. Quantum Fourier transform and Grover’s search are chosen as case studies in this work since they are the core of many useful quantum algorithms. Experimental work shows that, with the proposed emulation architecture, a linear reduction in resource utilization is attained against the pipeline implementations proposed in prior works. The proposed work contributes to the formulation of a proof-of-concept baseline FPGA emulation framework with optimization on datapath designs that can be extended to emulate practical large-scale quantum circuits.
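
    As a point of reference for what such an emulator computes, the following state-vector sketch runs Grover's search on 3 qubits in plain NumPy. The FPGA framework in the paper implements the equivalent linear-algebra datapath in serial-parallel hardware, which this software sketch does not attempt to model.

      # Plain state-vector Grover's search on 3 qubits (software reference only).
      import numpy as np

      n = 3
      N = 2 ** n
      marked = 5                                  # index of the marked item

      state = np.full(N, 1 / np.sqrt(N))          # uniform superposition
      oracle = np.eye(N); oracle[marked, marked] = -1
      diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)

      for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):   # ~2 iterations for N = 8
          state = diffusion @ (oracle @ state)

      probs = state ** 2
      print("most probable index:", int(np.argmax(probs)), "p =", float(probs[marked]))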

  13. Bio-signal analysis system design with support vector machines based on cloud computing service architecture.

    Science.gov (United States)

    Shen, Chia-Ping; Chen, Wei-Hsin; Chen, Jia-Ming; Hsu, Kai-Ping; Lin, Jeng-Wei; Chiu, Ming-Jang; Chen, Chi-Huang; Lai, Feipei

    2010-01-01

    Today, many bio-signals such as Electroencephalography (EEG) are recorded in digital format. Analyzing these digital bio-signals to extract useful health information is an emerging research area in biomedical engineering. In this paper, a bio-signal analyzing cloud computing architecture, called BACCA, is proposed. The system has been designed for seamless integration into the National Taiwan University Health Information System. Based on the concept of .NET Service Oriented Architecture, the system integrates heterogeneous platforms, protocols, as well as applications. In this system, we add modern analytic functions such as approximate entropy and adaptive support vector machines (SVM). It is shown that the overall accuracy of EEG bio-signal analysis has increased to nearly 98% for different data sets, including open-source and clinical data sets.
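
    Approximate entropy, mentioned above as one of the added analytic functions, quantifies the regularity of a signal. A compact reference implementation of the standard ApEn(m, r) definition is sketched below; it is a generic illustration, not the system's actual code.

      # Approximate entropy ApEn(m, r) of a 1-D signal (standard definition).
      import numpy as np

      def approximate_entropy(signal, m=2, r=None):
          x = np.asarray(signal, dtype=float)
          if r is None:
              r = 0.2 * x.std()                     # common choice of tolerance

          def phi(m):
              n = len(x) - m + 1
              templates = np.array([x[i:i + m] for i in range(n)])
              # Chebyshev distance between every pair of length-m templates
              dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              counts = np.sum(dist <= r, axis=1) / n
              return np.mean(np.log(counts))

          return phi(m) - phi(m + 1)

      eeg_like = (np.sin(np.linspace(0, 20, 300))
                  + 0.1 * np.random.default_rng(0).normal(size=300))
      print("ApEn:", approximate_entropy(eeg_like))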

  14. Emerging opportunities in enterprise integration with open architecture computer numerical controls

    Science.gov (United States)

    Hudson, Christopher A.

    1997-01-01

    The shift to open-architecture machine tool computer numerical controls is providing new opportunities for metalworking-oriented manufacturers to streamline the entire 'art to part' process. Production cycle times, accuracy, consistency, predictability and process reliability are just some of the factors that can be improved, leading to better manufactured products at lower costs. Open architecture controllers are allowing manufacturers to apply general purpose software and hardware tools where previous approaches relied on proprietary and unique hardware and software. This includes DNC, SCADA, CAD, and CAM, where the increasing use of general purpose components is leading to lower cost systems that are also more reliable and robust than the past proprietary approaches. In addition, a number of new opportunities exist which in the past were likely impractical due to cost or performance constraints.

  15. Intersecting Knowledge Fields and Integrating Data-Driven Computational Design en Route to Performance-Oriented and Intensely Local Architectures

    Directory of Open Access Journals (Sweden)

    Michael U Hensel

    2014-11-01

    Full Text Available This paper discusses research by design efforts in architectural education, focused on developing concepts and methods for the design of performance-oriented and intensely local architectures. The pursued notion of performance foregrounds the interaction between a given architecture and its local setting, with consequences not only for the design product but also for the related processes by which it is generated. Integrated approaches to data-driven computational design serve to generate such designs. The outlined approach shifts the focus of design attention away from the delivery of finite architectural objects and towards an expanded range of architecture-environment interactions that are registered, instrumentalised and modulated over time. This paper examines ongoing efforts in integrating specific architectural goals and approaches, computational data-driven design methods and generative design processes, based on a range of context-specific and often real-time data sets. The work discussed is produced in the context of the Research Centre for Architecture and Tectonics (RCAT and the Advanced Computational Design Laboratory (ACDL at the Oslo School of Architecture and Design.

  16. Developing a New Framework for Integration and Teaching of Computer Aided Architectural Design (CAAD) in Nigerian Schools of Architecture

    Science.gov (United States)

    Uwakonye, Obioha; Alagbe, Oluwole; Oluwatayo, Adedapo; Alagbe, Taiye; Alalade, Gbenga

    2015-01-01

    As a result of globalization of digital technology, intellectual discourse on what constitutes the basic body of architectural knowledge to be imparted to future professionals has been on the increase. This digital revolution has brought to the fore the need to review the already overloaded architectural education curriculum of Nigerian schools of…

  17. An Architecture Independent Approach to Emulating Computation Intensive Workload for Early Integration Testing of Enterprise DRE Systems

    Science.gov (United States)

    Hill, James H.

    Enterprise distributed real-time and embedded (DRE) systems are increasingly using high-performance computing architectures, such as dual-core architectures, multi-core architectures, and parallel computing architectures, to achieve optimal performance. Performing system integration tests on such architectures in realistic operating environments during early phases of the software lifecycle, i.e., before complete system integration time, is becoming more critical. This helps distributed system developers and testers evaluate and locate potential performance bottlenecks before they become too costly to locate and rectify. Traditional approaches either (1) rely heavily on simulation techniques or (2) are too low-level and fall outside the domain knowledge of distributed system developers and testers. Consequently, it is hard for distributed system developers and testers to produce realistic operating conditions for early integration testing of such systems.

  18. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    Science.gov (United States)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process the image data stored in the RRAM arrays. The proposed image storage architecture offers better speed and device-consumption efficiency compared with the previous kernel storage architecture. We further improve the architecture for high-accuracy and low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) more than a 67 times speed boost; 3) 71.4% energy saving.
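
    The workload referred to above, ten 3 × 3 kernels convolved over a 28 × 28 image, is shown below as a plain software reference. In the paper the multiply-accumulate work is performed in-array by the RRAM conductances rather than in a CPU loop; this sketch only defines the computation.

      # Software reference for the 28x28 image, ten 3x3-kernel convolution workload.
      import numpy as np

      rng = np.random.default_rng(0)
      image = rng.integers(0, 2, size=(28, 28)).astype(float)   # binary-stored image
      kernels = rng.normal(size=(10, 3, 3))

      def conv2d_valid(img, k):
          h, w = img.shape[0] - 2, img.shape[1] - 2
          out = np.zeros((h, w))
          for i in range(h):
              for j in range(w):
                  out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)   # multiply-accumulate
          return out

      feature_maps = np.stack([conv2d_valid(image, k) for k in kernels])
      print(feature_maps.shape)        # (10, 26, 26)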

  19. Spin Ensembles Coupled to Superconducting Resonators: A Scalable Architecture for Solid-State Quantum Computing

    International Nuclear Information System (INIS)

    Chen Chang-Yong; Li Shao-Hua; Hou Qi-Zhe

    2014-01-01

    A design is proposed for scalable solid-state quantum computing, which is based on collectively enhanced magnetic coupling between nitrogen-vacancy center ensembles and superconducting transmission line resonators interconnected by current-biased Josephson junction superconducting phase qubits. In this hybrid system, we realize distant multi-qubit controlled phase gate operations and generate distant multi-qubit entangled W-like states, an indispensable resource for quantum computation. Our proposed architecture consists of solid-state spin ensembles and circuit QED, and could achieve quantum computing in a solid-state environment in a high-fidelity and scalable way. The experimental feasibility is discussed, and the implementation efficiency is demonstrated numerically. (general)

  20. Apparatuses and Methods for Producing Runtime Architectures of Computer Program Modules

    Science.gov (United States)

    Abi-Antoun, Marwan Elia (Inventor); Aldrich, Jonathan Erik (Inventor)

    2013-01-01

    Apparatuses and methods for producing run-time architectures of computer program modules. One embodiment includes creating an abstract graph from the computer program module and from containment information corresponding to the computer program module, wherein the abstract graph has nodes including types and objects, and wherein the abstract graph relates an object to a type, and wherein for a specific object the abstract graph relates the specific object to a type containing the specific object; and creating a runtime graph from the abstract graph, wherein the runtime graph is a representation of the true runtime object graph, wherein the runtime graph represents containment information such that, for a specific object, the runtime graph relates the specific object to another object that contains the specific object.

  1. A Unified Computational Architecture for Preprocessing Visual Information in Space and Time.

    Science.gov (United States)

    Skrzypek, Josef

    1986-06-01

    The success of autonomous mobile robots depends on the ability to understand continuously changing scenery. Present techniques for analysis of images are not always suitable because in sequential paradigm, computation of visual functions based on absolute values of stimuli is inefficient. Important aspects of visual information are encoded in discontinuities of intensity, hence a representation in terms of relative values seems advantageous. We present the computing architecture of a massively parallel vision module which optimizes the detection of relative intensity changes in space and time. Visual information must remain constant despite variation in ambient light level or velocity of target and robot. Constancy can be achieved by normalizing motion and lightness scales. In both cases, basic computation involves a comparison of the center pixels with the context of surrounding values. Therefore, a similar computing architecture, composed of three functionally-different and hierarchically-arranged layers of overlapping operators, can be used for two integrated parts of the module. The first part maintains high sensitivity to spatial changes by reducing noise and normalizing the lightness scale. The result is used by the second part to maintain high sensitivity to temporal discontinuities and to compute relative motion information. Simulation results show that response of the module is proportional to contrast of the stimulus and remains constant over the whole domain of intensity. It is also proportional to velocity of motion limited to any small portion of the visual field. Uniform motion throughout the visual field results in constant response, independent of velocity. Spatial and temporal intensity changes are enhanced because computationally, the module resembles the behavior of a DOG function.
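
    The center-surround comparison described above is closely related to a difference-of-Gaussians (DOG) operator, which the abstract itself invokes. The sketch below illustrates such a center/surround response on a 1-D intensity profile; it is a generic illustration, not the module's three-layer implementation.

      # Generic center-surround (difference-of-Gaussians) response on a 1-D profile.
      import numpy as np

      def gaussian_kernel(sigma, radius):
          t = np.arange(-radius, radius + 1)
          k = np.exp(-t ** 2 / (2 * sigma ** 2))
          return k / k.sum()

      def dog_response(signal, sigma_center=1.0, sigma_surround=4.0):
          radius = int(3 * sigma_surround)
          center = np.convolve(signal, gaussian_kernel(sigma_center, radius), mode="same")
          surround = np.convolve(signal, gaussian_kernel(sigma_surround, radius), mode="same")
          return center - surround       # emphasizes intensity discontinuities

      profile = np.concatenate([np.full(50, 10.0), np.full(50, 20.0)])  # a step edge
      resp = dog_response(profile)
      print("peak response near the edge:", float(resp.max()))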

  2. Global Locator, Local Locator, and Identifier Split (GLI-Split

    Directory of Open Access Journals (Sweden)

    Michael Menth

    2013-03-01

    Full Text Available The locator/identifier split is an approach for a new addressing and routing architecture to make routing in the core of the Internet more scalable. Based on this principle, we developed the GLI-Split framework, which separates the functionality of current IP addresses into a stable identifier and two independent locators, one for routing in the Internet core and one for edge networks. This makes routing in the Internet more stable and provides more flexibility for edge networks. GLI-Split can be incrementally deployed and it is backward-compatible with the IPv6 Internet. We describe its architecture, compare it to other approaches, present its benefits, and finally present a proof-of-concept implementation of GLI-Split.
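
    A minimal data-structure sketch of the idea, splitting a node's address into a stable identifier plus a global and a local locator, is shown below. The field names and mapping service are illustrative only, not the exact GLI-Split encoding.

      # Illustrative locator/identifier split (field names are hypothetical).
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class GliAddress:
          identifier: str      # stable host identity, unchanged when the host moves
          global_locator: str  # routes the packet through the Internet core
          local_locator: str   # routes the packet inside the edge network

      # A mapping service resolves identifiers to current locators.
      mapping = {"host-42": ("core:0x2a", "edge:0x07")}

      def resolve(identifier):
          glob, loc = mapping[identifier]
          return GliAddress(identifier, glob, loc)

      # When the host moves, only its locators change; the identifier stays stable.
      mapping["host-42"] = ("core:0x99", "edge:0x01")
      print(resolve("host-42"))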

  3. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independently of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address

  4. The Activity-Based Computing Project - A Software Architecture for Pervasive Computing Final Report

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind

    This report describes the results of the Activity-Based Computing (ABC) project granted by the Danish Strategic Research Council, grant no. #2106-04-0019. In summary, we conclude that the ABC project has been highly successful. Not only has it met all of its objectives and expected results... Special attention should be drawn to publication [25], which gives an overview of the ABC project to the IEEE Pervasive Computing community; the ACM CHI 2006 [19] paper that documents the implementation of the ABC technology; and the ACM ToCHI paper [12], which is the main publication of the project..., documenting all of the project's four objectives. All of these publication venues are top-tier journals and conferences within computer science. From a business perspective, the project had the objective of incorporating relevant parts of the ABC technology into the products of Medical Insight, which has been...

  5. Hybrid Cloud Computing Architecture Optimization by Total Cost of Ownership Criterion

    Directory of Open Access Journals (Sweden)

    Elena Valeryevna Makarenko

    2014-12-01

    Full Text Available Achieving the goals of information security is a key factor in the decision to outsource information technology and, in particular, in the decision to migrate organizational data, applications, and other resources to an infrastructure based on cloud computing. The key issue in selecting an optimal architecture and subsequently migrating business applications and data to the cloud-based organizational information environment is the total cost of ownership of the IT infrastructure. This paper focuses on solving the problem of minimizing the total cost of ownership of the cloud.

  6. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar

    2012-01-01

    An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...

  7. Proceedings of the International Workshop on High-Level Language Computer Architecture, May 26-28, 1980, Fort Lauderdale, Florida

    Science.gov (United States)

    1980-06-01

    Department of Computer Science, University of Maryland, College Park, Maryland 20742. Supported in part by the National Science Foundation under grant GJ33097X. (The remainder of this record is illegible OCR; it refers to the SYMBOL computer architecture.)

  8. Computational Analysis to Factor Wind into the Design of an Architectural Environment

    Directory of Open Access Journals (Sweden)

    Hassam Nasarullah Chaudhry

    2015-01-01

    Full Text Available The effect of wind distribution on the architectural domain of the Bahrain Trade Centre was numerically analysed using computational fluid dynamics (CFD). Using the numerical data, the power generation potential of the building-integrated wind turbines was determined in response to the prevailing wind direction. The three-dimensional Reynolds-averaged Navier-Stokes (RANS) equations along with the momentum and continuity equations were solved for obtaining the velocity and pressure field. Simulating a reference wind speed of 6 m/s, the findings from the study quantified an estimated power generation of 6.4 kW, indicating a capacity factor of 2.9% for the benchmark model. At the windward side of the building, it was observed that the layers of turbulence intensified in inverse proportion to the height of the building with an average value of 0.45 J/kg. The air velocity was found to gradually increase in direct proportion to the elevation, with the turbine located at higher altitude receiving maximum exposure to incoming wind. This work highlighted the potential of using advanced computational fluid dynamics in order to factor wind into the design of any architectural environment.

  9. Peer-to-peer architectures for exascale computing : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Donald W.

    2010-09-01

    The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these

  10. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests an 80,000x improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.

  11. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.

  12. Ontology Design for Solving Computationally-Intensive Problems on Heterogeneous Architectures

    Directory of Open Access Journals (Sweden)

    Hossam M. Faheem

    2018-02-01

    Full Text Available Viewing a computationally-intensive problem as a self-contained challenge with its own hardware, software and scheduling strategies is an approach that should be investigated. We might suggest assigning heterogeneous hardware architectures to solve a problem, while parallel computing paradigms may play an important role in writing efficient code to solve the problem; moreover, the scheduling strategies may be examined as a possible solution. Depending on the problem complexity, finding the best possible solution using an integrated infrastructure of hardware, software and scheduling strategy can be a complex job. Developing and using ontologies and reasoning techniques play a significant role in reducing the complexity of identifying the components of such integrated infrastructures. Undertaking reasoning and inferencing regarding the domain concepts can help to find the best possible solution through a combination of hardware, software and scheduling strategies. In this paper, we present an ontology and show how we can use it to solve computationally-intensive problems from various domains. As a potential use for the idea, we present examples from the bioinformatics domain. Validation by using problems from the Elastic Optical Network domain has demonstrated the flexibility of the suggested ontology and its suitability for use with any other computationally-intensive problem domain.

  13. CaKernel – A Parallel Application Programming Framework for Heterogenous Computing Architectures

    Directory of Open Access Journals (Sweden)

    Marek Blazewicz

    2011-01-01

    Full Text Available With the recent advent of new heterogeneous computing architectures there is still a lack of parallel problem solving environments that can help scientists use hybrid supercomputers easily and efficiently. Many scientific simulations that use structured grids to solve partial differential equations in fact rely on stencil computations. Stencil computations have become crucial in solving many challenging problems in various domains, e.g., engineering or physics. Although many parallel stencil computing approaches have been proposed, in most cases they solve only particular problems. As a result, scientists are struggling when it comes to the subject of implementing a new stencil-based simulation, especially on high performance hybrid supercomputers. In response to the presented need we extend our previous work on CaCUDA, a parallel programming framework for CUDA, so that it now supports OpenCL. We present CaKernel – a tool that simplifies the development of parallel scientific applications on hybrid systems. CaKernel is built on the highly scalable and portable Cactus framework. In the CaKernel framework, Cactus manages the inter-process communication via MPI while CaKernel manages the code running on Graphics Processing Units (GPUs) and interactions between them. As a non-trivial test case we have developed a 3D CFD code to demonstrate the performance and scalability of the automatically generated code.

  14. Simple Systems - Complex Capacities. Integrative Processes of Computational Morphogenesis in Architecture

    Directory of Open Access Journals (Sweden)

    Achim Menges

    2011-11-01

    Full Text Available The complexity of the cultural, social, economic and particularly ecological context in which architecture is practised today necessitates design strategies and tactics that achieve a high level of integration of seemingly opposed demands and criteria within the material and construction systems we design. One possibility of unfolding novel synergies in such extreme conditions is to utilize the capacity of computers in the design process in an alternative way, one that foregrounds and instrumentalizes the innate capacities of materials, manufacturing and construction processes rather than merely elaborating form in the digital realm. The computational approach that will be presented here questions the nature of current design processes, but it is not a call for the replacement of the architect by computer driven design. Rather, under this approach, architects, instead of creating exuberant shapes subsequently rationalised for constructability and superimposed functions, are able to define specific material and construction systems by the combined logics of formation and materialisation encoded in generative processes of computational morphogenesis.

  15. Split Renal Function in Patients with Suspected Renal Artery Stenosis: a Comparison between Gamma Camera Renography and Two Methods of Measurement with Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerkman, H.; Ekloef, H.; Wadstroem, J.; Andersson, L.G.; Nyman, R.; Magnusson, A. [Uppsala Univ. (Sweden). Depts. of Oncology, Radiology and Clinical Immunology

    2006-02-15

    Purpose: To validate a method for calculating split renal function from computed tomography (CT) compared with gamma camera renography, and to test a new method for the measurement based on a volume-rendering technique. Material and Methods: Thirty-eight patients, aged 65.7±11.6 (range 37.8-82.1) years, who had undergone both CT angiography and gamma camera renography for a suspected renal artery stenosis were included in this study. Split renal function was calculated from the CT examinations by measuring area and mean attenuation in the image slices of the kidneys, and also by measuring volume and mean attenuation from a 3D reconstruction of the kidneys. Gamma camera renography with 99mTc-MAG3 with or without captopril enhancement was used as a reference. Results: The 2D CT method had good correlation with renography (r = 0.93). Mean difference was 4.7±3.6 (0-12) percentage points per kidney. There was also excellent correlation between the two CT methods (r = 1.00). Conclusion: CT is equivalent to renography in determining split renal function, and the measurement from the CT examination can be made more quickly and equally accurately with a 3D technique.
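
    The record's measurement weights each kidney by its segmented area or volume and its mean attenuation; a minimal sketch of that normalisation is given below. The function name and the example numbers are illustrative assumptions, not values from the study.

        # Minimal sketch: split renal function as each kidney's volume-times-attenuation share.
        def split_renal_function(vol_left, hu_left, vol_right, hu_right):
            left = vol_left * hu_left
            right = vol_right * hu_right
            total = left + right
            return 100.0 * left / total, 100.0 * right / total

        # Illustrative numbers only: a smaller, less enhancing left kidney.
        print(split_renal_function(140.0, 180.0, 170.0, 210.0))   # approx (41.4, 58.6)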

  16. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2017-07-31

    Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multipole Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.
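
    For readers unfamiliar with the P2P kernel mentioned above, the sketch below shows the underlying direct-sum operation (potential of each target due to all sources) in plain NumPy. It is only an illustration of the mathematics; the highly vectorized ExaFMM kernel discussed in the record is not reproduced here.

        import numpy as np

        def p2p(targets, sources, charges, eps=1e-12):
            # phi_i = sum_j q_j / |r_i - r_j|, with a small softening term.
            diff = targets[:, None, :] - sources[None, :, :]      # (Nt, Ns, 3)
            r = np.sqrt((diff ** 2).sum(axis=-1) + eps)           # pairwise distances
            return (charges[None, :] / r).sum(axis=1)             # (Nt,)

        targets = np.random.rand(50, 3)
        sources = np.random.rand(200, 3)
        phi = p2p(targets, sources, np.ones(200))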

  17. A Trusted Computing Architecture of Embedded System Based on Improved TPM

    Directory of Open Access Journals (Sweden)

    Wang Xiaosheng

    2017-01-01

    Full Text Available The Trusted Platform Module (TPM) currently used by PCs is not suitable for embedded systems, so it is necessary to improve the existing TPM. The paper proposes a trusted computing architecture with a new TPM and the cryptographic system developed by China for embedded systems. The improved TPM consists of the Embedded System Trusted Cryptography Module (eTCM) and the Embedded System Trusted Platform Control Module (eTPCM), which are combined and together implement the TPM’s autonomous control, active defense, high-speed encryption/decryption and other functions through an internal bus arbitration module and symmetric and asymmetric cryptographic engines to effectively protect the security of the embedded system. In our improved TPM, a trusted measurement method with a chain model and a star-type model is used. Finally, the improved TPM is implemented on an FPGA and applied to a trusted PDA to carry out experimental verification. Experiments show that the trusted architecture of the embedded system based on the improved TPM is efficient, reliable and secure.

  18. Computational memory architectures for autobiographic agents interacting in a complex virtual environment: a working model

    Science.gov (United States)

    Ho, Wan Ching; Dautenhahn, Kerstin; Nehaniv, Chrystopher

    2008-03-01

    In this paper, we discuss the concept of autobiographic agent and how memory may extend an agent's temporal horizon and increase its adaptability. These concepts are applied to an implementation of a scenario where agents are interacting in a complex virtual artificial life environment. We present computational memory architectures for autobiographic virtual agents that enable agents to retrieve meaningful information from their dynamic memories which increases their adaptation and survival in the environment. The design of the memory architectures, the agents, and the virtual environment are described in detail. Next, a series of experimental studies and their results are presented which show the adaptive advantage of autobiographic memory, i.e. from remembering significant experiences. Also, in a multi-agent scenario where agents can communicate via stories based on their autobiographic memory, it is found that new adaptive behaviours can emerge from an individual's reinterpretation of experiences received from other agents whereby higher communication frequency yields better group performance. An interface is described that visualises the memory contents of an agent. From an observer perspective, the agents' behaviours can be understood as individually structured, and temporally grounded, and, with the communication of experience, can be seen to rely on emergent mixed narrative reconstructions combining the experiences of several agents. This research leads to insights into how bottom-up story-telling and autobiographic reconstruction in autonomous, adaptive agents allow temporally grounded behaviour to emerge. The article concludes with a discussion of possible implications of this research direction for future autobiographic, narrative agents.

  19. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture.

    Science.gov (United States)

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-22

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  20. JPL control/structure interaction test bed real-time control computer architecture

    Science.gov (United States)

    Briggs, Hugh C.

    1989-01-01

    The Control/Structure Interaction Program is a technology development program for spacecraft that exhibit interactions between the control system and structural dynamics. The program objectives include development and verification of new design concepts - such as active structure - and new tools - such as combined structure and control optimization algorithm - and their verification in ground and possibly flight test. A focus mission spacecraft was designed based upon a space interferometer and is the basis for design of the ground test article. The ground test bed objectives include verification of the spacecraft design concepts, the active structure elements and certain design tools such as the new combined structures and controls optimization tool. In anticipation of CSI technology flight experiments, the test bed control electronics must emulate the computation capacity and control architectures of space qualifiable systems as well as the command and control networks that will be used to connect investigators with the flight experiment hardware. The Test Bed facility electronics were functionally partitioned into three units: a laboratory data acquisition system for structural parameter identification and performance verification; an experiment supervisory computer to oversee the experiment, monitor the environmental parameters and perform data logging; and a multilevel real-time control computing system. The design of the Test Bed electronics is presented along with hardware and software component descriptions. The system should break new ground in experimental control electronics and is of interest to anyone working in the verification of control concepts for large structures.

  1. A Development Architecture for Serious Games Using BCI (Brain Computer Interface) Sensors

    Directory of Open Access Journals (Sweden)

    Kyhyun Um

    2012-11-01

    Full Text Available Games that use brainwaves via brain–computer interface (BCI) devices to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce the development time and cost. However, it is desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe an architecture, authoring tools, and development process of the proposed methodology, and apply it to a game development approach for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories.

  2. A development architecture for serious games using BCI (brain computer interface) sensors.

    Science.gov (United States)

    Sung, Yunsick; Cho, Kyungeun; Um, Kyhyun

    2012-11-12

    Games that use brainwaves via brain-computer interface (BCI) devices, to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce the development time and cost. However, it is desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe an architecture, authoring tools, and development process of the proposed methodology, and apply it to a game development approach for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories.

  3. From variability tolerance to approximate computing in parallel integrated architectures and accelerators

    CERN Document Server

    Rahimi, Abbas; Gupta, Rajesh K

    2017-01-01

    This book focuses on computing devices and their design at various levels to combat variability. The authors provide a review of key concepts with particular emphasis on timing errors caused by various variability sources. They discuss methods to predict and prevent, detect and correct, and finally conditions under which such errors can be accepted; they also consider their implications on cost, performance and quality. Coverage includes a comparative evaluation of methods for deployment across various layers of the system from circuits, architecture, to application software. These can be combined in various ways to achieve specific goals related to observability and controllability of the variability effects, providing means to achieve cross layer or hybrid resilience. · Covers challenges and opportunities in identifying microelectronic variability and the resulting errors at various layers in the system abstraction; · Enables readers to assess how various levels of circuit and system design can mitigate t...

  4. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
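
    The "load and splat" idea can be pictured with a small NumPy sketch: one scalar element is replicated ("splatted") across a vector and combined with a whole vector operand in a multiply-add that accumulates a partial product. This is only an illustration of the data flow; the record describes hardware mechanisms, and the operand ordering here is one of several equivalent choices.

        import numpy as np

        def matmul_splat(A, B):
            m, k = A.shape
            k2, n = B.shape
            assert k == k2
            C = np.zeros((m, n))
            for i in range(m):
                for p in range(k):
                    splat = B[p, :] * A[i, p]   # A[i, p] replicated across the vector B[p, :]
                    C[i, :] += splat            # multiply-add accumulation of the partial product
            return C

        A, B = np.random.rand(4, 3), np.random.rand(3, 5)
        assert np.allclose(matmul_splat(A, B), A @ B)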

  5. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods, because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made, if a legitimate predictive capability is to be developed.

  6. ''Beauty of Wholeness and Beauty of Partiality.'' New Terms Defining the Concept of Beauty in Architecture in Terms of Sustainability and Computer Aided Design

    Science.gov (United States)

    Farid, Ayman A.; Zaghloul, Weaam M.; Dewidar, Khaled M.

    2014-01-01

    The great shift in sustainability and computer aided design in the field of architecture caused a remarkable change in the architecture philosophy, new aspects of beauty and aesthetic values are being introduced, and traditional definitions for beauty cannot fully cover these aspects, which causes a gap between new architecture works criticism and…

  7. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Science.gov (United States)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in development of specialized computer architecture for the algorithmic execution of an avionics system, guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on the critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  8. Control bandwidth improvements in GRAVITY fringe tracker by switching to a synchronous real time computer architecture

    Science.gov (United States)

    Abuter, Roberto; Dembet, Roderick; Lacour, Sylvestre; di Lieto, Nicola; Woillez, Julien; Eisenhauer, Frank; Fedou, Pierre; Phan Duc, Than

    2016-08-01

    The new VLTI (Very Large Telescope Interferometer) instrument GRAVITY is equipped with a fringe tracker able to stabilize the K-band fringes on six baselines at the same time. It has been designed to achieve a performance for average seeing conditions of a residual OPD (Optical Path Difference) lower than 300 nm with objects brighter than K = 10. The control loop implementing the tracking is composed of a four-stage real-time computer system comprising: a sensor where the detector pixels are read in and the OPD and GD (Group Delay) are calculated; a controller receiving the computed sensor quantities and producing commands for the piezo actuators; a concentrator which combines the OPD commands with the real-time tip/tilt corrections and offloads them to the piezo actuator; and finally a Kalman parameter estimator. This last stage is used to monitor current measurements over a window of a few seconds and estimate new values for the main Kalman control loop parameters. The hardware and software implementation of this design runs asynchronously and connects the four computers for data transfer via the Reflective Memory Network. With the purpose of improving the performance of the GRAVITY fringe tracking control loop, a deviation from the standard asynchronous communication mechanism has been proposed and implemented. This new scheme operates the four independent real-time computers involved in the tracking loop synchronously, using the Reflective Memory Interrupts as the coordination signal. This synchronous mechanism had the effect of reducing the total pure delay of the loop from 3.5 ms to 2.0 ms, which translates into a better stabilization of the fringes as the bandwidth of the system is substantially improved. This paper will explain in detail the real-time architecture of the fringe tracker in both its asynchronous and synchronous implementations. The achieved improvements on reducing the delay via this mechanism will be
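
    The connection between loop delay and achievable bandwidth can be sketched with a simple, generic calculation (the phase budget below is an assumed figure, not a GRAVITY design number): a pure delay tau contributes a phase lag of 2*pi*f*tau at frequency f, so for a fixed tolerable delay-induced phase lag the usable crossover frequency scales roughly as 1/tau.

        import math

        # Assumed phase budget for illustration only; not taken from the record.
        phase_budget_deg = 30.0
        for tau in (3.5e-3, 2.0e-3):                      # the delays quoted in the record
            f_max = math.radians(phase_budget_deg) / (2 * math.pi * tau)
            print(f"tau = {tau * 1e3:.1f} ms -> usable crossover ~ {f_max:.0f} Hz")

    Under this simplified model, cutting the delay from 3.5 ms to 2.0 ms raises the delay-limited crossover frequency by a factor of about 1.75.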

  9. Neuromorphic Computing, Architectures, Models, and Applications. A Beyond-CMOS Approach to Future Computing, June 29-July 1, 2016, Oak Ridge, TN

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Schuman, Catherine [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hylton, Todd [Brain Corporation, San Diego, CA (United States); Li, Hai [Univ. of Pittsburgh, PA (United States); Pino, Robinson [US Dept. of Energy, Washington, DC (United States)

    2016-12-31

    The White House and Department of Energy have been instrumental in driving the development of a neuromorphic computing program to help the United States continue its lead in basic research into (1) Beyond Exascale—high performance computing beyond Moore’s Law and von Neumann architectures, (2) Scientific Discovery—new paradigms for understanding increasingly large and complex scientific data, and (3) Emerging Architectures—assessing the potential of neuromorphic and quantum architectures. Neuromorphic computing spans a broad range of scientific disciplines from materials science to devices, to computer science, to neuroscience, all of which are required to solve the neuromorphic computing grand challenge. In our workshop we focus on the computer science aspects, specifically from a neuromorphic device through an application. Neuromorphic devices present a very different paradigm to the computer science community from traditional von Neumann architectures, which raises six major questions about building a neuromorphic application from the device level. We used these fundamental questions to organize the workshop program and to direct the workshop panels and discussions. From the white papers, presentations, panels, and discussions, there emerged several recommendations on how to proceed.

  10. Splitting Descartes

    DEFF Research Database (Denmark)

    Schilhab, Theresa

    2007-01-01

    Kognition og Pædagogik vol. 48:10-18. 2003 Short description: The cognitivistic paradigm and Descartes' view of embodied knowledge. Abstract: That the philosopher Descartes separated the mind from the body is hardly news: He did it so effectively that his name is forever tied to that division. But what exactly is Descartes' point? How does the Cartesian split hold up to recent biologically based learning theories?...

  11. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2017-11-22

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.

  12. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    Science.gov (United States)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
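
    One simple form such an automated consistency check can take is a histogram comparison between the vectorized model and the reference model. The sketch below uses a two-sample chi-square statistic on made-up bin counts; it is a generic illustration, not the GeantV test suite.

        import numpy as np

        def chi2_two_sample(h1, h2):
            # Two-sample chi-square over bins with at least one entry.
            h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
            mask = (h1 + h2) > 0
            return ((h1[mask] - h2[mask]) ** 2 / (h1[mask] + h2[mask])).sum()

        vectorized = np.array([120, 340, 560, 300, 90])   # illustrative bin counts
        reference  = np.array([115, 352, 548, 310, 95])
        print(chi2_two_sample(vectorized, reference))     # small value suggests consistency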

  13. T and D-Bench--Innovative Combined Support for Education and Research in Computer Architecture and Embedded Systems

    Science.gov (United States)

    Soares, S. N.; Wagner, F. R.

    2011-01-01

    Teaching and Design Workbench (T&D-Bench) is a framework aimed at education and research in the areas of computer architecture and embedded systems. It includes a set of features not found in other educational environments. This set of features is the result of an original combination of design requirements for T&D-Bench: that the…

  14. Proceedings of the 2nd International Workshop on Architectures, Concepts and Technologies for Service Oriented Computing (ACT4SOC 2008)

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Unknown, [Unknown

    2008-01-01

    This volume contains the proceedings of the Second International Workshop on Architectures, Concepts and Technologies for Service Oriented Computing (ACT4SOC 2008), held on July 5 in Porto, Portugal, in conjunction with the Third International Conference on Software and Data Technologies (ICSOFT

  15. Apolux : an innovative computer code for daylight design and analysis in architecture and urbanism

    Energy Technology Data Exchange (ETDEWEB)

    Claro, A.; Pereira, F.O.R.; Ledo, R.Z. [Santa Catarina Federal Univ., Florianopolis, SC (Brazil)

    2005-07-01

    The main capabilities of a new computer program for calculating and analyzing daylighting in architectural space were discussed. Apolux 1.0 was designed to use three-dimensional files generated in graphic editors in the data exchange file (DXF) format and was developed to integrate an architect's design characteristics. An example of its use in a design context development was presented. The program offers fast and flexible manipulation of video card models in different visualization conditions. The algorithm for working with the physics of light is based on the radiosity method representing the surfaces through finite elements divided in small triangular units of area which are fully confronted to each other. The form factors of each triangle are determined in relation to all others in the primary calculation. Visible directions of the sky are also included according to the modular units of a subdivided globe. Following these primary calculations, the different and successive daylighting solutions can be determined under different sky conditions. The program can also change the properties of the materials to quickly recalculate the solutions. The program has been applied in an office building in Florianopolis, Brazil. The four stages of design include initial discussion with the architects about the conceptual possibilities; development of a comparative study based on 2 architectural designs with different conceptual elements regarding daylighting exploitation in order to compare internal daylighting levels and distribution of the 2 options exposed to the same external conditions; study the solar shading devices for specific facades; and, simulations to test the performance of different designs. The program has proven to be very flexible with reliable results. It has the possibility of incorporating situations of the real sky through the input of the Spherical model of real sky luminance values. 3 refs., 14 figs.

  16. ANL/Star project: a new architecture for large scale theoretical physics computations

    Energy Technology Data Exchange (ETDEWEB)

    Rushton, A.M.

    1985-01-01

    The project reported consists of two phases, each of which has goals of substantial physics content on its own. In Phase 1, we have selected Star Technologies' ST-100 as the array processor for the prototype coupled system and have installed one on a Vax 11/750 host. Our goals with this system are to institute a substantial program in computational physics at Argonne based on the power provided by this system and thereby to gain experience with both the hardware and software architecture of the ST-100. In Phase II, we propose to build a prototype consisting of two coupled array processors with shared memory to prove that this design can achieve high speed and efficiency in a readily extensible and cost-effective manner. This will implement all of the hardware and software modifications necessary to extend this design to as many as 64 (or more) nodes. In our design, we seek to minimize the changes made in the standard system hardware and software; this drastically reduces the effort required by our group to implement such a design and enables us to more readily incorporate the companies' upgrades to the array processor. It should be emphasized that our design is intended as a special purpose system for theoretical calculations; however it can be efficiently applied to a surprisingly broad class of problems. I shall discuss first the architecture of the ST-100 and then the physics program being currently implemented on a single system. Finally the proposed design of the coupled system is presented.

  17. ANL/Star project: a new architecture for large scale theoretical physics computations

    International Nuclear Information System (INIS)

    Rushton, A.M.

    1985-01-01

    The project reported consists of two phases, each of which has goals of substantial physics content on its own. In Phase 1, we have selected Star Technologies' ST-100 as the array processor for the prototype coupled system and have installed one on a Vax 11/750 host. Our goals with this system are to institute a substantial program in computational physics at Argonne based on the power provided by this system and thereby to gain experience with both the hardware and software architecture of the ST-100. In Phase II, we propose to build a prototype consisting of two coupled array processors with shared memory to prove that this design can achieve high speed and efficiency in a readily extensible and cost-effective manner. This will implement all of the hardware and software modifications necessary to extend this design to as many as 64 (or more) nodes. In our design, we seek to minimize the changes made in the standard system hardware and software; this drastically reduces the effort required by our group to implement such a design and enables us to more readily incorporate the companies' upgrades to the array processor. It should be emphasized that our design is intended as a special purpose system for theoretical calculations; however it can be efficiently applied to a surprisingly broad class of problems. I shall discuss first the architecture of the ST-100 and then the physics program being currently implemented on a single system. Finally the proposed design of the coupled system is presented

  18. The Use of MARIE CPU Simulator in Computer Architecture Course: A Case Study of Student's Perception of Learning and Performance

    Directory of Open Access Journals (Sweden)

    Jorge Fernando Maxnuck Soares

    2016-12-01

    Full Text Available This study aims to show results of employing a case study in the use of Active Learning Practices in the Computer Architecture discipline. The practice in question is the use of Marie® CPU Simulator as a practical tool in the development of the course. The methodology of the study aims to verify whether the use of Marie® CPU Simulator contributes to improving the learning of the Computer Architecture discipline, especially whether it provides a better understanding of the parts that integrate the architecture of a given CPU, with an explanation of the function of the parts, and their interrelationship. This study shows the first results of a more comprehensive study on the use of active learning practices, using software in high-tech disciplines of an information system course. The secondary purpose is to show the application of the case study as a methodology outside the usual areas, such as: medicine, psychology and business administration. This study seeks to show the advantages and limitations found, highlighting its potential in the academic field in relation to the use of active learning practices in lessons of technical subjects, such as Computer Architecture, without losing scientific thoroughness in data processing and in the research methodology.

  19. A neuron-inspired computational architecture for spatiotemporal visual processing: real-time visual sensory integration for humanoid robots.

    Science.gov (United States)

    Holzbach, Andreas; Cheng, Gordon

    2014-06-01

    In this article, we present a neurologically motivated computational architecture for visual information processing. The computational architecture's focus lies in multiple strategies: hierarchical processing, parallel and concurrent processing, and modularity. The architecture is modular and expandable in both hardware and software, so that it can also cope with multisensory integrations - making it an ideal tool for validating and applying computational neuroscience models in real time under real-world conditions. We apply our architecture in real time to validate a long-standing biologically inspired visual object recognition model, HMAX. In this context, the overall aim is to supply a humanoid robot with the ability to perceive and understand its environment with a focus on the active aspect of real-time spatiotemporal visual processing. We show that our approach is capable of simulating information processing in the visual cortex in real time and that our entropy-adaptive modification of HMAX has a higher efficiency and classification performance than the standard model (up to ∼+6%).

  20. The advantage of the three dimensional computed tomographic (3D-CT) for ensuring accurate bone incision in sagittal split ramus osteotomy

    Directory of Open Access Journals (Sweden)

    Coen Pramono D

    2005-03-01

    Full Text Available Functional and aesthetic dysgnathia surgery requires accurate pre-surgical planning, including a choice of surgical technique related to the differences in anatomical structures amongst individuals. Programs that simulate the surgery become increasingly important. This can be mediated by using a surgical model, conventional x-rays such as panoramic and cephalometric projections, and another sophisticated method such as three dimensional computed tomography (3D-CT). A patient who had undergone double jaw surgeries with difficult anatomical landmarks is presented. In this case the mandibular foramens were positioned relatively high in relation to the sigmoid notches. Therefore, ensuring accurate bone incisions in the sagittal split was presumed to be difficult. A 3D-CT was made and considered to be very helpful in supporting the pre-operative diagnosis.

  1. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    International Nuclear Information System (INIS)

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-01-01

    The smoothed particle hydrodynamics (SPH), which is a class of meshfree particle methods (MPMs), has a wide range of applications from micro-scale to macro-scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation. Particle systems need a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Compared to the corresponding CPU implementation, our experimental results show that the new approach allows significant speedups of fluid simulation through handling a huge amount of computation in parallel on graphics hardware.
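
    The core SPH operation being parallelized is a weighted sum over neighbouring particles. The plain NumPy sketch below shows a density summation with a poly6-style kernel; it is a CPU illustration of the mathematics only, not the CUDA implementation from the record.

        import numpy as np

        def poly6_kernel(r, h):
            w = np.zeros_like(r)
            inside = r < h
            w[inside] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[inside]**2) ** 3
            return w

        def sph_density(positions, masses, h):
            # rho_i = sum_j m_j * W(|r_i - r_j|, h), using an all-pairs distance matrix.
            diff = positions[:, None, :] - positions[None, :, :]
            r = np.linalg.norm(diff, axis=-1)
            return (masses[None, :] * poly6_kernel(r, h)).sum(axis=1)

        pos = np.random.rand(50, 3)
        rho = sph_density(pos, np.full(50, 0.02), h=0.2)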

  2. On learning navigation behaviors for small mobile robots with reservoir computing architectures.

    Science.gov (United States)

    Antonelo, Eric Aislan; Schrauwen, Benjamin

    2015-04-01

    This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal.
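
    The defining trait of reservoir computing, a fixed random recurrent network with only a trained linear readout, can be condensed into a short echo-state-network sketch. All sizes, scalings and the ridge parameter below are illustrative assumptions, not settings from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_res, n_out = 3, 200, 2
        W_in  = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # spectral radius below 1

        def run_reservoir(inputs):
            x, states = np.zeros(n_res), []
            for u in inputs:                       # the reservoir itself is never trained
                x = np.tanh(W_in @ u + W_res @ x)
                states.append(x.copy())
            return np.array(states)

        U = rng.uniform(-1, 1, (500, n_in))        # stand-in sensor sequence
        Y = rng.uniform(-1, 1, (500, n_out))       # stand-in target motor commands
        X = run_reservoir(U)
        ridge = 1e-6                               # only this linear readout is fitted
        W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)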

  3. Irreversibility of T-Cell Specification: Insights from Computational Modelling of a Minimal Network Architecture.

    Directory of Open Access Journals (Sweden)

    Erica Manesso

    Full Text Available A cascade of gene activations under the control of Notch signalling is required during T-cell specification, when T-cell precursors gradually lose the potential to undertake other fates and become fully committed to the T-cell lineage. We elucidate how the gene/protein dynamics for a core transcriptional module governs this important process by computational means. We first assembled existing knowledge about transcription factors known to be important for T-cell specification to form a minimal core module consisting of TCF-1, GATA-3, BCL11B, and PU.1 aiming at dynamical modeling. Model architecture was based on published experimental measurements of the effects on each factor when each of the others is perturbed. While several studies provided gene expression measurements at different stages of T-cell development, pure time series are not available, thus precluding a straightforward study of the dynamical interactions among these genes. We therefore translate stage-dependent data into time series. A feed-forward motif with multiple positive feed-backs can account for the observed delay between BCL11B versus TCF-1 and GATA-3 activation by Notch signalling. With a novel computational approach, all 32 possible interactions among Notch signalling, TCF-1, and GATA-3 are explored by translating combinatorial logic expressions into differential equations for BCL11B production rate. Our analysis reveals that only 3 of 32 possible configurations, where GATA-3 works as a dimer, are able to explain not only the time delay, but very importantly, also give rise to irreversibility. The winning models explain the data within the 95% confidence region and are consistent with regard to decay rates. This first-generation model for early T-cell specification has relatively few players. Yet it explains the gradual transition into a committed state with no return. Encoding logics in a rate equation setting allows determination of binding properties beyond what is
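
    The modelling step of translating a combinatorial logic expression into a production-rate equation can be sketched as follows. The particular logic ("Notch AND TCF-1 AND GATA-3 acting as a dimer"), the Hill-style activation terms and all parameter values are illustrative assumptions, not the paper's fitted model.

        def bcl11b_rate(bcl11b, notch, tcf1, gata3, k_prod=1.0, k_dec=0.1, K=0.5):
            # AND-logic activation with a squared (dimer-like) GATA-3 term, minus first-order decay.
            act = (notch / (K + notch)) * (tcf1 / (K + tcf1)) * (gata3**2 / (K**2 + gata3**2))
            return k_prod * act - k_dec * bcl11b

        t, dt, bcl11b = 0.0, 0.01, 0.0
        while t < 50.0:                            # forward-Euler integration
            bcl11b += dt * bcl11b_rate(bcl11b, notch=1.0, tcf1=0.8, gata3=0.6)
            t += dt
        print(bcl11b)                              # approaches the steady state k_prod*act/k_dec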

  4. Open Computer Forensic Architecture a Way to Process Terabytes of Forensic Disk Images

    Science.gov (United States)

    Vermaas, Oscar; Simons, Joep; Meijer, Rob

    This chapter describes the Open Computer Forensics Architecture (OCFA), an automated system that dissects complex file types, extracts metadata from files and ultimately creates indexes on forensic images of seized computers. It consists of a set of collaborating processes, called modules. Each module is specialized in processing a certain file type. When it receives a so-called 'evidence', the information that has been extracted so far about the file together with the actual data, it either adds new information about the file or uses the file to derive a new 'evidence'. All evidence, original and derived, is sent to a router after being processed by a particular module. The router decides which module should process the evidence next, based upon the metadata associated with the evidence. Thus the OCFA system can recursively process images until from every compound file the embedded files, if any, are extracted, all information that the system can derive, has been derived and all extracted text is indexed. Compound files include, but are not limited to, archive- and zip-files, disk images, text documents of various formats and, for example, mailboxes. The output of an OCFA run is a repository full of derived files, a database containing all extracted information about the files and an index which can be used when searching. This is presented in a web interface. Moreover, processed data is easily fed to third party software for further analysis or to be used in data mining or text mining-tools. The main advantages of the OCFA system include scalability: it is able to process large amounts of data.
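
    The module-and-router pattern described above can be pictured with a small dispatch loop: each module handles one evidence type and may emit derived evidence, and the router chooses the next module from the evidence metadata. The module names, evidence fields and types below are invented for illustration and are not the OCFA interfaces.

        from collections import deque

        def zip_module(evidence):
            evidence["meta"]["unpacked"] = True
            # A real module would emit one derived evidence per embedded file.
            return [{"type": "text", "data": "hello world", "meta": {}}]

        def text_module(evidence):
            evidence["meta"]["indexed_words"] = len(evidence["data"].split())
            return []

        MODULES = {"zip": zip_module, "text": text_module}

        def router(initial_evidence):
            queue = deque(initial_evidence)
            while queue:
                ev = queue.popleft()
                handler = MODULES.get(ev["type"])
                if handler:
                    queue.extend(handler(ev))      # recursively process derived evidence

        router([{"type": "zip", "data": b"...", "meta": {}}])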

  5. High-speed, automatic controller design considerations for integrating array processor, multi-microprocessor, and host computer system architectures

    Science.gov (United States)

    Jacklin, S. A.; Leyland, J. A.; Warmbrodt, W.

    1985-01-01

    Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, online graphics, and file management. This paper discusses five global design considerations which are useful to integrate array processor, multimicroprocessor, and host computer system architectures into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the nonreal-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration is briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind tunnel environment, the controller architecture can generally be applied to a wide range of automatic control applications.

  6. The Jupyter/IPython architecture: a unified view of computational research, from interactive exploration to communication and publication.

    Science.gov (United States)

    Ragan-Kelley, M.; Perez, F.; Granger, B.; Kluyver, T.; Ivanov, P.; Frederic, J.; Bussonnier, M.

    2014-12-01

    IPython has provided terminal-based tools for interactive computing in Python since 2001. The notebook document format and multi-process architecture introduced in 2011 have expanded the applicable scope of IPython into teaching, presenting, and sharing computational work, in addition to interactive exploration. The new architecture also allows users to work in any language, with implementations in Python, R, Julia, Haskell, and several other languages. The language agnostic parts of IPython have been renamed to Jupyter, to better capture the notion that a cross-language design can encapsulate commonalities present in computational research regardless of the programming language being used. This architecture offers components like the web-based Notebook interface, that supports rich documents that combine code and computational results with text narratives, mathematics, images, video and any media that a modern browser can display. This interface can be used not only in research, but also for publication and education, as notebooks can be converted to a variety of output formats, including HTML and PDF. Recent developments in the Jupyter project include a multi-user environment for hosting notebooks for a class or research group, a live collaboration notebook via Google Docs, and better support for languages other than Python.

  7. Computer-aided tissue engineering: benefiting from the control over scaffold micro-architecture.

    Science.gov (United States)

    Tarawneh, Ahmad M; Wettergreen, Matthew; Liebschner, Michael A K

    2012-01-01

    Minimization schema in nature affects the material arrangements of most objects, independent of scale. The field of cellular solids has focused on the generalization of these natural architectures (bone, wood, coral, cork, honeycombs) for material improvement and elucidation into natural growth mechanisms. We applied this approach for the comparison of a set of complex three-dimensional (3D) architectures containing the same material volume but dissimilar architectural arrangements. Ball and stick representations of these architectures at varied material volumes were characterized according to geometric properties, such as beam length, beam diameter, surface area, space filling efficiency, and pore volume. Modulus, deformation properties, and stress distributions as contributed solely by architectural arrangements was revealed through finite element simulations. We demonstrated that while density is the greatest factor in controlling modulus, optimal material arrangement could result in equal modulus values even with volumetric discrepancies of up to 10%. We showed that at low porosities, loss of architectural complexity allows these architectures to be modeled as closed celled solids. At these lower porosities, the smaller pores do not greatly contribute to the overall modulus of the architectures and that a stress backbone is responsible for the modulus. Our results further indicated that when considering a deposition-based growth pattern, such as occurs in nature, surface area plays a large role in the resulting strength of these architectures, specifically for systems like bone. This completed study represents the first step towards the development of mathematical algorithms to describe the mechanical properties of regular and symmetric architectures used for tissue regenerative applications. The eventual goal is to create logical set of rules that can explain the structural properties of an architecture based solely upon its geometry. The information could

  8. Cloud/Fog Computing System Architecture and Key Technologies for South-North Water Transfer Project Safety

    Directory of Open Access Journals (Sweden)

    Yaoling Fan

    2018-01-01

    Full Text Available In view of the real-time and distributed features of Internet of Things (IoT safety system in water conservancy engineering, this study proposed a new safety system architecture for water conservancy engineering based on cloud/fog computing and put forward a method of data reliability detection for the false alarm caused by false abnormal data from the bottom sensors. Designed for the South-North Water Transfer Project (SNWTP, the architecture integrated project safety, water quality safety, and human safety. Using IoT devices, fog computing layer was constructed between cloud server and safety detection devices in water conservancy projects. Technologies such as real-time sensing, intelligent processing, and information interconnection were developed. Therefore, accurate forecasting, accurate positioning, and efficient management were implemented as required by safety prevention of the SNWTP, and safety protection of water conservancy projects was effectively improved, and intelligential water conservancy engineering was developed.

  9. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

    Science.gov (United States)

    Hoo-Chang, Shin; Roth, Holger R.; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel

    2016-01-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet) and the revival of deep convolutional neural networks (CNN). CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs for medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models (supervised) pre-trained on a natural image dataset to medical image tasks (although domain transfer between two medical image datasets is also possible). In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, with 85% sensitivity at 3 false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance
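
    The fine-tuning strategy evaluated in the record can be sketched in a few lines of PyTorch. The choice of ResNet-18, the two-class head and the dummy batch are stand-ins for illustration; the paper's own architectures, data pipelines and training schedules are not reproduced here.

        import torch
        import torch.nn as nn
        from torchvision import models

        model = models.resnet18(pretrained=True)          # ImageNet-pretrained backbone
        for p in model.parameters():
            p.requires_grad = False                       # freeze the transferred features
        model.fc = nn.Linear(model.fc.in_features, 2)     # new task-specific classifier head

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()

        x = torch.randn(8, 3, 224, 224)                   # dummy batch standing in for CT patches
        y = torch.randint(0, 2, (8,))
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()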

  10. Hardware Architectures for Data-Intensive Computing Problems: A Case Study for String Matching

    Energy Technology Data Exchange (ETDEWEB)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    2012-12-28

    DNA analysis is an emerging application of high performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data, which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphic Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search and on the number of matches, and poses significant challenges on current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. In this paper, we discuss the implementation of the Aho-Corasick algorithm for GPU-accelerated high performance systems. We present an optimized implementation of Aho-Corasick for GPUs and discuss its tradeoffs on the Tesla T10 and the new Tesla T20 (codename Fermi) GPUs. We then integrate the optimized GPU code, respectively, in an MPI-based and in a pthreads-based load balancer to enable execution of the algorithm on clusters and large shared-memory multiprocessors (SMPs) accelerated with multiple GPUs.
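
    For readers unfamiliar with the algorithm, a minimal CPU-side Aho-Corasick automaton is sketched below; it is illustrative only and unrelated to the paper's GPU implementation:

```python
# Illustrative sketch: exact multiple-pattern matching with Aho-Corasick,
# the algorithm the paper accelerates on GPUs.
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]      # state 0 is the root
    for p in patterns:                         # 1) build the pattern trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    queue = deque(goto[0].values())            # 2) BFS to set failure links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            nxt = goto[f].get(ch, 0)
            fail[t] = nxt if nxt != t else 0
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(text, patterns):
    """Return (start_index, pattern) for every match of any pattern in text."""
    goto, fail, out = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits

# e.g. search("panamabananas", ["ana", "ban"]) finds 'ana' at 1, 7, 9 and 'ban' at 6
```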

  11. Cone-beam computed tomography evaluation on the condylar displacement following sagittal split ramus osteotomy in asymmetric setback patients: Comparison between conventional approach and surgery-first approach.

    Science.gov (United States)

    Oh, Min-Hee; Hwang, Hyeon-Shik; Lee, Kyung-Min; Cho, Jin-Hyoung

    2017-09-01

    To compare the condylar displacement following sagittal split ramus osteotomy (SSRO) in asymmetric setback patients between the conventional approach and the surgery-first approach, and to determine whether the condylar displacement is affected by asymmetric setback in SSRO patients. This was a retrospective study. The subjects consisted of patients with facial asymmetry who underwent SSRO and had cone-beam computed tomography taken before and 1 month after surgery. They were allocated into the conventional (n = 18) and surgery-first (SF) (n = 20) groups. Descriptive statistics, independent t-tests and Pearson correlation analyses were computed. The amount of condylar displacement in the x-, y-, and z-directions and the Euclidean distance showed no statistically significant differences between the conventional and SF groups. Comparing the postoperative condylar position with the preoperative position, significant posterior condylar displacement occurred in both groups except on the deviated side in the conventional group, and posterior displacement was likewise observed in the SF group. However, the condylar displacement in three dimensions showed no statistically significant differences between the two groups. In the correlation analysis, the condylar displacement on both the deviated and contralateral sides showed no significant correlation with asymmetric setback in either group. The condylar displacement in three dimensions and the distance of condylar displacement in SSRO patients with facial asymmetry showed no significant difference between the conventional and SF groups. Condylar displacement was not associated with asymmetric setback.

  12. Micro-computed tomography assessment of human alveolar bone: bone density and three-dimensional micro-architecture.

    Science.gov (United States)

    Kim, Yoon Jeong; Henkin, Jeffrey

    2015-04-01

    Micro-computed tomography (micro-CT) is a valuable means to evaluate and secure information related to bone density and quality in human necropsy samples and small live animals. The aim of this study was to assess the bone density of the alveolar jaw bones in human cadavers, using micro-CT. The correlation between bone density and the three-dimensional micro-architecture of trabecular bone was evaluated. Thirty-four human cadaver jaw bone specimens were harvested. Each specimen was scanned with micro-CT at a resolution of 10.5 μm. The bone volume fraction (BV/TV) and the bone mineral density (BMD) value within a volume of interest were measured. The three-dimensional micro-architecture of trabecular bone was assessed. All the parameters in the maxilla and the mandible were subject to comparison. The variables for bone density and three-dimensional micro-architecture were analyzed for nonparametric correlation using Spearman's rho at the significance level of p < .05. Bone density and micro-architecture parameters were consistently higher in the mandible, up to 3.3 times greater than those in the maxilla. The strongest linear correlation was observed between BV/TV and BMD, with Spearman's rho = 0.99 (p = .01). Both BV/TV and BMD were highly correlated with all micro-architecture parameters, with Spearman's rho above 0.74 (p = .01). Two aspects of bone density obtained with micro-CT, BV/TV and BMD, are highly correlated with three-dimensional micro-architecture parameters, which represent the quality of trabecular bone. This noninvasive method may adequately enhance evaluation of the alveolar bone. © 2013 Wiley Periodicals, Inc.

  13. A 3D architecture platform dedicated to high-speed computation for power system

    OpenAIRE

    Fabre, Laurent; Sallin, Denis; Lanz, Guillaume; Kyriakidis, Theodoros; Nagel, Ira; Cherkaoui, Rachid; Kayal, Maher

    2013-01-01

    This paper presents an innovative 3D hardware architecture for power system dynamic and transient stability analysis. Based on an intrinsically parallel architecture built from mixed-signal (analog and digital) circuits, it exceeds the speed of numerical simulators for given models. This approach does not compete with the accuracy and model complexity of high-performance numerical simulators; it intends to complement them with the advantages of speed, low cost, portability and autonomous functions. The p...

  14. Designing fault-tolerant real-time computer systems with diversified bus architecture for nuclear power plants

    International Nuclear Information System (INIS)

    Behera, Rajendra Prasad; Murali, N.; Satya Murty, S.A.V.

    2014-01-01

    Fault-tolerant real-time computer (FT-RTC) systems are widely used to perform safe operation of nuclear power plants (NPP) and safe shutdown in the event of any untoward situation. Such systems require high reliability, availability, computational ability for measurement via sensors, control action via actuators, data communication and a human interface via keyboard or display. All these attributes of FT-RTC systems are required to be implemented using best known methods, such as redundant system design using a diversified bus architecture to avoid common cause failure, fail-safe design to avoid unsafe failure, and diagnostic features to validate system operation. In this context, the system designer must select an efficient as well as highly reliable diversified bus architecture in order to realize a fault-tolerant system design. This paper presents a comparative study between the CompactPCI bus and the Versa Module Eurocard (VME) bus architecture for designing FT-RTC systems with a switch over logic system (SOLS) for NPP. (author)

  15. [Evaluation of bone architecture and biomechanic properties by peripheral quantitative computed tomography in rats].

    Science.gov (United States)

    Xing, Xiao-ping; Xia, Wei-bo; Meng, Xun-wu; Zhou, Xue-ying; Hu, Ying-ying; Liu, Huai-cheng

    2003-05-10

    To evaluate the value of peripheral quantitative computed tomography (pQCT) in measuring bone architecture and biomechanic properties. Fifty six-month-old virgin female Wistar rats were randomly divided into 4 groups: (1) 8 rats were killed as the baseline group; (2) 8 rats underwent sham operation and were killed 14 weeks later (sham operation group); (3) 16 rats underwent bilateral ovariectomy (OVX) without further intervention, and 8 rats each were killed 6 and 14 weeks after the operation (OVX group); and (4) 18 rats underwent OVX too; after the OVX, 9 of the 18 rats were treated with 17beta-estradiol 20 μg/kg/d IH and 9 rats were treated with estradiol valerate 800 μg/kg/d po for 8 weeks, and then the 18 rats were killed (OVX plus estrogen group, O + E group). The right tibiae of the rats were taken for histomorphometric analysis, and the right femora were prepared for pQCT scanning and bone biomechanical measurement with the indentation test and the three-point bending test. Histomorphometric analysis showed that the trabecular bone volume of the proximal tibia (Cn-BV/TV) in the OVX group was 8.1 +/- 1.4%, significantly lower than that in the sham operation group (19.5 +/- 1.5%). Significant changes were also found in the biomechanic properties measured by the three-point bending test after OVX and estrogen treatment. A significant positive correlation was shown between Trab BMD and Cn-BV/TV and between Trab BMD and Tb N (r = 0.88 and 0.73, both P < 0.01). Similarly, both Trab BMC and Trab BMD of the femur were significantly correlated with the Can load and Can Stiff determined by the indentation test (r = 0.47 - 0.68, all P < 0.01). There was also a significant correlation of parameters measured by pQCT in cortical bone with the maximal load and stiffness of the femur midshaft, and the best correlation was found between the maximal load of the femur midshaft and Crt BMC and Crt A (both r = 0.76, P < 0.01). The geometric, densitometric and mechanical properties of cortical and trabecular bone in rats can be well evaluated by pQCT.

  16. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    Science.gov (United States)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist, neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition-by-components. It also seems to support Marr's notions

  17. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    International Nuclear Information System (INIS)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-01-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated
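
    A generic sketch of the preconditioned conjugate gradient iteration mentioned above (illustrative only, not the DANTE solver; the operator and preconditioner are supplied as callables):

```python
# Illustrative sketch: preconditioned conjugate gradient (PCG) for a symmetric
# positive definite operator A with preconditioner M^{-1}, both given as callables.
import numpy as np

def pcg(apply_A, apply_Minv, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b, dtype=float)
    r = b - apply_A(x)                 # initial residual
    z = apply_Minv(r)                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # update the search direction
        rz = rz_new
    return x
```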

  18. Using Computer Modelling and Virtual Reality to Explore the Ideological Dimensions of Thule Whalebone Architecture in Arctic Canada

    Directory of Open Access Journals (Sweden)

    Peter C. Dawson

    2005-09-01

    Full Text Available Arctic archaeologists have long suspected that the whalebones used to construct semi-subterranean winter houses by Thule culture peoples were symbolically resonant. These assumptions are based on observations of the non-utilitarian use of jaw bones and crania in Thule house ruins, and ethnographic descriptions of architectural symbolism relating to the whale hunt in Historic Alaskan Inupiat houses. In this paper, we use a 3-dimensional computer reconstruction of a semi-subterranean whalebone house to search for visual expressions of whaling-related ritual in Thule architecture. Results suggest that the whalebone superstructure may have been designed to evoke important themes when viewed from specific locations within the house, and under different lighting conditions. These themes, which appear in Inupiat myths and stories, involve the belief that women transform houses into living whales during the time of the hunt.

  19. Split School of High Energy Physics 2015

    CERN Document Server

    2015-01-01

    Split School of High Energy Physics 2015 (SSHEP 2015) was held at the Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB), University of Split, from September 14 to September 18, 2015. SSHEP 2015 aimed at master and PhD students who were interested in topics pertaining to High Energy Physics. SSHEP 2015 is the sixth edition of the High Energy Physics School. Previous five editions were held at the Department of Physics, University of Sarajevo, Bosnia and Herzegovina.

  20. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    Science.gov (United States)

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
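
    A minimal sketch of monitoring through library instrumentation, the mechanism described above (illustrative only, not the authors' RTM; the decorator-based wrapper and the collected metrics are assumptions):

```python
# Illustrative sketch: record call counts and wall-clock time per function via a
# wrapper, so a resource manager could later act on the collected statistics.
import time
from collections import defaultdict
from functools import wraps

stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def monitored(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            entry = stats[func.__qualname__]
            entry["calls"] += 1
            entry["seconds"] += time.perf_counter() - start
    return wrapper

@monitored
def hot_kernel(n):
    return sum(i * i for i in range(n))

# After running the application, `stats` holds per-function run-time behaviour.
```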

  1. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junghoon Lee

    2011-03-01

    Full Text Available Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.

  2. Delta: An object-oriented finite element code architecture for massively parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.

    1996-02-01

    Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different Engineering Science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.

  3. Compiling for Novel Scratch Pad Memory based Multicore Architectures for Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Aviral

    2016-02-05

    The objective of this proposal is to develop tools and techniques (in the compiler) to manage data of a task and communication among tasks on the scratch pad memory (SPM) of the core, so that any application (a set of tasks) can be executed efficiently on an SPM based manycore architecture.

  4. Designing with Space Syntax : A configurative approach to architectural layout, proposing a computational methodology

    NARCIS (Netherlands)

    Nourian, P.; Rezvani, S.; Sariyildiz, I.S.

    2013-01-01

    This paper introduces a design methodology and a toolkit developed as a parametric CAD program for configurative design of architectural plan layouts. Using this toolkit, designers can start plan layout process with sketching the way functional spaces need to connect to each other. A tool draws an

  5. Issues of Control and Command in Digital Design and Architectural Computation

    NARCIS (Netherlands)

    Chaszar, A.T.

    2016-01-01

    Issues of control and command in architecture are considered here via reflections on recent and current research projects concerning digital technologies. The projects’ topics cover a range of scales and approaches, from the planning and design of urban ensembles to the detailing of panels for

  6. Iterative Splitting Methods for Differential Equations

    CERN Document Server

    Geiser, Juergen

    2011-01-01

    Iterative Splitting Methods for Differential Equations explains how to solve evolution equations via novel iterative-based splitting methods that efficiently use computational and memory resources. It focuses on systems of parabolic and hyperbolic equations, including convection-diffusion-reaction equations, heat equations, and wave equations. In the theoretical part of the book, the author discusses the main theorems and results of the stability and consistency analysis for ordinary differential equations. He then presents extensions of the iterative splitting methods to partial differential equations.
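
    As a brief illustration of the underlying idea (not an excerpt from the book), for an evolution equation u'(t) = (A + B) u(t) a single splitting step of size τ solves the two sub-problems sequentially:

```latex
% Lie-Trotter splitting: first order in \tau
\[
  u(t+\tau) \;\approx\; e^{\tau B}\, e^{\tau A}\, u(t),
\]
% Strang (symmetric) splitting: second order in \tau
\[
  u(t+\tau) \;\approx\; e^{\tau A/2}\, e^{\tau B}\, e^{\tau A/2}\, u(t).
\]
```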

  7. Compiling for Application Specific Computational Acceleration in Reconfigurable Architectures Final Report CRADA No. TSB-2033-01

    Energy Technology Data Exchange (ETDEWEB)

    De Supinski, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Caliga, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-28

    The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA") based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.

  8. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    Science.gov (United States)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  9. Formal computer-aided product family architecture design for mass customization

    DEFF Research Database (Denmark)

    Bonev, Martin; Hvam, Lars; Clarkson, John

    2015-01-01

    With product customization companies aim at creating higher customer value and stronger economic benefits. The profitability of the offered variety relies on the quality of the developed product family architectures and their consistent implementation in configuration systems. Yet existing methods are informal, providing limited support for domain experts to communicate, synthesize and document architectures effectively. In single product design, explicit visual models such as design structure matrices and node-link diagrams have been used in combination with structural analysis methods to overcome the limitation of the informal approach. Drawing on thereto established best practises, this paper evaluates and extends the relevant methods and modelling techniques, to create a consistent and formal approach for the design and customization of entire product families. To validate its applicability...

  10. The NIST Real-Time Control System (RCS): A Reference Model Architecture for Computational Intelligence

    Science.gov (United States)

    Albus, James S.

    1996-01-01

    The Real-time Control System (RCS) developed at NIST and elsewhere over the past two decades defines a reference model architecture for design and analysis of complex intelligent control systems. The RCS architecture consists of a hierarchically layered set of functional processing modules connected by a network of communication pathways. The primary distinguishing feature of the layers is the bandwidth of the control loops. The characteristic bandwidth of each level is determined by the spatial and temporal integration window of filters, the temporal frequency of signals and events, the spatial frequency of patterns, and the planning horizon and granularity of the planners that operate at each level. At each level, tasks are decomposed into sequential subtasks, to be performed by cooperating sets of subordinate agents. At each level, signals from sensors are filtered and correlated with spatial and temporal features that are relevant to the control function being implemented at that level.

  11. A Very Compact AES-SPIHT Selective Encryption Computer Architecture Design with Improved S-Box

    Directory of Open Access Journals (Sweden)

    Jia Hao Kong

    2013-01-01

    Full Text Available The “S-box” algorithm is a key component in the Advanced Encryption Standard (AES) due to its nonlinear property. Various implementation approaches have been researched and discussed meeting stringent application goals (such as low power, high throughput, and low area), but the ultimate goal for many researchers is to find a compact and small hardware footprint for the S-box circuit. In this paper, we present our version of a minimized S-box with two separate proposals and improvements in the overall gate count. The compact S-box is adopted within a compact and optimum processor architecture specifically tailored for the AES, namely, the compact instruction set architecture (CISA). To further justify and strengthen the purpose of the compact crypto-processor’s application, we have also presented a selective encryption architecture (SEA) which incorporates the CISA as a part of the encryption core, accompanied by the set partitioning in hierarchical trees (SPIHT) algorithm as a complete selective encryption system.

  12. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    Science.gov (United States)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to assess the versatility and ease of use of such documentation tools for studying architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, a freely available web-based Image Based Modelling (IBM) system that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to compare the moldings, highlighting similarities and differences. Working at different sites and scales of detail allowed us to test the procedure under different conditions of exposure, sunlight, accessibility, surface degradation and material type, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or to larger or smaller elements.

  13. The performance of a new Geant4 Bertini intra-nuclear cascade model in high throughput computing (HTC) cluster architecture

    Energy Technology Data Exchange (ETDEWEB)

    Aatos, Heikkinen; Andi, Hektor; Veikko, Karimaki; Tomas, Linden [Helsinki Univ., Institute of Physics (Finland)

    2003-07-01

    We study the performance of a new Bertini intra-nuclear cascade model implemented in the general detector simulation tool-kit Geant4 with a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III open-Mosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)

  14. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Full Text Available Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some of the services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in their capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in their network resources, and (3) the high network latency to the centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work we propose an architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm, formulated as the decision rules of a linearized decision tree based on three conditions (service size, completion time, and VM capacity), for managing and delegating user requests in order to balance workload. Moreover, we propose an algorithm to allocate resources to meet service level agreement (SLA) and quality of service (QoS) requirements, as well as optimizing big data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance workload, improve resource allocation, optimize big data distribution, and show better performance than other existing methods.
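
    A minimal sketch of such delegation decision rules (illustrative only; the field names, thresholds, and exact rule structure are assumptions, not the paper's algorithm):

```python
# Illustrative sketch: delegate a request to fog or cloud using simple rules on
# service size, completion-time requirement, and available fog VM capacity.
from dataclasses import dataclass

@dataclass
class Request:
    size_mb: float       # size of the service/data to process
    deadline_s: float     # required completion time

def delegate(req: Request, fog_vm_capacity_mb: float,
             small_size_mb: float = 50.0, tight_deadline_s: float = 1.0) -> str:
    """Return 'fog' for small or delay-sensitive requests that fit the fog VMs,
    otherwise 'cloud'. Thresholds are hypothetical."""
    fits_fog = req.size_mb <= fog_vm_capacity_mb
    if fits_fog and (req.size_mb <= small_size_mb or req.deadline_s <= tight_deadline_s):
        return "fog"
    return "cloud"

# Example: a 10 MB request with a 0.5 s deadline is served at the edge:
# delegate(Request(size_mb=10, deadline_s=0.5), fog_vm_capacity_mb=500) -> 'fog'
```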

  15. Cloud computing solutions for the Marine Corps: an architecture to support expeditionary logistics

    OpenAIRE

    Ibatuan, Charles R., II

    2013-01-01

    Approved for public release; distribution is unlimited The Department of Defense (DoD) is planning an aggressive move toward cloud computing technologies. This concept has been floating around the private information technology sector for a number of years and has benefited organizations with cost savings, increased efficiencies, and flexibility by sharing computer resources through networked connections. The push for cloud computing has been driven by the 25 Point Implementation Plan to R...

  16. A Compute Capable SSD Architecture for Next-Generation Non-volatile Memories

    Energy Technology Data Exchange (ETDEWEB)

    De, Arup [Univ. of California, San Diego, CA (United States)

    2014-01-01

    Existing storage technologies (e.g., disks and flash) are failing to cope with the processor and main memory speed and are limiting the overall performance of many large scale I/O or data-intensive applications. Emerging fast byte-addressable non-volatile memory (NVM) technologies, such as phase-change memory (PCM), spin-transfer torque memory (STTM) and memristor are very promising and are approaching DRAM-like performance with lower power consumption and higher density as process technology scales. These new memories are narrowing down the performance gap between the storage and the main memory and are putting forward challenging problems on existing SSD architecture, I/O interface (e.g., SATA, PCIe) and software. This dissertation addresses those challenges and presents a novel SSD architecture called XSSD. XSSD offloads computation in storage to exploit fast NVMs and reduce the redundant data traffic across the I/O bus. XSSD offers a flexible RPC-based programming framework that developers can use for application development on SSD without dealing with the complication of the underlying architecture and communication management. We have built a prototype of XSSD on the BEE3 FPGA prototyping system. We implement various data-intensive applications and achieve speedup and energy efficiency of 1.5-8.9 and 1.7-10.27 respectively. This dissertation also compares XSSD with previous work on intelligent storage and intelligent memory. The existing ecosystem and these new enabling technologies make this system more viable than earlier ones.

  17. Inter-computer communication architecture for a mixed redundancy distributed system

    Science.gov (United States)

    Lala, Jaynarayan H.; Adams, Stuart J.

    1987-01-01

    The triply redundant intercomputer network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects and a simulation of the AIPS contention poll, demonstrate the robustness of the system.

  18. Communicational Architecture and Computational Processing Robustness on Distributed Controller System in Reconfigurable Brachiating Space Robot

    Science.gov (United States)

    Yamamoto, Hiroshi; Matunaga, Saburo

    Reconfigurable Brachiating Space Robot consists of three 6-DOF arms to support various kinds of extra-vehicular activities by changing its arm configuration. This kind of robot requires topology-change adaptation in the communication system as well as in the mechanical composition. A distributed controller system is employed to realize these objectives, and this paper discusses the communication architecture that we have designed. Moreover, a fault resilience method for the distributed system with several micro processing units is proposed. It targets high availability of the data processing function, using process takeover and software-based parallelism.

  19. Neuromorphic Computing: A Post-Moore's Law Complementary Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL; Birdwell, John Douglas [University of Tennessee (UT); Dean, Mark [University of Tennessee (UT); Plank, James [University of Tennessee (UT); Rose, Garrett [University of Tennessee (UT)

    2016-01-01

    We describe our approach to post-Moore's law computing with three neuromorphic computing models that share a RISC philosophy, featuring simple components combined with a flexible and programmable structure. We envision these to be leveraged as co-processors, or as data filters to provide in situ data analysis in supercomputing environments.

  20. Architecture and pervasive Computing when buildings and design artifacts become popular interfaces

    DEFF Research Database (Denmark)

    Krogh, Peter Gall; Grønbæk, Kaj

    2001-01-01

    computing we are on the brink of an even greater increase: IT components and systems for intelligent buildings will change from being proprietary, specialized solutions with a narrow market to be part of the developing mainstream mass market for pervasive computing. This paper will illustrate...

  1. Creating science-driven computer architecture: A new path to scientific leadership

    Energy Technology Data Exchange (ETDEWEB)

    McCurdy, C. William; Stevens, Rick; Simon, Horst; Kramer, William; Bailey, David; Johnston, William; Catlett, Charlie; Lusk, Rusty; Morgan, Thomas; Meza, Juan; Banda, Michael; Leighton, James; Hules, John

    2002-10-14

    This document proposes a multi-site strategy for creating a new class of computing capability for the U.S. by undertaking the research and development necessary to build supercomputers optimized for science in partnership with the American computer industry.

  2. The Use of Computer Tools in the Design Process of Students’ Architectural Projects. Case Studies in Algeria

    Science.gov (United States)

    Saighi, Ouafa; Salah Zerouala, Mohamed

    2017-12-01

    This paper particularly deals with the way in which computer tools are used by students in their design studio projects. Four institutions of architecture education in Algeria are considered as a case study to evaluate the impact of such tools on the student design process. The aim is to inspect this use in depth and to sort out its advantages and shortcomings in order to suggest some solutions. A field survey was undertaken on a sample of students and their teachers at the same institutions. The analysed results mainly show that computer tools are largely focused on improving the quality of drawing representation and images, seeking the observers’ satisfaction and hence influencing their decision. Some teachers are not very keen on overuse of the computer during the design phase; they prefer the “traditional” approach. This is the present situation that the Algerian university is facing, which leads to conflict and disagreement between students and teachers. Meanwhile, there is no doubt that computer tools have effectively contributed to improving the competitive level among students.

  3. RATS: Reactive Architectures

    National Research Council Canada - National Science Library

    Christensen, Marc

    2004-01-01

    This project had two goals: To build an emulation prototype board for a tiled architecture and to demonstrate the utility of a global inter-chip free-space photonic interconnection fabric for polymorphous computer architectures (PCA...

  4. TelcoFog: A unified flexible fog and cloud computing architecture for 5G networks

    OpenAIRE

    Vilalta, Ricard; López Álvarez, Victor; Giorgetti, Alessio; Peng, Shuping; Orsini, Vittorio; Velasco Esteban, Luis Domingo; Serral Gracià, René; Morris, Donald; Fina, Silvia de; Cugini, Filippo; Castoldi, Piero; Mayoral, Arturo; Casellas Regi, Ramón; Martínez, Ricardo; Verikoukis, Christos

    2017-01-01

    Telecom operators require cloud computing and storage infrastructures, integrated with their heterogeneous access and transport networks, in order to provide software defined networking (SDN), network functions virtualization (NFV), mobile edge computing (MEC), and cloud radio access network (C-RAN) for future 5G services. Virtualized functions (e.g., mobile Evolved Packet Core — EPC, firewall, local cache, video analytics, video storage, central cache, virtual base station, virtualized BBU-B...

  5. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    Science.gov (United States)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. Aiming to improve the efficiency of on-chip storage resources and to reduce the off-chip bandwidth, a data-cache reuse scheme is proposed: multi-block SPRAM caches image stripes, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation, leading to a new ASIC data scheduling scheme and overall architecture. Experimental results show that the structure can achieve real-time convolution with kernels up to 40 × 32 in size, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the need for off-chip memory bandwidth.
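
    A plain software reference for the operation being accelerated (illustrative only, not the ASIC design):

```python
# Illustrative sketch: direct "valid" 2-D convolution of an image with a large
# kernel, the operation the architecture accelerates with on-chip caching and
# ping-pong buffering.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 2-D convolution; output shape is (H - kh + 1, W - kw + 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]              # true convolution flips the kernel
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# A 40x32 kernel, the largest size reported above, applied to a 512x512 frame:
# conv2d(np.random.rand(512, 512), np.random.rand(40, 32))
```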

  6. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    International Nuclear Information System (INIS)

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  7. Numeric algorithms for parallel processors computer architectures with applications to the few-groups neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.

    1987-01-01

    A numeric algorithm and an associated computer code were developed for the rapid solution of the finite-difference representation of the few-group neutron-diffusion equations on parallel computers. Applications of the numeric algorithm on both SIMD (vector pipeline) and MIMD/SIMD (multi-CPU/vector pipeline) architectures were explored. The algorithm was successfully implemented in the two-group, 3-D neutron diffusion computer code named DIFPAR3D (DIFfusion PARallel 3-Dimension). Numerical-solution techniques used in the code include the Chebyshev polynomial acceleration technique in conjunction with the power method of outer iteration. For inner iterations, a parallel form of red-black (cyclic) line SOR with automated determination of group-dependent relaxation factors and iteration numbers required to achieve the specified inner iteration error tolerance is incorporated. The code employs a macroscopic depletion model with trace capability for selected fission products' transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the DIFPAR3D code, for realistic simulation of power reactor cores. The physics models used were proven acceptable in separate benchmarking studies.
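
    A simplified point red-black SOR sweep on a 2-D Poisson-type problem is sketched below to illustrate the colouring that exposes parallelism; it is not the line SOR used in DIFPAR3D:

```python
# Illustrative sketch: point red-black SOR for -laplace(u) = b on a unit-spaced
# grid with u = 0 on the boundary. All points of one colour can be updated
# independently, which is what makes the sweep parallelizable.
import numpy as np

def red_black_sor(b, omega=1.8, tol=1e-8, max_sweeps=10_000):
    u = np.zeros_like(b, dtype=float)
    for _ in range(max_sweeps):
        diff = 0.0
        for colour in (0, 1):                  # red sweep, then black sweep
            for i in range(1, b.shape[0] - 1):
                for j in range(1, b.shape[1] - 1):
                    if (i + j) % 2 != colour:
                        continue
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1] + b[i, j])
                    new = (1 - omega) * u[i, j] + omega * gs
                    diff = max(diff, abs(new - u[i, j]))
                    u[i, j] = new
        if diff < tol:
            break
    return u
```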

  8. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    Energy Technology Data Exchange (ETDEWEB)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    2017-02-06

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Supercomputing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports the applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high-intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  9. Analysis of Critical Characteristics for Safety Graded Personnel Computers in the KNICS Architecture

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Lee, Dong Young

    2009-01-01

    Critical characteristics analysis of a safety-related item identifies the characteristics that must be verified when an original item is replaced by a dedicated item. A dedicated item that meets the critical characteristics can be expected to perform the intended safety function of the specified item. The KNICS project developed two safety systems: IDiPS RPS (Reactor Protection System) and IDiPS ESF-CCS (Engineered Safety Features-Component Control System). Both IDiPS safety systems are equipped with personnel computers, so-called COMs (Cabinet Operator Modules), in their cabinets. The personnel computers, the COMs, are responsible for safety system monitoring, testing, and maintenance. Even though the two safety systems are safety-critical, the personnel computers of the two systems, i.e. the COMs, are not graded as safety items. As regulatory requirements are expected to be strengthened, and the functions of the personnel computers may be enhanced to include safety-related functions and safety functions, it may become necessary to upgrade the personnel computers to the safety grade. To upgrade a non-safety system such as the COMs to a safety system, its safety functions and requirements, i.e. its critical characteristics, must be identified and verified. This paper describes the process of identifying critical characteristics and the results of the analysis.

  10. Putting all that (HEP-) data to work - a REAL implementation of an unlimited computing and storage architecture

    International Nuclear Information System (INIS)

    Ernst, Michael

    1996-01-01

    Since computing in HEP left the Mainframe-Path, many institutions demonstrated a successful migration to workstation-based computing, especially for applications requiring a high CPU-to-I/O ratio. However, the difficulties and the complexity start beyond just providing CPU cycles. Critical applications, requiring either sequential access to large amounts of data or access to many small sets out of a multi 10-Terabyte Data Repository, need technical approaches we have not had so far. Though we felt that we were hardly able to follow the technology evolving in the various fields, we recently had to realize that even politics overtook technical evolution, at least in the areas mentioned above. The USA is making peace with Russia. DEC is talking to IBM, SGI communicating with HP. All these things became true, and although, unfortunately, the Cold War lasted 50 years, and, in a relative sense, 50 years seemed to be how long any self-respecting high performance computer (or a set of workstations) had to wait for data from its server, we are now, fortunately, facing a similar progress of friendliness, harmony and balance in the formerly problematic (computing) areas. Buzzwords mentioned many thousands of times in talks describing today's and future requirements, including Functionality, Reliability, Scalability, Modularity and Portability, are not just phrases, wishes and dreams any longer. At DESY, we are in the process of demonstrating an architecture that takes those five issues equally into consideration, including Heterogeneous Computing Platforms with ultimate file system approaches, Heterogeneous Mass Storage Devices and an Open Distributed Hierarchical Mass Storage Management System. This contribution will provide an overview of how far we got and what the next steps will be. (author)

  11. Epidemic Protocols for Pervasive Computing Systems - Moving Focus from Architecture to Protocol

    DEFF Research Database (Denmark)

    Mogensen, Martin

    2009-01-01

    Pervasive computing systems are inherently running on unstable networks and devices, subject to constant topology changes, network failures, and high churn. For this reason, pervasive computing infrastructures need to handle these issues as part of their design. This is, however, not feasible to solve at the architectural level alone; this work therefore shifts the focus to the protocol layer and advocates epidemic protocols as the distribution mechanism for pervasive systems. The nature of epidemic protocols makes them easy to implement, easy to deploy, and resilient to failures. By using epidemic protocols, it is possible to mitigate a wide range of the potential issues at the protocol layer. The result is lower complexity of building pervasive systems and higher robustness.
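
    A minimal sketch of one push-based gossip (epidemic) round (illustrative only, not taken from the thesis; the fanout and data structures are assumptions):

```python
# Illustrative sketch: each node forwards its known updates to a few randomly
# chosen peers per round, which is what makes epidemic dissemination robust to
# churn and failures.
import random

def gossip_round(nodes, fanout=3):
    """nodes: dict node_id -> set of updates already known. One synchronous round."""
    outgoing = []
    for node_id, known in nodes.items():
        peers = random.sample([n for n in nodes if n != node_id],
                              k=min(fanout, len(nodes) - 1))
        for peer in peers:
            outgoing.append((peer, set(known)))
    for peer, updates in outgoing:
        nodes[peer] |= updates
    return nodes

# Example: one update injected at node 0 reaches most of 100 nodes in a few rounds.
# state = {i: set() for i in range(100)}; state[0].add("cfg-v2")
# for _ in range(6): gossip_round(state)
```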

  12. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    Science.gov (United States)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    QCA-based architectures for highly parallel and systolic computation of signal/image processing applications, such as FFT, Wavelet, and Walsh-Hadamard transforms.

  13. Splitting methods in communication, imaging, science, and engineering

    CERN Document Server

    Osher, Stanley; Yin, Wotao

    2016-01-01

    This book is about computational methods based on operator splitting. It consists of twenty-three chapters written by recognized splitting method contributors and practitioners, and covers a vast spectrum of topics and application areas, including computational mechanics, computational physics, image processing, wireless communication, nonlinear optics, and finance. Therefore, the book presents very versatile aspects of splitting methods and their applications, motivating the cross-fertilization of ideas.

  14. The coupling of fluids, dynamics, and controls on advanced architecture computers

    Science.gov (United States)

    Atwood, Christopher

    1995-01-01

    This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment, and an investigation of the impact of peer-to-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 μs/point/iteration, or a factor of three over a Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational rate enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.

  15. Hybrid Computational Architecture for Multi-Scale Modeling of Materials and Devices

    Science.gov (United States)

    2016-01-03

    computing node that included two GPU processors (Tesla K40). We did initially test the GPUs for a number of software applications that we normally use...

  16. YASS: A System Simulator for Operating System and Computer Architecture Teaching and Learning

    Science.gov (United States)

    Mustafa, Besim

    2013-01-01

    A highly interactive, integrated and multi-level simulator has been developed specifically to support both the teachers and the learners of modern computer technologies at undergraduate level. The simulator provides a highly visual and user configurable environment with many pedagogical features aimed at facilitating deep understanding of concepts…

  17. Three-Dimensional Nanobiocomputing Architectures with aleph-Hypercells: Revolutionary Super-High-Performance Computing Platform

    Science.gov (United States)

    2006-05-01

    the switching power is not only a function of the devices/gates/switches (bipolar junction transistors and field-effect transistors, BJTs and FETs), for ...

  18. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    Science.gov (United States)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.
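
    As a brief illustration of the real-time propagation referred to above (standard textbook form, not an excerpt from the paper):

```latex
% The time-dependent Kohn-Sham equations are propagated directly in time,
\[
  \mathrm{i}\,\partial_t \psi_j(\mathbf{r},t) \;=\; \hat{H}_{\mathrm{KS}}[n](t)\,\psi_j(\mathbf{r},t),
\]
% and one discrete step applies an approximate evolution operator to each orbital,
\[
  \psi_j(\mathbf{r},t+\Delta t) \;\approx\; \exp\!\bigl(-\mathrm{i}\,\hat{H}_{\mathrm{KS}}(t)\,\Delta t\bigr)\,\psi_j(\mathbf{r},t),
\]
% so blocks of Kohn-Sham states and real-space grid sub-domains can be propagated
% in parallel, the two parallelization axes mentioned above.
```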

  19. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    Science.gov (United States)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  20. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    International Nuclear Information System (INIS)

    Andrade, Xavier; Aspuru-Guzik, Alán; Alberdi-Rodriguez, Joseba; Rubio, Angel; Strubbe, David A; Louie, Steven G; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Marques, Miguel A L

    2012-01-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures. (topical review)
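
    As a rough illustration of the states-block idea described above, the sketch below (not from the paper) propagates a block of Kohn-Sham states with a truncated Taylor expansion of the exponential propagator; the dense "Hamiltonian" matrix, block size, and expansion order are illustrative stand-ins for the real-space grid operators used in octopus.

```python
import numpy as np

def propagate_block(H, states, dt, order=4):
    """Propagate a block of Kohn-Sham states by exp(-i*H*dt) using a truncated
    Taylor series. `states` has shape (n_grid, n_states), so every term is a
    matrix-matrix product -- the kind of data parallelism that maps well to
    GPUs and to vectorized CPU code."""
    result = states.copy()
    term = states.copy()
    for k in range(1, order + 1):
        term = (-1j * dt / k) * (H @ term)
        result = result + term
    return result

# Toy example: a random Hermitian "Hamiltonian" and a block of 8 states.
rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2
block = np.linalg.qr(rng.standard_normal((n, 8)))[0].astype(complex)
block = propagate_block(H, block, dt=1e-3)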

  1. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response can best be simulated using Eulerian computational techniques and structural behavior is best modeled using Lagrangian methods. Due to the different methodologies of the two computational techniques and code architecture requirements, they are usually implemented in different computer programs. Explosive and structure modeling in two different codes makes it difficult or next to impossible to do coupled explosive/structure interaction simulations. Sandia National Laboratories has developed two techniques for solving this problem. The first is called Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method comparable to Eulerian, that is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA which is an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems but it is especially suited for modeling problems involving the interaction of decoupled explosives with structures.

  2. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    Science.gov (United States)

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

    The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, Cloud Computing arises as an extensive, low cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store digital images.
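
    A minimal sketch of the anonymize-then-index step described above, assuming the pydicom package and a typical image dataset; the tag list, file naming, and the commented-out document-store call are illustrative and not part of the poster.

```python
import json
import pydicom  # assumption: pydicom is available for reading DICOM files

# Identifying fields to blank before the image leaves the hospital network
# (illustrative subset, not a complete de-identification profile).
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

def anonymize_and_index(path):
    ds = pydicom.dcmread(path)
    for tag in PHI_TAGS:
        if tag in ds:
            setattr(ds, tag, "")
    # Metadata document destined for the NoSQL store,
    # e.g. collection.insert_one(doc) with a MongoDB client (hypothetical).
    doc = {
        "sop_instance_uid": str(ds.SOPInstanceUID),
        "modality": str(ds.Modality),
        "study_date": str(ds.get("StudyDate", "")),
        "rows": int(ds.Rows),
        "columns": int(ds.Columns),
    }
    ds.save_as(path + ".anon.dcm")  # anonymized copy to be uploaded to the cloud
    return json.dumps(doc)
```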

  3. Mobile computation offloading architecture for mobile augmented reality, case study: Visualization of cetacean skeleton

    OpenAIRE

    Belen G. Rodriguez-Santana; Amilcar Meneses Viveros; Blanca Esther Carvajal-Gamez; Diana Carolina Trejo-Osorio

    2016-01-01

    Augmented Reality applications can serve as teaching tools in different contexts of use. Augmented reality applications on mobile devices can help to provide tourist information on cities or to give information on visits to museums. For example, during visits to museums of natural history, applications of augmented reality on mobile devices can be used by some visitors to interact with the skeleton of a whale. However, rendering heavy models can be computationally infeasible on device...

  4. Cloud Computing Solutions for the Marine Corps: An Architecture to Support Expeditionary Logistics

    Science.gov (United States)

    2013-09-01

    equipment and supplies) through item unique identification (IUID), radio frequency identification (RFID), automated information technologies (AIT)... Ordnance Information System (OIB) • units to submit a WIR without having to generate a naval message • Warehouse management system which... manages warehouse operations through integration of dedicated localized computer hardware, radio frequency communications, automatic identification

  5. Global image processing operations on parallel architectures

    Science.gov (United States)

    Webb, Jon A.

    1990-09-01

    Image processing operations fall into two classes: local and global. Local operations affect only a small corresponding area in the output image, and include edge detection, smoothing, and point operations. In global operations any input pixel can affect any or a large number of output data. Global operations include histogram, image warping, Hough transform, and connected components. Parallel architectures offer a promising method for speeding up these image processing operations. Local operations are easy to parallelize, because the input data can be divided among processors, processed in parallel separately, then the outputs can be combined by concatenation. Global operations are harder to parallelize. In fact, some global operations cannot be executed in parallel; it is possible for a global operation to require serial execution for correct computation of the result. However, an important class of global operations, namely those that are reversible-that can be computed in forward or reverse order on a data structure-can be computed in parallel using a restricted form of divide and conquer called split and merge. These reversible operations include the global operations mentioned above, and many more besides-even such non-image processing operations as parsing, string search, and sorting. The split and merge method will be illustrated by application of it to these algorithms. Performance analysis of the method on different architectures-one-dimensional, two-dimensional, and binary tree processor arrays will be demonstrated.
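
    A minimal sketch of the split-and-merge pattern for one reversible global operation, the histogram (not the paper's implementation): the image is split among worker processes, each computes a local histogram, and the partial results are merged by addition; the worker count and test image are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def local_histogram(chunk, bins=256):
    # "split" phase: each worker histograms its own slice of the image
    return np.bincount(chunk.ravel(), minlength=bins)

def parallel_histogram(image, workers=4, bins=256):
    chunks = np.array_split(image, workers)        # divide rows among workers
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(local_histogram, chunks))
    return np.sum(partials, axis=0)                # "merge" phase: histograms add

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
    assert np.array_equal(parallel_histogram(img),
                          np.bincount(img.ravel(), minlength=256))
```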

  6. Abstract machine based execution model for computer architecture design and efficient implementation of logic programs in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Hermenegildo, M.V.

    1986-01-01

    The term Logic Programming refers to a variety of computer languages and execution models based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in artificial intelligence, knowledge-based systems, and many other areas of computing. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an Abstract Machine level, suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and, therefore, the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set.

  7. Split-bolus single-phase cardiac multidetector computed tomography for reliable detection of left atrial thrombus. Comparison to transesophageal echocardiography

    Energy Technology Data Exchange (ETDEWEB)

    Staab, W.; Zwaka, P.A.; Sohns, J.M.; Schwarz, A.; Lotz, J. [University Medical Center Goettingen Univ. (Germany). Inst. for Diagnostic and Interventional Radiology; Sohns, C.; Vollmann, D.; Zabel, M.; Hasenfuss, G. [Goettingen Univ. (Germany). Dept. of Cardiology and Pneumology; Schneider, S. [Goettingen Univ. (Germany). Dept. of Medical Statistics

    2014-11-15

    Evaluation of a new cardiac MDCT protocol using a split-bolus contrast injection protocol and single MDCT scan for reliable diagnosis of LA/LAA thrombi in comparison to TEE, optimizing radiation exposure and use of contrast agent. A total of 182 consecutive patients with drug refractory AF scheduled for PVI (62.6% male, mean age: 64.1 ± 10.2 years) underwent routine diagnostic work including TEE and cardiac MDCT for the evaluation of LA/LAA anatomy and thrombus formation between November 2010 and March 2012. Contrast media injection was split into a pre-bolus of 30 ml and main bolus of 70 ml iodinated contrast agent separated by a short time delay. In this study, split-bolus cardiac MDCT identified 14 of 182 patients with filling defects of the LA/LAA. In all of these 14 patients, abnormalities were found in TEE. All 5 of the 14 patients with thrombus formation in cardiac MDCT were confirmed by TEE. MDCT was 100% accurate for thrombus, with strong but not perfect overall results for SEC equivalent on MDCT.

  8. Memristive Computational Architecture of an Echo State Network for Real-Time Speech Emotion Recognition

    Science.gov (United States)

    2015-05-28

    recognition, the emotional status of a human such as anger, fear, happiness etc. is determined based on the speech signals. Human-computer interaction... actors (five male and five female) recorded 800 utterances. Ten different daily used German sentences were recorded in seven different emotional... k ≤ c_i; 2(d_i − k)/((d_i − b_i)(d_i − c_i)), if c_i ≤ k ≤ d_i; 0, otherwise (5), where i is the index of the filter, H_i is the response of the ith filter. b_i, c_i and

  9. A control unit for a laser module of optoelectronic computing environment with dynamic architecture

    Directory of Open Access Journals (Sweden)

    Lipinskii A. Y.

    2013-06-01

    Full Text Available The paper presents the developed control unit for the laser modules of an optoelectronic acousto-optic computing environment. The unit is based on an ARM microcontroller of the Cortex M3 family, and allows alternating between recording (erase) and reading modes in accordance with a predetermined algorithm and settings — exposure time and intensity. The principal electric circuit of the presented device, the block diagram of the microcontroller algorithm, and an example application of the developed control unit in the layout of the experimental setup are provided.

  10. Efficient Support for Matrix Computations on Heterogeneous Multi-core and Multi-GPU Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Fengguang [Univ. of Tennessee, Knoxville, TN (United States); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2011-06-01

    We present a new methodology for utilizing all CPU cores and all GPUs on a heterogeneous multicore and multi-GPU system to support matrix computations efficiently. Our approach is able to achieve the objectives of a high degree of parallelism, minimized synchronization, minimized communication, and load balancing. Our main idea is to treat the heterogeneous system as a distributed-memory machine, and to use a heterogeneous 1-D block cyclic distribution to allocate data to the host system and GPUs to minimize communication. We have designed heterogeneous algorithms with two different tile sizes (one for CPU cores and the other for GPUs) to cope with processor heterogeneity. We propose an auto-tuning method to determine the best tile sizes to attain both high performance and load balancing. We have also implemented a new runtime system and applied it to the Cholesky and QR factorizations. Our experiments on a compute node with two Intel Westmere hexa-core CPUs and three Nvidia Fermi GPUs demonstrate good weak scalability, strong scalability, load balance, and efficiency of our approach.
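
    To make the distribution scheme concrete, here is a small sketch (not from the report) of a heterogeneous 1-D block-cyclic assignment of matrix column tiles, where CPU cores receive small tiles and GPUs large ones; the device counts and tile widths are illustrative, whereas the real runtime chooses them by auto-tuning.

```python
def block_cyclic_assignment(n_cols, cpu_tile, gpu_tile, n_cpu, n_gpu):
    """Assign column tiles to devices in 1-D block-cyclic order, using a small
    tile size for CPU cores and a large tile size for GPUs."""
    devices = ([("cpu", i, cpu_tile) for i in range(n_cpu)] +
               [("gpu", i, gpu_tile) for i in range(n_gpu)])
    assignment, col, d = [], 0, 0
    while col < n_cols:
        kind, idx, tile = devices[d % len(devices)]
        width = min(tile, n_cols - col)
        assignment.append((kind, idx, col, col + width))  # (device, id, first col, one past last col)
        col += width
        d += 1
    return assignment

# e.g. two CPU cores with 192-column tiles and three GPUs with 1024-column tiles
for entry in block_cyclic_assignment(8192, cpu_tile=192, gpu_tile=1024, n_cpu=2, n_gpu=3)[:6]:
    print(entry)
```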

  11. An Optimal Path Computation Architecture for the Cloud-Network on Software-Defined Networking

    Directory of Open Access Journals (Sweden)

    Hyunhun Cho

    2015-05-01

    Full Text Available Legacy networks do not expose precise information about the network domain for scalability, management and commercial reasons, which makes it very hard to compute an optimal path to the destination. In line with today’s changing ICT environment, and in order to meet new network requirements, the concept of software-defined networking (SDN) has been developed as a technological alternative to overcome the limitations of the legacy network structure and to introduce innovative concepts. The purpose of this paper is to propose an application that calculates the optimal paths for general data transmission and real-time audio/video transmission, which constitute the major services of the National Research & Education Network (NREN) in the SDN environment. The proposed SDN routing computation (SRC) application is designed and applied in a multi-domain network for the efficient use of resources, selection of the optimal path between the multi-domains and optimal establishment of end-to-end connections.
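
    As a rough sketch of the path-computation core such an SRC application needs (the topology, link costs, and node names below are purely illustrative), a standard Dijkstra search over a multi-domain graph might look like this:

```python
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra over a weighted adjacency dict {node: [(neighbor, cost), ...]}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# Two domains joined by inter-domain links; weights could encode delay for A/V flows.
topology = {
    "A1": [("A2", 1), ("B1", 5)],
    "A2": [("B1", 1)],
    "B1": [("B2", 2)],
    "B2": [],
}
print(shortest_path(topology, "A1", "B2"))
```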

  12. Computational Architecture of the Granular Layer of Cerebellum-Like Structures.

    Science.gov (United States)

    Bratby, Peter; Sneyd, James; Montgomery, John

    2017-02-01

    In the adaptive filter model of the cerebellum, the granular layer performs a recoding which expands incoming mossy fibre signals into a temporally diverse set of basis signals. The underlying neural mechanism is not well understood, although various mechanisms have been proposed, including delay lines, spectral timing and echo state networks. Here, we develop a computational simulation based on a network of leaky integrator neurons, and an adaptive filter performance measure, which allows candidate mechanisms to be compared. We demonstrate that increasing the circuit complexity improves adaptive filter performance, and relate this to evolutionary innovations in the cerebellum and cerebellum-like structures in sharks and electric fish. We show how recurrence enables an increase in basis signal duration, which suggests a possible explanation for the explosion in granule cell numbers in the mammalian cerebellum.
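
    A toy sketch of the basis-expansion idea (not the authors' model): a bank of leaky integrators with different time constants expands a brief mossy-fibre pulse into temporally diverse signals, and a linear readout fitted by least squares stands in for the adaptive filter's learned weights; all sizes and constants below are illustrative.

```python
import numpy as np

def leaky_integrator_bank(mossy, taus, dt=1e-3):
    """Expand a mossy-fibre input into temporally diverse basis signals,
    one leaky integrator ('granule cell') per time constant."""
    basis = np.zeros((len(taus), len(mossy)))
    for i, tau in enumerate(taus):
        for t in range(1, len(mossy)):
            basis[i, t] = basis[i, t - 1] + dt * (-basis[i, t - 1] / tau + mossy[t])
    return basis

T, dt = 2000, 1e-3
mossy = (np.arange(T) == 100).astype(float)                  # brief input pulse
basis = leaky_integrator_bank(mossy, np.logspace(-2, 0, 20), dt)
target = np.exp(-(((np.arange(T) - 600) * dt) / 0.1) ** 2)   # delayed response to learn

# Linear readout fitted by least squares, standing in for the adaptive filter's
# gradient-based weight learning; richer basis sets give lower residual error.
w, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
print("residual:", np.linalg.norm(basis.T @ w - target))
```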

  13. Truth in advertising: Reporting performance of computer programs, algorithms and the impact of architecture

    Directory of Open Access Journals (Sweden)

    Scott Hazelhurst

    2010-11-01

    Full Text Available The level of detail and precision that appears in the experimental methodology section of computer science papers is usually much less than in natural science disciplines. This is partially justified by the different nature of the experiments. The experimental evidence presented here shows that the time taken by the same algorithm varies so significantly on different CPUs that, without knowing the exact model of CPU, it is difficult to compare the results. This is placed in context by analysing a cross-section of experimental results reported in the literature. The reporting of experimental results is sometimes insufficient to allow experiments to be replicated, and in some cases is insufficient to support the claims made for the algorithms. New standards for reporting algorithm results are suggested.

  14. The grammar of anger: Mapping the computational architecture of a recalibrational emotion.

    Science.gov (United States)

    Sell, Aaron; Sznycer, Daniel; Al-Shawaf, Laith; Lim, Julian; Krauss, Andre; Feldman, Aneta; Rascanu, Ruxandra; Sugiyama, Lawrence; Cosmides, Leda; Tooby, John

    2017-11-01

    According to the recalibrational theory of anger, anger is a computationally complex cognitive system that evolved to bargain for better treatment. Anger coordinates facial expressions, vocal changes, verbal arguments, the withholding of benefits, the deployment of aggression, and a suite of other cognitive and physiological variables in the service of leveraging bargaining position into better outcomes. The prototypical trigger of anger is an indication that the offender places too little weight on the angry individual's welfare when making decisions, i.e. the offender has too low a welfare tradeoff ratio (WTR) toward the angry individual. Twenty-three experiments in six cultures, including a group of foragers in the Ecuadorian Amazon, tested six predictions about the computational structure of anger derived from the recalibrational theory. Subjects judged that anger would intensify when: (i) the cost was large, (ii) the benefit the offender received from imposing the cost was small, or (iii) the offender imposed the cost despite knowing that the angered individual was the person to be harmed. Additionally, anger-based arguments conformed to a conceptual grammar of anger, such that offenders were inclined to argue that they held a high WTR toward the victim, e.g., "the cost I imposed on you was small", "the benefit I gained was large", or "I didn't know it was you I was harming." These results replicated across all six tested cultures: the US, Australia, Turkey, Romania, India, and Shuar hunter-horticulturalists in Ecuador. Results contradict key predictions about anger based on equity theory and social constructivism. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Comparative Analysis of Stability to Induced Deadlocks for Computing Grids with Various Node Architectures

    Directory of Open Access Journals (Sweden)

    Tatiana R. Shmeleva

    2018-01-01

    Full Text Available In this paper, we consider the classification and applications of switching methods, with their advantages and disadvantages. A model of a computing grid was constructed in the form of a colored Petri net with a node which implements cut-through packet switching. The model consists of packet switching nodes, traffic generators and guns that form malicious traffic disguised as usual user traffic. The characteristics of the grid model were investigated under a working load with different intensities. The influence of malicious traffic, such as a traffic duel, on the quality of service parameters of the grid was estimated. A comparative analysis of the stability of computing grids was carried out with nodes which implement the store-and-forward and cut-through switching technologies. It is shown that grid performance is approximately the same under working load conditions, and that under peak load conditions the grid with nodes implementing the store-and-forward technology is more stable. The grid with nodes implementing store-and-forward (SAF) technology comes to a complete deadlock under an additional load of less than 10 percent. After a detailed study, it is shown that the traffic duel configuration does not affect the grid with cut-through nodes if the workload increases to the peak load, at which the grid comes to a complete deadlock. The execution intensity of the guns which generate malicious traffic is determined by a random function with a Poisson distribution. The modeling system CPN Tools is used for constructing models and measuring parameters. Grid performance and average packet delivery time are estimated under various load options.

  16. Computational Nanophotonics: modeling optical interactions and transport in tailored nanosystem architectures

    Energy Technology Data Exchange (ETDEWEB)

    Schatz, George [Northwestern Univ., Evanston, IL (United States); Ratner, Mark [Northwestern Univ., Evanston, IL (United States)

    2014-02-27

    This report describes research by George Schatz and Mark Ratner that was done over the period 10/03-5/09 at Northwestern University. This research project was part of a larger research project with the same title led by Stephen Gray at Argonne. A significant amount of our work involved collaborations with Gray, and there were many joint publications as summarized later. In addition, a lot of this work involved collaborations with experimental groups at Northwestern, Argonne, and elsewhere. The research was primarily concerned with developing theory and computational methods that can be used to describe the interaction of light with noble metal nanoparticles (especially silver) that are capable of plasmon excitation. Classical electrodynamics provides a powerful approach for performing these studies, so much of this research project involved the development of methods for solving Maxwell’s equations, including both linear and nonlinear effects, and examining a wide range of nanostructures, including particles, particle arrays, metal films, films with holes, and combinations of metal nanostructures with polymers and other dielectrics. In addition, our work broke new ground in the development of quantum mechanical methods to describe plasmonic effects based on the use of time dependent density functional theory, and we developed new theory concerned with the coupling of plasmons to electrical transport in molecular wire structures. Applications of our technology were aimed at the development of plasmonic devices as components of optoelectronic circuits, plasmons for spectroscopy applications, and plasmons for energy-related applications.

  17. Organization of the Mammalian Locomotor CPG: Review of Computational Model and Circuit Architectures Based on Genetically Identified Spinal Interneurons

    Science.gov (United States)

    Dougherty, Kimberly J.; Shevtsova, Natalia A.

    2015-01-01

    Abstract The organization of neural circuits that form the locomotor central pattern generator (CPG) and provide flexor–extensor and left–right coordination of neuronal activity remains largely unknown. However, significant progress has been made in the molecular/genetic identification of several types of spinal interneurons, including V0 (V0D and V0V subtypes), V1, V2a, V2b, V3, and Shox2, among others. The possible functional roles of these interneurons can be suggested from changes in the locomotor pattern generated in mutant mice lacking particular neuron types. Computational modeling of spinal circuits may complement these studies by bringing together data from different experimental studies and proposing the possible connectivity of these interneurons that may define rhythm generation, flexor–extensor interactions on each side of the cord, and commissural interactions between left and right circuits. This review focuses on the analysis of potential architectures of spinal circuits that can reproduce recent results and suggest common explanations for a series of experimental data on genetically identified spinal interneurons, including the consequences of their genetic ablation, and provides important insights into the organization of the spinal CPG and neural control of locomotion. PMID:26478909

  18. An AmI-Based Software Architecture Enabling Evolutionary Computation in Blended Commerce: The Shopping Plan Application

    Directory of Open Access Journals (Sweden)

    Giuseppe D’Aniello

    2015-01-01

    Full Text Available This work describes an approach to synergistically exploit ambient intelligence technologies, mobile devices, and evolutionary computation in order to support blended commerce or ubiquitous commerce scenarios. The work proposes a software architecture consisting of three main components: linked data for e-commerce, cloud-based services, and mobile apps. The three components implement a scenario where a shopping mall is presented as an intelligent environment in which customers use NFC capabilities of their smartphones in order to handle e-coupons produced, suggested, and consumed by the abovesaid environment. The main function of the intelligent environment is to help customers define shopping plans, which minimize the overall shopping cost by looking for best prices, discounts, and coupons. The paper proposes a genetic algorithm to find suboptimal solutions for the shopping plan problem in a highly dynamic context, where the final cost of a product for an individual customer is dependent on his previous purchases. In particular, the work provides details on the Shopping Plan software prototype and some experimentation results showing the overall performance of the genetic algorithm.
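
    A toy sketch of the kind of genetic algorithm described above (not the authors' implementation): the chromosome assigns each needed item to a shop, the fitness is the total cost, and a simple discount rule makes an item's effective price depend on the other purchases; prices, shops, and GA parameters are all illustrative.

```python
import random

PRICES = {                      # price of each needed item at each shop (illustrative)
    "shoes":  {"shopA": 60,  "shopB": 55,  "shopC": 70},
    "jacket": {"shopA": 90,  "shopB": 95,  "shopC": 80},
    "watch":  {"shopA": 120, "shopB": 110, "shopC": 115},
}
ITEMS = list(PRICES)
SHOPS = ["shopA", "shopB", "shopC"]

def cost(plan):                 # plan[i] = shop chosen for ITEMS[i]
    total = sum(PRICES[item][shop] for item, shop in zip(ITEMS, plan))
    if len(set(plan)) == 1:     # toy "loyalty coupon": 10% off when buying everything in one shop
        total *= 0.9
    return total

def genetic_shopping_plan(pop_size=30, generations=100, mutation=0.1):
    pop = [[random.choice(SHOPS) for _ in ITEMS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]            # keep the cheapest plans
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(ITEMS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:          # random shop reassignment
                child[random.randrange(len(ITEMS))] = random.choice(SHOPS)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=cost)
    return best, cost(best)

print(genetic_shopping_plan())
```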

  19. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...... as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of an increased feedback and receiver complexity....

  20. Split Cord Malformations

    Directory of Open Access Journals (Sweden)

    Yurdal Gezercan

    2015-06-01

    Full Text Available Split cord malformations are a rare form of occult spinal dysraphism in children. Split cord malformations are characterized by a septum that cleaves the spinal canal in the sagittal plane within a single or duplicated thecal sac. Although their precise incidence is unknown, split cord malformations are exceedingly rare and represent 3.8-5% of all congenital spinal anomalies. The characteristic neurological, urological, and orthopedic clinical manifestations are variable, and an asymptomatic course is possible. Earlier diagnosis and surgical intervention for split cord malformations are associated with better long-term functional outcome. For this reason, diagnostic imaging is indicated for children with associated cutaneous and orthopedic signs. Additional congenital anomalies usually accompany split cord malformations. Earlier diagnosis, meticulous surgical therapy, and careful interdisciplinary evaluation and follow-up are needed for a good prognosis. [Cukurova Med J 2015; 40(2): 199-207]

  1. PICNIC Architecture.

    Science.gov (United States)

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services, the interfaces between them and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub views: software engineering, IT services engineering, security and data. The proposed architecture is grounded in the mainstream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented in Open Source.

  2. Distributed chemical computing using ChemStar: an open source java remote method invocation architecture applied to large scale molecular data from PubChem.

    Science.gov (United States)

    Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander

    2008-04-01

    We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually every computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit as well as the open source JOELib package. As an application, for this set of compounds, the agreement of log P and TPSA between the packages was compared. Outliers were found to be mostly non-druglike compounds and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, which is also easily adaptable to user demands due to its "plug-in architecture". The complete source codes as well as calculated properties along with links to PubChem resources are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.

  3. Emerging supercomputer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream ''supercomputer'' systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  4. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    Science.gov (United States)

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  5. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  6. Split Malcev algebras

    Indian Academy of Sciences (India)


  7. BLAST in Gid (BiG): A Grid-Enabled Software Architecture and Implementation of Parallel and Sequential BLAST

    International Nuclear Information System (INIS)

    Aparicio, G.; Blanquer, I.; Hernandez, V.; Segrelles, D.

    2007-01-01

    The integration of high-performance computing tools is a key issue in biomedical research. Many computer-based applications, such as BLAST, have been migrated to high-performance computers to deal with their computing and storage needs. However, the use of clusters and computing farms presents problems in scalability. Adding a higher layer of parallelism that splits the task into highly independent long jobs that can be executed in parallel can improve performance while maintaining efficiency. Grid technologies combined with parallel computing resources are an important enabling technology. This work presents a software architecture for executing BLAST in an International Grid Infrastructure that guarantees security, scalability and fault tolerance. The software architecture is modular and adaptable to many other high-throughput applications, both inside the field of biocomputing and outside. (Author)
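
    A minimal sketch of the higher-level split described above (not the BiG implementation): a multi-sequence FASTA query is cut into independent chunks, each of which becomes one long-running grid job; the chunk count, file naming, and the blastp command template are assumptions.

```python
def split_fasta(path, n_chunks):
    """Read a multi-sequence FASTA file and write n_chunks round-robin subsets."""
    with open(path) as fh:
        records, current = [], []
        for line in fh:
            if line.startswith(">") and current:   # a new record starts
                records.append("".join(current))
                current = []
            current.append(line)
        if current:
            records.append("".join(current))
    chunk_paths = []
    for i in range(n_chunks):
        out = f"{path}.part{i}.fasta"
        with open(out, "w") as fh:
            fh.writelines(records[i::n_chunks])    # round-robin assignment
        chunk_paths.append(out)
    return chunk_paths

def job_descriptions(query_path, db="nr", n_chunks=16):
    # Each command string is one independent grid job (blastp CLI assumed).
    return [f"blastp -query {part} -db {db} -out {part}.xml -outfmt 5"
            for part in split_fasta(query_path, n_chunks)]
```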

  8. Quantum information density scaling and qubit operation time constraints of CMOS silicon-based quantum computer architectures

    Science.gov (United States)

    Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico

    2017-06-01

    Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 106, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage from the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency

  9. An Experiment in Architectural Instruction.

    Science.gov (United States)

    Dvorak, Robert W.

    1978-01-01

    Discusses the application of the PLATO IV computer-based educational system to a one-semester basic drawing course for freshman architecture, landscape architecture, and interior design students and relates student reactions to the experience. (RAO)

  10. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand...... the obligation to prepare students to perform in a profession that is largely defined by forces outside that discipline. It will be proposed that the autonomy of architecture can be understood as a unique kind of information: as architecture’s self-reliance or knowledge-about itself. A knowledge...... that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can...

  11. Architectural Drawing

    DEFF Research Database (Denmark)

    Steinø, Nicolai

    2018-01-01

    without being able to visualize it in drawing. Architectural design, in other words, to a large extent happens through drawing. Hence, to neglect drawing skills is to neglect an important capacity to create architectural design. While the current-day argument for the depreciation of drawing skills...... is that computers can represent graphic ideas both faster and better than most medium-skilled draftsmen, drawing in design is not only about representing final designs. In fact, several steps involving the capacity to draw lie before the representation of a final design. Not only is drawing skills an important...... prerequisite for learning about the nature of existing objects and spaces, and thus to build a vocabulary of design. It is also a prerequisite for both reflecting and communicating about design ideas. In this paper, a taxonomy of notation, reflection, communication and presentation drawing is presented...

  12. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    Directory of Open Access Journals (Sweden)

    Kui Liu

    2017-02-01

    Full Text Available This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI. More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©. The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs. The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  13. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    Science.gov (United States)

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  14. Implementation of a cell-wise block-Gauss-Seidel iterative method for SN transport on a hybrid parallel computer architecture

    International Nuclear Information System (INIS)

    Rosa, Massimiliano; Warsa, James S.; Perks, Michael

    2011-01-01

    We have implemented a cell-wise, block-Gauss-Seidel (bGS) iterative algorithm for the solution of the SN transport equations on the Roadrunner hybrid, parallel computer architecture. A compute node of this massively parallel machine comprises AMD Opteron cores that are linked to a Cell Broadband Engine™ (Cell/B.E.). LAPACK routines have been ported to the Cell/B.E. in order to make use of its parallel Synergistic Processing Elements (SPEs). The bGS algorithm is based on the LU factorization and solution of a linear system that couples the fluxes for all SN angles and energy groups on a mesh cell. For every cell of a mesh that has been parallel decomposed on the higher-level Opteron processors, a linear system is transferred to the Cell/B.E. and the parallel LAPACK routines are used to compute a solution, which is then transferred back to the Opteron, where the rest of the computations for the SN transport problem take place. Compared to standard parallel machines, a hundred-fold speedup of the bGS was observed on the hybrid Roadrunner architecture. Numerical experiments with strong and weak parallel scaling demonstrate the bGS method is viable and compares favorably to full parallel sweeps (FPS) on two-dimensional, unstructured meshes when it is applied to optically thick, multi-material problems. As expected, however, it is not as efficient as FPS in optically thin problems. (author)
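
    A small single-node sketch of the cell-wise iteration (not the Roadrunner implementation): each cell's dense angle/group system is factored once and re-solved in every Gauss-Seidel sweep with the latest neighbour iterates; SciPy's LAPACK wrappers stand in for the ported Cell/B.E. routines, and the toy problem sizes and coupling are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def block_gauss_seidel(cell_matrices, cell_rhs, coupling, sweeps=50):
    """Cell-wise block-Gauss-Seidel sketch: each cell owns a dense system
    A_c x_c = b_c - sum_j C_cj x_j, factored once and re-solved every sweep."""
    n_cells = len(cell_matrices)
    factors = [lu_factor(A) for A in cell_matrices]   # per-cell LU factorization
    x = [np.zeros_like(b) for b in cell_rhs]
    for _ in range(sweeps):
        for c in range(n_cells):
            rhs = cell_rhs[c].copy()
            for j, C in coupling.get(c, []):          # neighbour coupling uses latest iterates
                rhs -= C @ x[j]
            x[c] = lu_solve(factors[c], rhs)
    return x

# Toy problem: 4 cells, 8 unknowns (angles x groups) per cell, weak neighbour coupling.
rng = np.random.default_rng(1)
A = [np.eye(8) * 10 + rng.standard_normal((8, 8)) for _ in range(4)]
b = [rng.standard_normal(8) for _ in range(4)]
C = {c: [((c + 1) % 4, 0.1 * rng.standard_normal((8, 8)))] for c in range(4)}
solution = block_gauss_seidel(A, b, C)
```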

  15. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  16. Aspects of Split Supersymmetry

    CERN Document Server

    Arkani-Hamed, N; Giudice, Gian Francesco; Romanino, A

    2005-01-01

    We explore some fundamental differences in the phenomenology, cosmology and model building of Split Supersymmetry compared with traditional low-scale supersymmetry. We show how the mass spectrum of Split Supersymmetry naturally emerges from theories where the dominant source of supersymmetry breaking preserves an $R$ symmetry, characterize the class of theories where the unavoidable $R$-breaking by gravity can be neglected, and point out a new possibility, where supersymmetry breaking is directly communicated at tree level to the visible sector via renormalizable interactions. Next, we discuss possible low-energy signals for Split Supersymmetry. The absence of new light scalars removes all the phenomenological difficulties of low-energy supersymmetry, associated with one-loop flavor and CP violating effects. However, the electric dipole moments of leptons and quarks do arise at two loops, and are automatically at the level of present limits with no need for small phases, making them accessible to several ongo...

  17. The ATLAS Analysis Architecture

    International Nuclear Information System (INIS)

    Cranmer, K.S.

    2008-01-01

    We present an overview of the ATLAS analysis architecture including the relevant aspects of the computing model and the major architectural aspects of the Athena framework. Emphasis will be given to the interplay between the analysis use cases and the technical aspects of the architecture including the design of the event data model, transient-persistent separation, data reduction strategies, analysis tools, and ROOT interoperability

  18. Split Malcev algebras

    Indian Academy of Sciences (India)

    We study the structure of split Malcev algebras of arbitrary dimension over an algebraically closed field of characteristic zero. We show that any such algebra is of the form M = U + ∑_j I_j, with U a subspace of the abelian Malcev subalgebra and each I_j a well-described ideal of M satisfying [I_j, I_k] = 0 if j ≠ k.

  19. Splitting of Comets

    Indian Academy of Sciences (India)

    Utpal Mukhopadhyay. General Article, Resonance – Journal of Science Education, Volume 7, Issue 1, January 2002, pp. 11–22. Permanent link: http://www.ias.ac.in/article/fulltext/reso/007/01/0011-0022. Keywords: Cometary ...

  20. Splitting Strategy for Simulating Genetic Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Xiong You

    2014-01-01

    Full Text Available The splitting approach is developed for the numerical simulation of genetic regulatory networks with a stable steady-state structure. The numerical results of the simulation of a one-gene network, a two-gene network, and a p53-mdm2 network show that the new splitting methods constructed in this paper are remarkably more effective and more suitable for long-term computation with large steps than the traditional general-purpose Runge-Kutta methods. The new methods have no restriction on the choice of stepsize due to their infinitely large stability regions.
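
    To illustrate the splitting idea on a one-gene network (not the authors' scheme or parameters), the sketch below applies Strang splitting to dx/dt = k/(1 + x^n) - d*x, treating the linear decay exactly and the nonlinear production term with a midpoint step, which stays stable with comparatively large steps:

```python
import numpy as np

def strang_step(x, dt, k=4.0, n=2, d=1.0):
    """One Strang-splitting step for dx/dt = k/(1 + x**n) - d*x:
    exact half-step of the linear decay, a midpoint (RK2) step for the
    nonlinear production term, then another exact half-step of the decay."""
    x = x * np.exp(-d * dt / 2)                 # decay, solved exactly
    prod = lambda y: k / (1.0 + y ** n)
    x_mid = x + 0.5 * dt * prod(x)              # midpoint estimate for production
    x = x + dt * prod(x_mid)
    x = x * np.exp(-d * dt / 2)                 # decay, solved exactly
    return x

x, dt = 0.1, 0.5                                # comparatively large step
for _ in range(40):
    x = strang_step(x, dt)
print("steady state estimate:", x)              # should satisfy k/(1 + x**n) ≈ d*x
```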

  1. Evaluation of the accuracy of linear measurements on multi-slice and cone beam computed tomography scans to detect the mandibular canal during bilateral sagittal split osteotomy of the mandible.

    Science.gov (United States)

    Freire-Maia, B; Machado, V deC; Valerio, C S; Custódio, A L N; Manzi, F R; Junqueira, J L C

    2017-03-01

    The aim of this study was to compare the accuracy of linear measurements of the distance between the mandibular cortical bone and the mandibular canal using 64-detector multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT). It was sought to evaluate the reliability of these examinations in detecting the mandibular canal for use in bilateral sagittal split osteotomy (BSSO) planning. Eight dry human mandibles were studied. Three sites, corresponding to the lingula, the angle, and the body of the mandible, were selected. After the CT scans had been obtained, the mandibles were sectioned and the bone segments measured to obtain the actual measurements. On analysis, no statistically significant difference was found between the measurements obtained through MSCT and CBCT, or when comparing the measurements from these scans with the actual measurements. It is concluded that the images obtained by CT scan, both 64-detector multi-slice and cone beam, can be used to obtain accurate linear measurements to locate the mandibular canal for preoperative planning of BSSO. The ability to correctly locate the mandibular canal during BSSO will reduce the occurrence of neurosensory disturbances in the postoperative period. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  2. CMEIAS JFrad: a digital computing tool to discriminate the fractal geometry of landscape architectures and spatial patterns of individual cells in microbial biofilms.

    Science.gov (United States)

    Ji, Zhou; Card, Kyle J; Dazzo, Frank B

    2015-04-01

    Image analysis of fractal geometry can be used to gain deeper insights into complex ecophysiological patterns and processes occurring within natural microbial biofilm landscapes, including the scale-dependent heterogeneities of their spatial architecture, biomass, and cell-cell interactions, all driven by the colonization behavior of optimal spatial positioning of organisms to maximize their efficiency in utilization of allocated nutrient resources. Here, we introduce CMEIAS JFrad, a new computing technology that analyzes the fractal geometry of complex biofilm architectures in digital landscape images. The software uniquely features a data-mining opportunity based on a comprehensive collection of 11 different mathematical methods to compute fractal dimension that are implemented into a wizard design to maximize ease-of-use for semi-automatic analysis of single images or fully automatic analysis of multiple images in a batch process. As examples of application, quantitative analyses of fractal dimension were used to optimize the important variable settings of brightness threshold and minimum object size in order to discriminate the complex architecture of freshwater microbial biofilms at multiple spatial scales, and also to differentiate the spatial patterns of individual bacterial cells that influence their cooperative interactions, resource use, and apportionment in situ. Version 1.0 of JFrad is implemented into a software package containing the program files, user manual, and tutorial images that will be freely available at http://cme.msu.edu/cmeias/. This improvement in computational image informatics will strengthen microscopy-based approaches to analyze the dynamic landscape ecology of microbial biofilm populations and communities in situ at spatial resolutions that range from single cells to microcolonies.
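
    As a rough illustration of one of the many fractal-dimension estimators such a tool can offer (this sketch is not part of CMEIAS JFrad), box counting on a thresholded binary image fits the slope of log N(s) against log(1/s); the box sizes and test image are illustrative.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a thresholded image by box counting:
    count occupied boxes N(s) at each box size s and fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    for s in box_sizes:
        h, w = binary_image.shape
        trimmed = binary_image[: h - h % s, : w - w % s]     # make dimensions divisible by s
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        occupied = boxes.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# A filled square should give a dimension near 2; a sparse random dust gives much less.
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
print(round(box_counting_dimension(img), 2))
```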

  3. OS Friendly Microprocessor Architecture

    Science.gov (United States)

    2017-04-01

    have developed a computer architecture that reduces the high cost of a context switch and provides hardware-based computer security. A context switch...code to jump to a computer virus or other malware application. Caller ID does not have any authentication. A prank caller can easily spoof Caller ID...

  4. A three operator split-step method covering a larger set of non-linear partial differential equations

    Science.gov (United States)

    Zia, Haider

    2017-06-01

    This paper describes an updated exponential Fourier-based split-step method that can be applied to a greater class of partial differential equations than previous methods would allow. These equations arise in physics and engineering, a notable example being the generalized derivative non-linear Schrödinger equation that arises in non-linear optics with self-steepening terms. These differential equations feature terms that were previously inaccessible to model accurately with low computational resources. The new method maintains a 3rd order error even with these additional terms and models the equation in all three spatial dimensions and time. The class of non-linear differential equations that this method applies to is shown. The method is fully derived, and its implementation in the split-step architecture is shown. This paper lays the mathematical groundwork for an upcoming paper employing this method in white-light generation simulations in bulk material.
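
    For orientation, the sketch below implements the standard two-operator symmetric split-step Fourier scheme for the basic non-linear Schrödinger equation i u_z = (beta2/2) u_tt - gamma |u|^2 u; the paper's contribution is a third operator for terms such as self-steepening, which is not reproduced here, and the grid, step sizes, and soliton test are illustrative.

```python
import numpy as np

def split_step_nlse(u0, z_steps, dz, beta2=-1.0, gamma=1.0, t_window=20.0):
    """Two-operator symmetric split-step Fourier solver for the basic NLSE
    i u_z = (beta2/2) u_tt - gamma |u|^2 u."""
    n = len(u0)
    w = 2 * np.pi * np.fft.fftfreq(n, d=t_window / n)      # angular frequencies
    half_linear = np.exp(0.5j * beta2 * w**2 * dz / 2)     # half-step dispersion operator
    u = u0.astype(complex)
    for _ in range(z_steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))       # D/2
        u = u * np.exp(1j * gamma * np.abs(u) ** 2 * dz)   # full nonlinear step
        u = np.fft.ifft(half_linear * np.fft.fft(u))       # D/2
    return u

# Fundamental soliton test: with beta2 = -1, gamma = 1 a sech pulse keeps its shape.
t = np.linspace(-10, 10, 1024, endpoint=False)
u = split_step_nlse(1 / np.cosh(t), z_steps=200, dz=0.01)
print("peak after propagation:", np.abs(u).max())          # close to 1 for the soliton
```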

  5. Fuzzy split and merge for shadow detection

    Directory of Open Access Journals (Sweden)

    Remya K. Sasi

    2015-03-01

    Full Text Available The presence of shadows in an image often causes problems in computer vision applications such as object recognition and image segmentation. This paper proposes a method to detect shadows in a single image using a fuzzy split-and-merge approach. Split and merge is a classical algorithm used in image segmentation. The predicate function of the classical approach is replaced by a fuzzy predicate in the proposed approach. The method follows a top-down approach of recursively splitting an image into homogeneous quadtree blocks, followed by a bottom-up approach of merging adjacent unique regions. The method has been compared with previous approaches and found to perform better in terms of accuracy.

  6. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context...

  7. An Analysis Of Methods For Sharing An Electronic Platform Of Public Administration Services Using Cloud Computing And Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Maciej Hamiga

    2012-01-01

Full Text Available This paper presents a case study on how to design and implement a public administration services platform, using the SOA paradigm and cloud model for sharing among citizens belonging to particular districts and provinces, providing tight integration with an existing ePUAP system. The basic requirements, architecture and implementation of the platform are all discussed. Practical evaluation of the solution is elaborated using a real-case scenario of the Business Process Management related activities.

  8. Northeast Artificial Intelligence Consortium Annual Report 1986. Volume 6. Part A. Computer Architectures for Very Large Knowledge Bases

    Science.gov (United States)

    1988-06-01

...information retrieval... the incident light energy and, second, that the low duty cycle required if these devices are to interface with "slow" electronics greatly reduces the...Nevertheless, switching has been achieved at energies comparable to electronics and this may eventually put electronics at a disadvantage. Architectural...

  9. Habits of Mind and the Split-Mind Effect: When Computer-Assisted Qualitative Data Analysis Software is Used in Phenomenological Research

    Directory of Open Access Journals (Sweden)

    Erika Goble

    2012-03-01

    Full Text Available When Marshall McLUHAN famously stated "the medium is the message," he was echoing Martin HEIDEGGER's assertion that through our use of technology we can become functions of it. Therefore, how does adopting computer-assisted qualitative data analysis software affect our research activities and, more importantly, our conception of research? These questions are explored by examining the influence NVivo had upon an interdisciplinary phenomenological research project in health ethics. We identify the software's effects and situate our decision to use it within the Canadian health sciences research landscape. We also explore the challenges of remaining true to our project's philosophical foundations, as well as how NVivo altered our being-in-the-world as researchers. This case demonstrates McLUHAN's claim that new technologies invariably initiate new practices and modes of being, and urges researchers to attend to how we are both shaping and being shaped by software. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs120227

  10. Split warhead simultaneous impact

    Directory of Open Access Journals (Sweden)

    Rahul Singh Dhari

    2017-12-01

Full Text Available A projectile system is proposed to improve the efficiency and effectiveness of the damage done by an anti-tank weapon system on its target, by designing a ballistic projectile that can split into multiple warheads and engage the target at the same time. The idea was developed in the interest of saving the time consumed in reloading and the additional rounds wasted on a target during an attack. The proposed system is developed in three steps. First, a mathematical model is prepared using the basic equations of motion. Second, an ejection mechanism for the proposed warhead is explained with the help of schematics. Third, a numerical simulation is carried out using the MATLAB software. The final results show the various ranges and times at which the split can be effectively achieved. With the new system, the number of impact points is increased, and hence the probability of hitting a target is improved.

  11. Computing competition for light in the GREENLAB model of plant growth: a contribution to the study of the effects of density on resource acquisition and architectural development.

    Science.gov (United States)

    Cournède, Paul-Henry; Mathieu, Amélie; Houllier, François; Barthélémy, Daniel; de Reffye, Philippe

    2008-05-01

    The dynamical system of plant growth GREENLAB was originally developed for individual plants, without explicitly taking into account interplant competition for light. Inspired by the competition models developed in the context of forest science for mono-specific stands, we propose to adapt the method of crown projection onto the x-y plane to GREENLAB, in order to study the effects of density on resource acquisition and on architectural development. The empirical production equation of GREENLAB is extrapolated to stands by computing the exposed photosynthetic foliage area of each plant. The computation is based on the combination of Poisson models of leaf distribution for all the neighbouring plants whose crown projection surfaces overlap. To study the effects of density on architectural development, we link the proposed competition model to the model of interaction between functional growth and structural development introduced by Mathieu (2006, PhD Thesis, Ecole Centrale de Paris, France). The model is applied to mono-specific field crops and forest stands. For high-density crops at full cover, the model is shown to be equivalent to the classical equation of field crop production (Howell and Musick, 1985, in Les besoins en eau des cultures; Paris: INRA Editions). However, our method is more accurate at the early stages of growth (before cover) or in the case of intermediate densities. It may potentially account for local effects, such as uneven spacing, variation in the time of plant emergence or variation in seed biomass. The application of the model to trees illustrates the expression of plant plasticity in response to competition for light. Density strongly impacts on tree architectural development through interactions with the source-sink balances during growth. The effects of density on tree height and radial growth that are commonly observed in real stands appear as emerging properties of the model.

  12. Implementation of a cell-wise Block-Gauss-Seidel iterative method for SN transport on a hybrid parallel computer architecture

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Massimiliano [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Perks, Michael [Los Alamos National Laboratory

    2010-12-14

We have implemented a cell-wise, block-Gauss-Seidel (bGS) iterative algorithm, for the solution of the S_n transport equations on the Roadrunner hybrid, parallel computer architecture. A compute node of this massively parallel machine comprises AMD Opteron cores that are linked to a Cell Broadband Engine™ (Cell/B.E.). LAPACK routines have been ported to the Cell/B.E. in order to make use of its parallel Synergistic Processing Elements (SPEs). The bGS algorithm is based on the LU factorization and solution of a linear system that couples the fluxes for all S_n angles and energy groups on a mesh cell. For every cell of a mesh that has been parallel decomposed on the higher-level Opteron processors, a linear system is transferred to the Cell/B.E. and the parallel LAPACK routines are used to compute a solution, which is then transferred back to the Opteron, where the rest of the computations for the S_n transport problem take place. Compared to standard parallel machines, a hundred-fold speedup of the bGS was observed on the hybrid Roadrunner architecture. Numerical experiments with strong and weak parallel scaling demonstrate the bGS method is viable and compares favorably to full parallel sweeps (FPS) on two-dimensional, unstructured meshes when it is applied to optically thick, multi-material problems. As expected, however, it is not as efficient as FPS in optically thin problems.
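
    The following is a generic, illustrative block Gauss-Seidel loop over cell-local linear systems, written with NumPy/SciPy as a stand-in for the per-cell angle/group coupling described above; it is not the Roadrunner/Cell implementation, which offloads the LU factorizations and solves to the SPEs.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def block_gauss_seidel(A, b, sweeps=50):
            """A: dict {(i, j): dense block}; b: list of per-cell right-hand sides."""
            n = len(b)
            x = [np.zeros_like(bi) for bi in b]
            diag_lu = [lu_factor(A[(i, i)]) for i in range(n)]    # factor diagonal blocks once
            for _ in range(sweeps):
                for i in range(n):
                    r = b[i].copy()
                    for j in range(n):
                        if j != i and (i, j) in A:
                            r -= A[(i, j)] @ x[j]                 # subtract off-cell coupling
                    x[i] = lu_solve(diag_lu[i], r)                # solve the local cell system
            return x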

  13. On split Lie triple systems

    Indian Academy of Sciences (India)

    We also introduced in [1] techniques of connection of roots in the framework of split Lie algebras. In the present paper we extend these techniques to the framework of split Lie triple systems so as to obtain a generalization of the results in [1]. We consider the wide class of split Lie triple systems (which contains the class of.

  14. A splitting algorithm for directional regularization and sparsification

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Nielsen, Mads

    2012-01-01

    We present a new split-type algorithm for the minimization of a p-harmonic energy with added data fidelity term. The half-quadratic splitting reduces the original problem to two straightforward problems, that can be minimized efficiently. The minimizers to the two sub-problems can typically...... be computed pointwise and are easily implemented on massively parallel processors. Furthermore the splitting method allows for the computation of solutions to a large number of more advanced directional regularization problems. In particular we are able to handle robust, non-convex data terms, and to define...
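
    To make the splitting idea concrete, the sketch below is a generic half-quadratic splitting loop for a 1-D total-variation-type energy with a quadratic data term; it illustrates the alternation between a pointwise (easily parallelized) subproblem and an FFT-solvable quadratic subproblem, and is not the specific p-harmonic functional or algorithm of the paper.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def hqs_tv_denoise_1d(f, lam=0.5, beta=10.0, iters=100):
            """Approximately minimize 0.5*||u - f||^2 + lam*||D u||_1 (periodic differences)."""
            n = f.size
            dtd = np.abs(1.0 - np.exp(-2j * np.pi * np.arange(n) / n)) ** 2   # eigenvalues of D^T D
            u = f.astype(float).copy()
            for _ in range(iters):
                du = np.roll(u, -1) - u                      # forward difference D u
                d = soft_threshold(du, lam / beta)           # pointwise (parallel) subproblem
                rhs = f + beta * (np.roll(d, 1) - d)         # f + beta * D^T d
                u = np.real(np.fft.ifft(np.fft.fft(rhs) / (1.0 + beta * dtd)))   # quadratic subproblem
            return u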

  15. Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program. Volume 2, Interim business systems guidance

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

As part of the Environmental Restoration Program at Martin Marietta, IEM (Information Engineering Methodology) was developed as part of a complete and integrated approach to the progressive development and subsequent maintenance of automated data sharing systems. This approach is centered around the organization's objectives, inherent data relationships, and business practices. IEM provides the Information Systems community with a tool kit of disciplined techniques supported by automated tools. It includes seven stages: Information Strategy Planning; Business Area Analysis; Business System Design; Technical Design; Construction; Transition; Production. This document focuses on the Business Systems Architecture.

  16. Microprocessors & their operating systems a comprehensive guide to 8, 16 & 32 bit hardware, assembly language & computer architecture

    CERN Document Server

    Holland, R C

    1989-01-01

    Provides a comprehensive guide to all of the major microprocessor families (8, 16 and 32 bit). The hardware aspects and software implications are described, giving the reader an overall understanding of microcomputer architectures. The internal processor operation of each microprocessor device is presented, followed by descriptions of the instruction set and applications for the device. Software considerations are expanded with descriptions and examples of the main high level programming languages (BASIC, Pascal and C). The book also includes detailed descriptions of the three main operatin

  17. Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program. Volume 2, Interim business systems guidance

    International Nuclear Information System (INIS)

    1994-09-01

    As part of the Environmental Restoration Program at Martin Marietta, IEM (Information Engineering Methodology) was developed as part of a complete and integrated approach to the progressive development and subsequent maintenance of automated data sharing systems. This approach is centered around the organization's objectives, inherent data relationships, and business practices. IEM provides the Information Systems community with a tool kit of disciplined techniques supported by automated tools. It includes seven stages: Information Strategy Planning; Business Area Analysis; Business System Design; Technical Design; Construction; Transition; Production. This document focuses on the Business Systems Architecture

  18. Computer-Supported Collaborative Problem-Based Learning: An Instructional Design Architecture for Virtual Learning in Nursing Education.

    Science.gov (United States)

    Naidu, Som; Oliver, Mary

    1996-01-01

    Describes a course developed at the University of Southern Queensland (Australia) that used problem-based learning within a computer-supported collaborative environment to give undergraduate nursing students practice in developing decision-making skills. The use of computer-mediated communication is also discussed. (LRW)

  19. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders......' concerns with respect to a system under development. An architectural prototype is primarily a learning and communication vehicle used to explore and experiment with alternative architectural styles, features, and patterns in order to balance different architectural qualities. The use of architectural...... prototypes in the development process is discussed, and we argue that such prototypes can play a role throughout the entire process. The use of architectural prototypes is illustrated by three distinct cases of creating software systems. We argue that architectural prototyping can provide key insights...

  20. Architectural Prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders......' concerns with respect to a system under development. An architectural prototype is primarily a learning and communication vehicle used to explore and experiment with alternative architectural styles, features, and patterns in order to balance different architectural qualities. The use of architectural...... prototypes in the development process is discussed, and we argue that such prototypes can play a role throughout the entire process. The use of architectural prototypes is illustrated by three distinct cases of creating software systems. We argue that architectural prototyping can provide key insights...

  1. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value both to specialists in Bioinspired Algorithms and Parallel and Distributed Computing, and to computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  2. CLOUD ARCHITECTURE FOR LOGISTIC SERVICES

    OpenAIRE

    Jerzy Korczak; Piotr Lipiñski

    2013-01-01

This paper concerns the organization of the local cloud computing environment at the Wroclaw University of Economics, developed in the framework of the LOGICAL research project. In particular, the architecture of the environment and the implementation of its main components are described, as well as their references to the global cloud computing environment and the general architecture of the VMWare platform.

  3. Computational analysis of the domain architecture and substrate-gating mechanism of prolyl oligopeptidases from Shewanella woodyi and identification of probable lead molecules.

    Science.gov (United States)

    Patil, Priya; Skariyachan, Sinosh; Mutt, Eshita; Kaushik, Swati

    2015-02-06

Prolyl oligopeptidases (POP) are serine proteases, found in prokaryotes and eukaryotes, which hydrolyze peptide bonds containing proline. The current study focuses on the analysis of POP sequences and their distribution and domain architecture in Shewanella woodyi, a Gram-negative, luminous bacterium which causes celiac sprue and similar infections in marine organisms. The POP undergoes a large inter-domain movement, which provides a possible route for the entry of a substrate. Hence, it offers an opportunity to understand the mechanism of substrate gating by studying the domain architecture, and the possibility of identifying a probable drug target. In the present study, the POP sequence was retrieved from the GenBank database and the best homologous templates were identified by a PSI-BLAST search. The three-dimensional structures of the closed and open forms of POP from Shewanella woodyi, which are not available in native form, were generated by homology modeling. Ideal lead molecules were screened by computer-aided virtual screening, and the binding potential of the best leads towards the target was studied by molecular docking. The domain architecture of the POP revealed that it has a propeller domain consisting of β-sheets, surrounded by α-helices, and an α/β hydrolase domain with a catalytic triad containing Ser-564, Asp-646 and His-681. The hypothetical models of open and closed POP showed backbone RMSD values of 0.56 Å and 0.65 Å, respectively. Ramachandran plots of the open and closed POP conformations account for 99.4% and 98.7% of residues in the favoured region, respectively. Our study revealed that the propeller domain comes as an insert between the N-terminal and C-terminal α/β hydrolase domains. Molecular docking, drug-likeness properties and ADME prediction suggested that KUC-103481N and Pramiracetum can be used as probable lead molecules towards the POP from Shewanella woodyi.

  4. Contribution to the algorithmic and efficient programming of new parallel architectures including accelerators for neutron physics and shielding computations

    International Nuclear Information System (INIS)

    Dubois, J.

    2011-01-01

In science, simulation is a key process for research or validation. Modern computer technology allows faster numerical experiments, which are cheaper than real models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a lot of computing power may be necessary. The first part of this thesis is the evaluation of new computing hardware, such as graphics cards or massively multi-core chips, and its application to eigenvalue problems for neutron simulation. Then, in order to address the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then experiment with this work on several national supercomputers, such as the Titane hybrid machine of the Computing Center, Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the relevance of this research for everyday use with local computing resources. (author) [fr

  5. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    Science.gov (United States)

    OʼHara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.

  6. Architecture and Implementation of a Scalable Sensor Data Storage and Analysis System Using Cloud Computing and Big Data Technologies

    Directory of Open Access Journals (Sweden)

    Galip Aydin

    2015-01-01

    Full Text Available Sensors are becoming ubiquitous. From almost any type of industrial applications to intelligent vehicles, smart city applications, and healthcare applications, we see a steady growth of the usage of various types of sensors. The rate of increase in the amount of data produced by these sensors is much more dramatic since sensors usually continuously produce data. It becomes crucial for these data to be stored for future reference and to be analyzed for finding valuable information, such as fault diagnosis information. In this paper we describe a scalable and distributed architecture for sensor data collection, storage, and analysis. The system uses several open source technologies and runs on a cluster of virtual servers. We use GPS sensors as data source and run machine-learning algorithms for data analysis.

  7. Architecture of a Dual-Modality, High-Resolution, Fully Digital Positron Emission Tomography/Computed Tomography (PET/CT) Scanner for Small Animal Imaging

    Science.gov (United States)

    Fontaine, R.; Belanger, F.; Cadorette, J.; Leroux, J.-D.; Martin, J.-P.; Michaud, J.-B.; Pratte, J.-F.; Robert, S.; Lecomte, R.

    2005-06-01

    Contemporary positron emission tomography (PET) scanners are commonly implemented with very large scale integration analog front-end electronics to reduce power consumption, space, noise, and cost. Analog processing yields excellent results in dedicated applications, but offers little flexibility for sophisticated signal processing or for more accurate measurements with newer, fast scintillation crystals. Design goals of the new Sherbrooke PET/computed tomography (CT) scanner are: 1) to achieve 1 mm resolution in both emission (PET) and transmission (CT) imaging using the same detector channels; 2) to be able to count and discriminate individual X-ray photons in CT mode. These requirements can be better met by sampling the analog signal from each individual detector channel as early as possible, using off-the-shelf, 8-b, 100-MHz, high-speed analog-to-digital converters (ADC) and digital processing in field programmable gate arrays (FPGAs). The core of the processing units consists of Xilinx SpartanIIe that can hold up to 16 individual channels. The initial architecture is designed for 1024 channels, but modularity allows extending the system up to 10 K channels or more. This parallel architecture supports count rates in excess of a million hits/s/scintillator in CT mode and up to 100 K events/s/scintillator in PET mode, with a coincidence time window of less than 10 ns full-width at half-maximum.

  8. High-Speed Photonic Reservoir Computing Using a Time-Delay-Based Architecture: Million Words per Second Classification

    Directory of Open Access Journals (Sweden)

    Laurent Larger

    2017-02-01

    Full Text Available Reservoir computing, originally referred to as an echo state network or a liquid state machine, is a brain-inspired paradigm for processing temporal information. It involves learning a “read-out” interpretation for nonlinear transients developed by high-dimensional dynamics when the latter is excited by the information signal to be processed. This novel computational paradigm is derived from recurrent neural network and machine learning techniques. It has recently been implemented in photonic hardware for a dynamical system, which opens the path to ultrafast brain-inspired computing. We report on a novel implementation involving an electro-optic phase-delay dynamics designed with off-the-shelf optoelectronic telecom devices, thus providing the targeted wide bandwidth. Computational efficiency is demonstrated experimentally with speech-recognition tasks. State-of-the-art speed performances reach one million words per second, with very low word error rate. Additionally, to record speed processing, our investigations have revealed computing-efficiency improvements through yet-unexplored temporal-information-processing techniques, such as simultaneous multisample injection and pitched sampling at the read-out compared to information “write-in”.
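
    A conventional software echo state network makes the "read-out" idea concrete; the sketch below is a minimal, generic version (the photonic time-delay reservoir in the paper realizes the same principle with optoelectronic hardware, so the sizes and scalings here are illustrative assumptions).

        import numpy as np

        def make_reservoir(n_in, n_res, spectral_radius=0.9, seed=0):
            rng = np.random.default_rng(seed)
            w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # input weights
            w = rng.normal(0.0, 1.0, (n_res, n_res))              # recurrent weights
            w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
            return w_in, w

        def run_reservoir(u, w_in, w):
            """u: (T, n_in) input sequence -> (T, n_res) matrix of reservoir states."""
            states, x = np.zeros((u.shape[0], w.shape[0])), np.zeros(w.shape[0])
            for t, ut in enumerate(u):
                x = np.tanh(w_in @ ut + w @ x)                    # nonlinear transient dynamics
                states[t] = x
            return states

        def train_readout(states, targets, ridge=1e-6):
            """Learn the linear read-out by ridge regression."""
            return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                                   states.T @ targets)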

  9. Long-term Risedronate Treatment Normalizes Mineralization and Continues to Preserve Trabecular Architecture: Sequential Triple Biopsy Studies with Micro-Computed Tomography

    International Nuclear Information System (INIS)

    Borah, B.; Dufresne, T.; Ritman, E.; Jorgensen, S.; Liu, S.; Chmielewski, P.; Phipps, R.; Zhou, X.; Sibonga, J.; Turner, R.

    2006-01-01

The objective of the study was to assess the time course of changes in bone mineralization and architecture using sequential triple biopsies from women with postmenopausal osteoporosis (PMO) who received long-term treatment with risedronate. Transiliac biopsies were obtained from the same subjects (n = 7) at baseline and after 3 and 5 years of treatment with 5 mg daily risedronate. Mineralization was measured using 3-dimensional (3D) micro-computed tomography (CT) with synchrotron radiation and was compared to levels in healthy premenopausal women (n = 12). Compared to the untreated PMO women at baseline, the premenopausal women had higher average mineralization (Avg-MIN) and peak mineralization (Peak-MIN) by 5.8% (P = 0.003) and 8.0% (P = 0.003), respectively, and lower ratio of low to high-mineralized bone volume (BMR-V) and surface area (BMR-S) by 73.3% (P = 0.005) and 61.7% (P = 0.003), respectively. Relative to baseline, 3 years of risedronate treatment significantly increased Avg-MIN (4.9 ± 1.1%, P = 0.016) and Peak-MIN (6.2 ± 1.5%, P = 0.016), and significantly decreased BMR-V (-68.4 ± 7.3%, P = 0.016) and BMR-S (-50.2 ± 5.7%, P = 0.016) in the PMO women. The changes were maintained at the same level when treatment was continued up to 5 years. These results are consistent with the significant reduction of turnover observed after 3 years of treatment and which was similarly maintained through 5 years of treatment. Risedronate restored the degree of mineralization and the ratios of low- to high-mineralized bone to premenopausal levels after 3 years of treatment, suggesting that treatment reduced bone turnover in PMO women to healthy premenopausal levels. Conventional micro-CT analysis further demonstrated that bone volume (BV/TV) and trabecular architecture did not change from baseline up to 5 years of treatment, suggesting that risedronate provided long-term preservation of trabecular architecture in the PMO women. Overall, risedronate provided sustained

  10. Long-term Risedronate Treatment Normalizes Mineralization and Continues to Preserve Trabecular Architecture: Sequential Triple Biopsy Studies with Micro-Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

Borah, B.; Dufresne, T.; Ritman, E.; Jorgensen, S.; Liu, S.; Chmielewski, P.; Phipps, R.; Zhou, X.; Sibonga, J.; Turner, R.

    2006-01-01

The objective of the study was to assess the time course of changes in bone mineralization and architecture using sequential triple biopsies from women with postmenopausal osteoporosis (PMO) who received long-term treatment with risedronate. Transiliac biopsies were obtained from the same subjects (n = 7) at baseline and after 3 and 5 years of treatment with 5 mg daily risedronate. Mineralization was measured using 3-dimensional (3D) micro-computed tomography (CT) with synchrotron radiation and was compared to levels in healthy premenopausal women (n = 12). Compared to the untreated PMO women at baseline, the premenopausal women had higher average mineralization (Avg-MIN) and peak mineralization (Peak-MIN) by 5.8% (P = 0.003) and 8.0% (P = 0.003), respectively, and lower ratio of low to high-mineralized bone volume (BMR-V) and surface area (BMR-S) by 73.3% (P = 0.005) and 61.7% (P = 0.003), respectively. Relative to baseline, 3 years of risedronate treatment significantly increased Avg-MIN (4.9 ± 1.1%, P = 0.016) and Peak-MIN (6.2 ± 1.5%, P = 0.016), and significantly decreased BMR-V (-68.4 ± 7.3%, P = 0.016) and BMR-S (-50.2 ± 5.7%, P = 0.016) in the PMO women. The changes were maintained at the same level when treatment was continued up to 5 years. These results are consistent with the significant reduction of turnover observed after 3 years of treatment and which was similarly maintained through 5 years of treatment. Risedronate restored the degree of mineralization and the ratios of low- to high-mineralized bone to premenopausal levels after 3 years of treatment, suggesting that treatment reduced bone turnover in PMO women to healthy premenopausal levels. Conventional micro-CT analysis further demonstrated that bone volume (BV/TV) and trabecular architecture did not change from baseline up to 5 years of treatment, suggesting that risedronate provided long-term preservation of trabecular architecture in the PMO women. Overall, risedronate provided

  11. Hybrid MPI/OpenMP parallelization of the explicit Volterra integral equation solver for multi-core computer architectures

    KAUST Repository

    Al Jarro, Ahmed

    2011-08-01

    A hybrid MPI/OpenMP scheme for efficiently parallelizing the explicit marching-on-in-time (MOT)-based solution of the time-domain volume (Volterra) integral equation (TD-VIE) is presented. The proposed scheme equally distributes tested field values and operations pertinent to the computation of tested fields among the nodes using the MPI standard; while the source field values are stored in all nodes. Within each node, OpenMP standard is used to further accelerate the computation of the tested fields. Numerical results demonstrate that the proposed parallelization scheme scales well for problems involving three million or more spatial discretization elements. © 2011 IEEE.
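
    A rough, hedged analogue of the decomposition described above can be written with mpi4py: tested (observation) values are split evenly across ranks, every rank keeps the full set of source values, and NumPy vectorization stands in for the intra-node OpenMP threading. The array sizes and the random stand-in for the interaction operator are assumptions for illustration, not the authors' solver.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_test, n_src = 4096, 4096
        sources = np.ones(n_src)                                   # source values, replicated on all ranks
        local_rows = n_test // size                                # this rank's share of tested values
        interaction = np.random.default_rng(rank).random((local_rows, n_src))   # stand-in operator block

        local_tested = interaction @ sources                       # vectorized intra-node computation
        tested = np.concatenate(comm.allgather(local_tested))      # assemble the full tested-field vector

        if rank == 0:
            print("tested values computed:", tested.shape)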

  12. Future Details of Architecture

    OpenAIRE

    Garcia, Mark

    2014-01-01

Despite the exaggerated news of the untimely 'death of the detail' by Greg Lynn, the architectural detail is now more lifelike and active than ever before. In this era of digital design and production technologies, new materials, parametrics, building information modeling (BIM), augmented realities and the nano–bio–information–computation consilience, the detail is now an increasingly vital force in architecture. Though such digitally designed and produced details are diminishing in size to t...

  13. Models in architectural design

    OpenAIRE

    Pauwels, Pieter

    2017-01-01

    Whereas architects and construction specialists used to rely mainly on sketches and physical models as representations of their own cognitive design models, they rely now more and more on computer models. Parametric models, generative models, as-built models, building information models (BIM), and so forth, they are used daily by any practitioner in architectural design and construction. Although processes of abstraction and the actual architectural model-based reasoning itself of course rema...

  14. Use of a New "Moodle" Module for Improving the Teaching of a Basic Course on Computer Architecture

    Science.gov (United States)

    Trenas, M. A.; Ramos, J.; Gutierrez, E. D.; Romero, S.; Corbera, F.

    2011-01-01

    This paper describes how a new "Moodle" module, called "CTPracticals", is applied to the teaching of the practical content of a basic computer organization course. In the core of the module, an automatic verification engine enables it to process the VHDL designs automatically as they are submitted. Moreover, a straightforward…

  15. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  16. Splitting methods for split feasibility problems with application to Dantzig selectors

    Science.gov (United States)

    He, Hongjin; Xu, Hong-Kun

    2017-05-01

    The split feasibility problem (SFP), which refers to the task of finding a point that belongs to a given nonempty, closed and convex set, and whose image under a bounded linear operator belongs to another given nonempty, closed and convex set, has promising applicability in modeling a wide range of inverse problems. Motivated by the increasingly data-driven regularization in the areas of signal/image processing and statistical learning, in this paper, we study the regularized split feasibility problem (RSFP), which provides a unified model for treating many real-world problems. By exploiting the split nature of the RSFP, we shall gainfully employ several efficient splitting methods to solve the model under consideration. A remarkable advantage of our methods lies in their easier subproblems in the sense that the resulting subproblems have closed-form representations or can be efficiently solved up to a high precision. As an interesting application, we apply the proposed algorithms for finding Dantzig selectors, in addition to demonstrating the effectiveness of the splitting methods through some computational results on synthetic and real medical data sets.
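
    One classical splitting-type iteration for the SFP is the CQ algorithm, x_{k+1} = P_C(x_k - gamma * A^T (A x_k - P_Q(A x_k))) with 0 < gamma < 2/||A||^2. The sketch below implements it for simple box sets, purely to illustrate the "easy subproblems" point made above; it is not necessarily one of the specific methods proposed in the paper.

        import numpy as np

        def project_box(v, lo, hi):
            return np.clip(v, lo, hi)                              # closed-form projection

        def cq_algorithm(A, c_bounds, q_bounds, x0, iters=500):
            gamma = 1.0 / np.linalg.norm(A, 2) ** 2                # step size below 2 / ||A||^2
            x = x0.copy()
            for _ in range(iters):
                Ax = A @ x
                residual = Ax - project_box(Ax, *q_bounds)         # violation of the image constraint
                x = project_box(x - gamma * (A.T @ residual), *c_bounds)
            return x

        # A = np.random.default_rng(2).normal(size=(20, 50))
        # x = cq_algorithm(A, c_bounds=(-1.0, 1.0), q_bounds=(0.0, 2.0), x0=np.zeros(50))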

  17. Digitally-Driven Architecture

    Directory of Open Access Journals (Sweden)

    Henriette Bier

    2010-06-01

Full Text Available The shift from mechanical to digital forces architects to reposition themselves: Architects generate digital information, which can be used not only in designing and fabricating building components but also in embedding behaviours into buildings. This implies that, similar to the way that industrial design and fabrication with its concepts of standardisation and serial production influenced modernist architecture, digital design and fabrication influences contemporary architecture. While standardisation focused on processes of rationalisation of form, mass-customisation as a new paradigm that replaces mass-production, addresses non-standard, complex, and flexible designs. Furthermore, knowledge about the designed object can be encoded in digital data pertaining not just to the geometry of a design but also to its physical or other behaviours within an environment. Digitally-driven architecture implies, therefore, not only digitally-designed and fabricated architecture, it also implies architecture – built form – that can be controlled, actuated, and animated by digital means. In this context, this sixth Footprint issue examines the influence of digital means as pragmatic and conceptual instruments for actuating architecture. The focus is not so much on computer-based systems for the development of architectural designs, but on architecture incorporating digital control, sensing, actuating, or other mechanisms that enable buildings to interact with their users and surroundings in real time in the real world through physical or sensory change and variation.

  18. Digitally-Driven Architecture

    Directory of Open Access Journals (Sweden)

    Henriette Bier

    2014-07-01

Full Text Available The shift from mechanical to digital forces architects to reposition themselves: Architects generate digital information, which can be used not only in designing and fabricating building components but also in embedding behaviours into buildings. This implies that, similar to the way that industrial design and fabrication with its concepts of standardisation and serial production influenced modernist architecture, digital design and fabrication influences contemporary architecture. While standardisation focused on processes of rationalisation of form, mass-customisation as a new paradigm that replaces mass-production, addresses non-standard, complex, and flexible designs. Furthermore, knowledge about the designed object can be encoded in digital data pertaining not just to the geometry of a design but also to its physical or other behaviours within an environment. Digitally-driven architecture implies, therefore, not only digitally-designed and fabricated architecture, it also implies architecture – built form – that can be controlled, actuated, and animated by digital means. In this context, this sixth Footprint issue examines the influence of digital means as pragmatic and conceptual instruments for actuating architecture. The focus is not so much on computer-based systems for the development of architectural designs, but on architecture incorporating digital control, sensing, actuating, or other mechanisms that enable buildings to interact with their users and surroundings in real time in the real world through physical or sensory change and variation.

  19. Computer simulation of sphenopsid architecture. Part II. Calamites multiramis Weiss, as an example of Late Paleozoic arborescent Sphenopsids.

    Science.gov (United States)

    Daviero; Lecoustre

    2000-04-01

    A late Carboniferous arborescent sphenopsid has been modelled for the first time with the AMAP 1 system. The natural entity consisting of the three form species 'Calamites multiramis/Annularia stellata/Calamostachys tuberculata' (respectively the trunk/branches and foliage/cones) representing the aerial part of this plant is reconstructed and its architecture modelled. The different growth stages are extrapolated, generating a dynamic view that did not exist until now. The model is based on the hypothesis that the modelled part is not preformed but results from the successive production and elongation of internodes. This growth led to old ontogenetic stages of the plant in agreement with Remy and Remy's reconstruction (Remy, W., Remy, R., 1977. Die Floren des Erdaltertums. Verlag Glückauf, Essen, 468 pp.). With its verticillate sterile organs and cone-shaped fructifications similar to the extant herbaceous relative Equisetum, this calamite is distinguished from the latter taxon by having possible 'throw-away' phyllomorphic branches. We assumed the presence of a restricted zone of branches located in the apical part of the trunk. Moreover, the production of reproductive organs that succeeds the vegetative stage implies a major photosynthetic phase associated with a monocarpical form of development of the fossil plant.

  20. Memory architecture for efficient utilization of SDRAM: a case study of the computation/memory access trade-off

    DEFF Research Database (Denmark)

    Gleerup, Thomas Møller; Holten-Lund, Hans Erik; Madsen, Jan

    2000-01-01

This paper discusses the trade-off between calculations and memory accesses in a 3D graphics tile renderer for visualization of data from medical scanners. The performance requirement of this application is a frame rate of 25 frames per second when rendering 3D models with 2 million triangles... In software, forward differencing is usually better, but in this hardware implementation, the trade-off has made it possible to develop a very regular memory architecture with a buffering system, which can reach 95% bandwidth utilization using off-the-shelf SDRAM. This is achieved by changing the algorithm... to use a memory access strategy with write-only and read-only phases, and a buffering system, which uses round-robin bank write-access combined with burst read-access.

  1. Robust Software Architecture for Robots

    Science.gov (United States)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  2. Baseline Architecture of ITER Control System

    Science.gov (United States)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split in three phases; preparation of the experiment by defining all parameters; executing the experiment including distributed feed-back control and finally collecting, archiving, analyzing and presenting all data produced by the experiment. We define the control system as a set of hardware and software components with well defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks defined. Special attention is given to timing and real-time communication for distributed control. Finally we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  3. Architecture & Environment

    Science.gov (United States)

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  4. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  5. Online selection of short-lived particles on many-core computer architectures in the CBM experiment at FAIR

    International Nuclear Information System (INIS)

    Zyzak, Maksym

    2016-01-01

Modern experiments in heavy ion collisions operate with huge data rates that cannot be fully stored on the currently available storage devices. Therefore the data flow should be reduced by selecting those collisions that potentially carry information of physics interest. The future CBM experiment will have no simple criteria for selecting such collisions and requires the full online reconstruction of the collision topology, including reconstruction of short-lived particles. In this work the KF Particle Finder package for online reconstruction and selection of short-lived particles is proposed and developed. It reconstructs more than 70 decays, covering signals from all the physics cases of the CBM experiment: strange particles, strange resonances, hypernuclei, low mass vector mesons, charmonium, and open-charm particles. The package is based on the Kalman filter method, providing a full set of the particle parameters together with their errors, including position, momentum, mass, energy, lifetime, etc. It shows a high quality of the reconstructed particles, high efficiencies, and high signal to background ratios. The KF Particle Finder is extremely fast, achieving a reconstruction time of 1.5 ms per minimum-bias Au+Au collision at 25 AGeV beam energy on a single CPU core. It is fully vectorized and parallelized and shows a strong linear scalability on many-core architectures of up to 80 cores. It also scales within the First Level Event Selection package on many-core clusters of up to 3200 cores. The developed KF Particle Finder package is a universal platform for short-lived particle reconstruction, physics analysis and online selection.
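
    For readers unfamiliar with the Kalman filter formalism the package is built on, the sketch below shows one generic linear predict/update step (track and vertex fits apply the same recursion with problem-specific state vectors and measurement models; the matrices here are placeholders).

        import numpy as np

        def kalman_step(x, P, F, Q, H, R, z):
            """One predict+update: state x, covariance P, transition F, process noise Q,
            measurement model H, measurement noise R, measurement z."""
            x_pred = F @ x                                 # predict state
            P_pred = F @ P @ F.T + Q                       # predict covariance
            innovation = z - H @ x_pred
            S = H @ P_pred @ H.T + R                       # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
            x_new = x_pred + K @ innovation
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new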

  6. Relational Architecture

    DEFF Research Database (Denmark)

    Reeh, Henrik

    2018-01-01

    The present study of PhD education and its impact on architectural research singles out three layers of relational architecture. A first layer of relationality appears in a graphic model in which an intimate link between PhD education and architectural research is outlined. The model reflects...... in a scholarly institution (element #3), as well as the certified PhD scholar (element #4) and the architectural profession, notably its labour market (element #5). This first layer outlines the contemporary context which allows architectural research to take place in a dynamic relationship to doctoral education....... A second layer of relational architecture is revealed when one examines the conception of architecture generated in selected PhD dissertations. Focusing on six dissertations with which the author of the present article was involved as a supervisor, the analysis lays bare a series of dynamic...

  7. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    Science.gov (United States)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  8. Architectures And Algorithms For Digital Optical Computing Systems With Applications To Numerical Transforms And Partial Differential Equations

    Science.gov (United States)

    Drabik, Timothy J.; Title, Mark A.; Lee, Sing H.

    1986-06-01

The potential and promise of very high-performance spatial light modulators (SLMs) capable of performing logic operations has motivated the investigation of digital computing systems that possess many desirable attributes of optical systems, namely massive parallelism, global communication at high bandwidths, high reliability, many useful degrees of freedom, robustness in the presence of defects, and simplicity. The parallelism of easily realizable optical single-instruction, multiple-data (SIMD) arrays makes them a natural choice for implementation of highly structured algorithms for the numerical solution of multi-dimensional partial differential equations and the computation of fast numerical transforms. A system comprising several SLMs, an optical read/write memory, and a functional block to perform simple, space-invariant shifts on images has enough flexibility to implement the fastest known methods for partial differential equations (e.g. multi-level methods) as well as a wide variety of numerical transforms (e.g., FFT, Walsh-Hadamard transform, rapid transform), in two or more dimensions, and using either fixed or floating-point arithmetic. Performance is projected at greater than 10^9 floating-point operations/s using SLMs with resolution 1000 x 1000 operating at 1 MHz frame rates.

  9. Split SUSY Radiates Flavor

    CERN Document Server

    Baumgart, Matthew; Zorawski, Thomas

    2014-01-01

    Radiative flavor models where the hierarchies of Standard Model (SM) fermion masses and mixings are explained via loop corrections are elegant ways to solve the SM flavor puzzle. Here we build such a model in the context of Mini-Split Supersymmetry (SUSY) where both flavor and SUSY breaking occur at a scale of 1000 TeV. This model is consistent with the observed Higgs mass, unification, and WIMP dark matter. The high scale allows large flavor mixing among the sfermions, which provides part of the mechanism for radiative flavor generation. In the deep UV, all flavors are treated democratically, but at the SUSY breaking scale, the third, second, and first generation Yukawa couplings are generated at tree level, one loop, and two loops, respectively. Save for one, all the dimensionless parameters in the theory are O(1), with the exception being a modest and technically natural tuning that explains both the smallness of the bottom Yukawa coupling and the largeness of the Cabibbo angle.

  10. How rivers split

    Science.gov (United States)

    Seybold, H. F.; Yi, R.; Devauchelle, O.; Petroff, A.; Rothman, D.

    2012-12-01

River networks have fascinated mankind for centuries. They exhibit a striking geometry with similar shapes repeating on all scales. Yet, how these networks form and create these geometries remains elusive. Recently we have shown that channels fed by subsurface flow split at a characteristic angle of 2π/5, unambiguously consistent with our field measurements in a seepage network on the Florida Panhandle (Fig. 1). Our theory is based only on the simple hypothesis that the channels grow in the direction in which the ground water enters the spring, and on classical solutions of subsurface hydrology. Here we apply our analysis to the ramification of large drainage basins and extend our theory to include slope effects. Using high-resolution stream networks from the National Hydrography Dataset (NHD), we scrutinize our hypothesis in arbitrary channel networks and investigate the dependence of the branching angle on Horton-Strahler order and the maturity of the streams. (Fig. 1 caption: High-resolution topographic map of valley networks incised by groundwater flow, located on the Florida Panhandle near Bristol, FL.)

  11. Split supersymmetry radiates flavor

    Science.gov (United States)

    Baumgart, Matthew; Stolarski, Daniel; Zorawski, Thomas

    2014-09-01

    Radiative flavor models where the hierarchies of Standard Model (SM) fermion masses and mixings are explained via loop corrections are elegant ways to solve the SM flavor puzzle. Here we build such a model in the context of mini-split supersymmetry (SUSY) where both flavor and SUSY breaking occur at a scale of 1000 TeV. This model is consistent with the observed Higgs mass, unification, and dark matter as a weakly interacting massive particle. The high scale allows large flavor mixing among the sfermions, which provides part of the mechanism for radiative flavor generation. In the deep UV, all flavors are treated democratically, but at the SUSY-breaking scale, the third, second, and first generation Yukawa couplings are generated at tree level, one loop, and two loops, respectively. Save for one, all the dimensionless parameters in the theory are O(1), with the exception being a modest and technically natural tuning that explains both the smallness of the bottom Yukawa coupling and the largeness of the Cabibbo angle.

  12. Applied, theoretical modeling of space-based assembly, using expert system architecture for computer-aided engineering tool development

    Science.gov (United States)

    Jolly, Steven Douglas

    1992-01-01

The challenges associated with constructing interplanetary spacecraft and space platforms in low earth orbit are such that it is imperative that comprehensive, preliminary process planning analyses be completed before committing funds for Phase B design (detail design, development). Phase A and 'pre-Phase A' design activities will commonly address engineering questions such as mission-design structural integrity, attitude control, thermal control, etc. But the questions of constructability, maintainability and reliability during the assembly phase usually go unaddressed until the more mature stages of design (or very often production) are reached. This is an unacceptable strategy for future space missions, whether they be government or commercial ventures. After interviews with expert Aerospace and Construction industry planners, a new methodology was formulated and a Blackboard Metaphor Knowledge-based Expert System synthesis model has been successfully developed which can decompose interplanetary vehicles into deliverable orbital subassemblies. Constraint propagation, including the launch vehicle payload shroud envelope, is accomplished with heuristic and numerical algorithms, including a unique adaptation of a reasoning technique used by Stanford researchers in terrestrial automated process planning. The model is a hybrid combination of rule and frame-based representations, designed to integrate into a Computer-Aided Engineering (CAE) environment. Emphasis is placed on the actual joining, rendezvous, and refueling of the orbiting, dynamic spacecraft. Significant results of applying this new methodology to a large Mars interplanetary spacecraft (736,000 kg) designed by Boeing show high correlation with manual decomposition and planning analysis studies, but at a fraction of the time and with little user interaction. Such Computer-Aided Engineering (CAE) tools would greatly leverage the designer's ability to assess constructability.

  13. Fractal Geometry of Architecture

    Science.gov (United States)

    Lorenz, Wolfgang E.

    In Fractals smaller parts and the whole are linked together. Fractals are self-similar, as those parts are, at least approximately, scaled-down copies of the rough whole. In architecture, such a concept has also been known for a long time. Not only architects of the twentieth century called for an overall idea that is mirrored in every single detail, but also Gothic cathedrals and Indian temples offer self-similarity. This study mainly focuses upon the question whether this concept of self-similarity makes architecture with fractal properties more diverse and interesting than Euclidean Modern architecture. The first part gives an introduction and explains Fractal properties in various natural and architectural objects, presenting the underlying structure by computer programmed renderings. In this connection, differences between the fractal, architectural concept and true, mathematical Fractals are worked out to become aware of limits. This is the basis for dealing with the problem whether fractal-like architecture, particularly facades, can be measured so that different designs can be compared with each other under the aspect of fractal properties. Finally the usability of the Box-Counting Method, an easy-to-use measurement method of Fractal Dimension is analyzed with regard to architecture.
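
    A minimal sketch of the box-counting estimate discussed in the study is given below: count the boxes that contain part of the figure at several box sizes and fit the slope of log N against log(1/s). The function name and the stand-in binary image are illustrative assumptions.

        import numpy as np

        def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
            h, w = binary_image.shape
            counts = []
            for s in box_sizes:
                occupied = 0
                for i in range(0, h, s):
                    for j in range(0, w, s):
                        if binary_image[i:i + s, j:j + s].any():   # box contains part of the set
                            occupied += 1
                counts.append(occupied)
            # slope of log N(s) versus log(1/s) estimates the box-counting dimension
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
            return slope

        # facade = np.random.default_rng(3).random((256, 256)) > 0.7   # stand-in binary image
        # print(box_counting_dimension(facade))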

  14. General approach for engineering small-molecule-binding DNA split aptamers.

    Science.gov (United States)

    Kent, Alexandra D; Spiropulos, Nicholas G; Heemstra, Jennifer M

    2013-10-15

    Here we report a general method for engineering three-way junction DNA aptamers into split aptamers. Split aptamers show significant potential for use as recognition elements in biosensing applications, but reliable methods for generating these sequences are currently lacking. We hypothesize that the three-way junction is a "privileged architecture" for the elaboration of aptamers into split aptamers, as it provides two potential splitting sites that are distal from the target binding pocket. We propose a general method for split aptamer engineering that involves removing one loop region, then systematically modifying the number of base pairs in the remaining stem regions in order to achieve selective assembly only in the presence of the target small molecule. We screen putative split aptamer sequence pairs using split aptamer proximity ligation (StAPL) technology developed by our laboratory, but we validate that the results obtained using StAPL translate directly to systems in which the aptamer fragments are assembling noncovalently. We introduce four new split aptamer sequences, which triples the number of small-molecule-binding DNA split aptamers reported to date, and the methods described herein provide a reliable route for the engineering of additional split aptamers, dramatically advancing the potential substrate scope of DNA assembly based biosensors.

  15. The Simulation Intranet Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory (Holmes et al. 1998). The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high fidelity full physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  16. Architectural Drawing - an Animate Field

    DEFF Research Database (Denmark)

    Hougaard, Anna Katrine

    2015-01-01

    Architectural drawing is changing because architects today draw with computers. Due to this change digital diagrams employed by computational architectural practices are often emphasized as powerful structures of control and organisation in the design process. But there are also diagrams, which do...... ways of directing behaviour of artefacts and living things without controlling this behaviour completely. I analyse a musical composition by John Cage as an example of a sketch diagram, and then hypothesize that orthogonal, architectural drawing can work in similar ways. Thereby I hope to point out...... important affordance of architectural drawing as a hybrid between the openness of hand-sketching and the rule-based-ness of diagramming, an affordance which might be useful in the migrational zone of current architectural drawing where traditional hand drawing techniques and computer drawing techniques...

  17. META-GLARE: A meta-system for defining your own computer interpretable guideline system-Architecture and acquisition.

    Science.gov (United States)

    Bottrighi, Alessio; Terenziani, Paolo

    2016-09-01

    Several different computer-assisted management systems of computer interpretable guidelines (CIGs) have been developed by the Artificial Intelligence in Medicine community. Each CIG system is characterized by a specific formalism to represent CIGs, and usually provides a manager to acquire, consult and execute them. Though there are several commonalities between most formalisms in the literature, each formalism has its own peculiarities. The goal of our work is to provide flexible support for the extension or definition of CIG formalisms, and of their acquisition and execution engines. Instead of defining "yet another CIG formalism and its manager", we propose META-GLARE (META Guideline Acquisition, Representation, and Execution), a "meta"-system to define new CIG systems. In this paper, META-GLARE, a meta-system to define new CIG systems, is presented. We try to capture the commonalities among current CIG approaches, by providing (i) a general manager for the acquisition, consultation and execution of hierarchical graphs (representing the control flow of actions in CIGs), parameterized over the types of nodes and of arcs constituting it, and (ii) a library of different elementary components of guideline nodes (actions) and arcs, in which each type definition involves the specification of how objects of this type can be acquired, consulted and executed. We provide generality and flexibility by allowing free aggregations of such elementary components to define new primitive node and arc types. We have carried out several experiments, in which we have used META-GLARE to build a CIG system (Experiment 1 in Section 8), or to extend it (Experiments 2 and 3). Such experiments show that META-GLARE provides useful and easy-to-use support for such tasks. For instance, re-building the Guideline Acquisition, Representation, and Execution (GLARE) system using META-GLARE required less than one day (Experiment 1). META-GLARE is a meta-system for CIGs supporting fast prototyping
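
    The following toy sketch (hypothetical, not META-GLARE code) illustrates the general idea of an execution engine parameterized over pluggable node types, where each elementary type declares how its nodes are acquired and executed:

      # Hypothetical sketch of a type-parameterized guideline engine.
      from dataclasses import dataclass
      from typing import Callable, Dict

      @dataclass
      class NodeType:
          name: str
          acquire: Callable[[dict], dict]   # build node attributes from raw input
          execute: Callable[[dict], None]   # run the node during guideline execution

      REGISTRY: Dict[str, NodeType] = {}

      def register(node_type: NodeType) -> None:
          """Adding a new elementary node type never touches the engine itself."""
          REGISTRY[node_type.name] = node_type

      def run_guideline(nodes: list) -> None:
          """Generic execution engine: walk the nodes and dispatch by node type."""
          for node in nodes:
              REGISTRY[node["type"]].execute(node)

      # A new "query" node type is defined purely by its acquire/execute behaviour.
      register(NodeType(
          name="query",
          acquire=lambda raw: {"type": "query", "question": raw["question"]},
          execute=lambda node: print("Ask clinician:", node["question"]),
      ))

      guideline = [REGISTRY["query"].acquire({"question": "Is the patient febrile?"})]
      run_guideline(guideline)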

  18. Catalyst Architecture

    DEFF Research Database (Denmark)

    ’Catalyst Architecture’ takes its point of departure in a broadened understanding of the role of architecture in relation to developmental problems in large cities. Architectural projects frame particular functions and via their form language, they can provide the user with an aesthetic experience....... The broadened understanding of architecture consists in that an architectural project, by virtue of its placement in the context and of its composition of programs, can have a mediating role in a positive or cultural development of the district in question. In this sense, we talk about architecture as catalyst...... cities on the planet have growing pains and social cohesiveness is under pressure from an increased difference between rich and poor, social segregation, ghettoes, immigration of guest workers and refugees, commercial mass tourism etc. In this context, it is important to ask which role architecture...

  19. Split-illumination electron holography

    International Nuclear Information System (INIS)

    Tanigaki, Toshiaki; Aizawa, Shinji; Suzuki, Takahiro; Park, Hyun Soon; Inada, Yoshikatsu; Matsuda, Tsuyoshi; Taniyama, Akira; Shindo, Daisuke; Tonomura, Akira

    2012-01-01

    We developed split-illumination electron holography that uses an electron biprism in the illuminating system and two biprisms (applicable to one biprism) in the imaging system, enabling holographic interference micrographs of regions far from the sample edge to be obtained. Using a condenser biprism, we split an electron wave into two coherent electron waves: one to illuminate an observation area far from the sample edge in the sample plane and the other to pass through a vacuum space outside the sample. Split-illumination holography has the potential to greatly expand the breadth of applications of electron holography.

  20. An Information Architecture Framework for the USAF: Managing Information from an Enterprise Perspective

    Science.gov (United States)

    2010-03-01

    Architecture Framework (E2AF) • Computer Integrated Manufacturing Open Systems Architecture (CIMOSA) • The Open Group Architecture Framework (TOGAF) ...

  1. A sensor network architecture for urban traffic state estimation with mixed eulerian/lagrangian sensing based on distributed computing

    KAUST Repository

    Canepa, Edward S.

    2014-01-01

    This article describes a new approach to urban traffic flow sensing using decentralized traffic state estimation. Traffic sensor data is generated both by fixed traffic flow sensor nodes and by probe vehicles equipped with a short range transceiver. The data generated by these sensors is sent to a local coordinator node, that poses the problem of estimating the local state of traffic as a mixed integer linear program (MILP). The resulting optimization program is then solved by the nodes in a distributed manner, using branch-and-bound methods. An optimal amount of noise is then added to the maps before dissemination to a central database. Unlike existing probe-based traffic monitoring systems, this system does not transmit user generated location tracks nor any user presence information to a centralized server, effectively preventing privacy attacks. A simulation of the system performance on computer-generated traffic data shows that the system can be implemented with currently available technology. © 2014 Springer International Publishing Switzerland.
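
    As a purely illustrative sketch of the final dissemination step (the paper derives an optimal noise level; only a generic Laplace perturbation with assumed parameters is shown here), a local traffic-density map could be perturbed before upload roughly as follows:

      # Hypothetical illustration only; sensitivity and epsilon values are assumed.
      import numpy as np

      def privatize(density_map, sensitivity=1.0, epsilon=0.5, seed=None):
          """Add Laplace noise with scale = sensitivity / epsilon to each cell."""
          rng = np.random.default_rng(seed)
          noise = rng.laplace(0.0, sensitivity / epsilon, size=density_map.shape)
          return np.clip(density_map + noise, 0.0, None)   # densities stay non-negative

      local_map = np.array([[12.0, 3.5], [0.0, 7.2]])      # vehicles per road cell
      print(privatize(local_map, seed=42))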

  2. Catalyst Architecture

    DEFF Research Database (Denmark)

    Kiib, Hans; Marling, Gitte; Hansen, Peter Mandal

    2014-01-01

    How can architecture promote the enriching experiences of the tolerant, the democratic, and the learning city - a city worth living in, worth supporting and worth investing in? Catalyst Architecture comprises architectural projects, which, by virtue of their location, context and their combination...... of programs, have a role in mediating positive social and/or cultural development. In this sense, we talk about architecture as a catalyst for: sustainable adaptation of the city’s infrastructure appropriate renovation of dilapidated urban districts strengthening of social cohesiveness in the city development...

  3. ISR split-field magnet

    CERN Multimedia

    CERN PhotoLab

    1975-01-01

    The experimental apparatus used at intersection 4 around the Split-Field Magnet by the CERN-Bologna Collaboration (experiment R406). The plastic scintillator telescopes are used for precise pulse-height and time-of-flight measurements.

  4. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans

    Science.gov (United States)

    Cheng, Jie-Zhi; Ni, Dong; Chou, Yi-Hong; Qin, Jing; Tiu, Chui-Mei; Chang, Yeun-Chung; Huang, Chiun-Sheng; Shen, Dinggang; Chen, Chung-Ming

    2016-04-01

    This paper performs a comprehensive study on deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions by avoiding the potential errors caused by inaccurate image processing results (e.g., boundary segmentation), as well as the classification bias resulting from a less robust feature set, as involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited on the two CADx applications for the differentiation of breast ultrasound lesions and lung CT nodules. The SDAE architecture is well equipped with an automatic feature exploration mechanism and noise tolerance advantage, and hence may be suitable for dealing with the intrinsically noisy property of medical image data from various imaging modalities. To demonstrate that the SDAE-based CADx outperforms the conventional scheme, two recent conventional CADx algorithms are implemented for comparison. Ten runs of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features.

  5. Radiology systems architecture.

    Science.gov (United States)

    Deibel, S R; Greenes, R A

    1996-05-01

    This article focuses on the software requirements for enterprise integration in radiology. The needs of a future radiology systems architecture are examined, both at a concrete functional level and at an abstract system-properties level. A component-based approach to software development is described and is validated in the context of each of the abstract system requirements for future radiology computing environments.

  6. Architectural Contestation

    NARCIS (Netherlands)

    Merle, J.

    2012-01-01

    This dissertation addresses the reductive reading of Georges Bataille's work done within the field of architectural criticism and theory which tends to set aside the fundamental ‘broken’ totality of Bataille's oeuvre and also to narrowly interpret it as a mere critique of architectural form,

  7. Systemic Architecture

    DEFF Research Database (Denmark)

    Poletto, Marco; Pasquero, Claudia

    This is a manual investigating the subject of urban ecology and systemic development from the perspective of architectural design. It sets out to explore two main goals: to discuss the contemporary relevance of a systemic practice to architectural design, and to share a toolbox of informational...... design protocols developed to describe the city as a territory of self-organization. Collecting together nearly a decade of design experiments by the authors and their practice, ecoLogicStudio, the book discusses key disciplinary definitions such as ecologic urbanism, algorithmic architecture, bottom......-up or tactical design, behavioural space and the boundary of the natural and the artificial realms within the city and architecture. A new kind of "real-time world-city" is illustrated in the form of an operational design manual for the assemblage of proto-architectures, the incubation of proto...

  8. Development of a computerized handbook of architectural plans

    NARCIS (Netherlands)

    Koutamanis, A.

    1990-01-01

    The dissertation investigates an approach to the development of visual / spatial computer representations for architectural purposes through the development of the computerized handbook of architectural plans (chap), a knowledge-based computer system capable of recognizing the metric properties of

  9. Minimizing the cost of splitting in Monte Carlo radiation transport simulation

    Energy Technology Data Exchange (ETDEWEB)

    Juzaitis, R.J.

    1980-10-01

    A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²·τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
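
    A toy, hypothetical illustration of the cost criterion above (the report itself uses a deterministic S_n analysis, not sampling): estimate the product of sample variance and mean time per history for several candidate splitting ratios and keep the smallest.

      # Toy sketch with made-up probabilities, just to show the cost figure
      # cost = (sample variance) x (mean time per particle history).
      import random

      def run_history(split_ratio):
          """One toy deep-penetration history: returns (tally, simulated work)."""
          weight, tally, work = 1.0, 0.0, 1.0
          if random.random() < 0.5:             # particle reaches the splitting surface
              weight /= split_ratio             # each daughter carries reduced weight
              for _ in range(split_ratio):      # follow every daughter particle
                  work += 1.0
                  if random.random() < 0.1:     # daughter reaches the detector
                      tally += weight
          return tally, work

      def cost(split_ratio, n=100_000):
          tallies, works = zip(*(run_history(split_ratio) for _ in range(n)))
          mean = sum(tallies) / n
          variance = sum((t - mean) ** 2 for t in tallies) / (n - 1)
          time_per_history = sum(works) / n
          return variance * time_per_history    # smaller is cheaper overall

      for ratio in (1, 2, 4, 8, 16):
          print(ratio, round(cost(ratio), 5))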

  10. Minimizing the cost of splitting in Monte Carlo radiation transport simulation

    International Nuclear Information System (INIS)

    Juzaitis, R.J.

    1980-10-01

    A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²·τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed.

  11. Staged Event-Driven Architecture As A Micro-Architecture Of Distributed And Pluginable Crawling Platform

    Directory of Open Access Journals (Sweden)

    Leszek Siwik

    2013-01-01

    Full Text Available There are many crawling systems available on the market, but they are rather closed systems dedicated to performing a particular kind and class of tasks with a predefined scope, strategy, etc. In real life, however, there are meaningful groups of users (e.g. marketing, criminal or governmental analysts) requiring not just yet another crawling system dedicated to performing predefined tasks. They rather need an easy-to-use, user-friendly, all-in-one studio not only for executing and running internet robots and crawlers, but also for graphically (re)defining and (re)composing crawlers according to dynamically changing requirements and use-cases. To realize the above-mentioned idea, the Cassiopeia framework has been designed and developed. One has to remember, however, that the enormous size and structural complexity of the WWW are the reasons why, from a technical and architectural point of view, developing effective internet robots – and the more so developing a framework supporting graphical robot composition – becomes a really challenging task. The crucial aspect in the context of crawling efficiency and scalability is the concurrency model applied. There are two most typical concurrency management models, i.e. classical concurrency based on a pool of threads or processes, and event-driven concurrency. Neither of them is an ideal approach. That is why research on alternative models is still conducted to propose an efficient and convenient architecture for concurrent and distributed applications. One of the promising models is the staged event-driven architecture, which mixes to some extent both of the above-mentioned classical approaches and provides additional benefits, such as splitting the application into separate stages connected by event queues – which is interesting given the requirements for crawler (re)composition. The goal of this paper is to present the idea and the PoC implementation of the Cassiopeia framework, with the special
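
    A minimal, hypothetical sketch of the staged event-driven idea described above (not Cassiopeia code): each stage owns a queue and a worker thread, and stages communicate only by passing events downstream.

      # Minimal SEDA-style sketch; the stage names and handlers are invented.
      import queue
      import threading
      import time

      def stage(name, handler, inbox, outbox=None):
          """Run one stage: consume events from inbox, emit results to outbox."""
          def loop():
              while True:
                  event = inbox.get()
                  if event is None:             # poison pill shuts the stage down
                      if outbox is not None:
                          outbox.put(None)
                      break
                  result = handler(event)
                  if outbox is not None and result is not None:
                      outbox.put(result)
          threading.Thread(target=loop, name=name, daemon=True).start()

      fetch_q, parse_q, store_q = queue.Queue(), queue.Queue(), queue.Queue()

      stage("fetch", lambda url: (url, "<html>" + url + "</html>"), fetch_q, parse_q)
      stage("parse", lambda page: (page[0], len(page[1])), parse_q, store_q)
      stage("store", lambda record: print("stored", record), store_q)

      for url in ("http://a.example", "http://b.example"):
          fetch_q.put(url)
      fetch_q.put(None)                         # shutdown propagates stage by stage
      time.sleep(0.5)                           # toy example: let the daemon threads drain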

  12. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen

    This PhD thesis is motived by a personal interest in the theoretical, practical and creative qualities of architecture. But also a wonder and curiosity about the cultural and social relations architecture represents through its occupation with both the sciences and the arts. Inspired by present i...... with the material appearance of objects, but also the imaginary world of dreams and memories which are concealed with the communicative significance of intentions when designing the future super hospitals....... initiatives in Aalborg Hospital to overcome patient undernutrition by refurbishing eating environments, this thesis engages in an investigation of the interior architectural qualities of patient eating environments. The relevance for this holistic perspective, synthesizing health, food and architecture......, is the current building of a series of Danish ‘super hospitals’ and an increased focus among architectural practices on research-based knowledge produced with the architectural sub-disciplines Healing Architecture and Evidence-Based Design. The problem is that this research does not focus on patient eating...

  13. Humanizing Architecture

    DEFF Research Database (Denmark)

    Toft, Tanya Søndergaard

    2015-01-01

    The article proposes the urban digital gallery as an opportunity to explore the relationship between ‘human’ and ‘technology,’ through the programming of media architecture. It takes a curatorial perspective when proposing an ontological shift from considering media facades as visual spectacles...... agency and a sense of being by way of dematerializing architecture. This is achieved by way of programming the symbolic to provide new emotional realizations and situations of enlightenment in the public audience. This reflects a greater potential to humanize the digital in media architecture....

  14. Healing Architecture

    DEFF Research Database (Denmark)

    Folmer, Mette Blicher; Mullins, Michael; Frandsen, Anne Kathrine

    2012-01-01

    The project examines how architecture and design of space in the intensive unit promotes or hinders interaction between relatives and patients. The primary starting point is the relatives. Relatives’ support and interaction with their loved ones is important in order to promote the patients healing...... process. Therefore knowledge on how space can support interaction is fundamental for the architect, in order to make the best design solutions. Several scientific studies document that the hospital's architecture and design are important for human healing processes, including how the physical environment...... architectural and design solutions in order to improve quality of interaction between relative and patient in the hospital's intensive unit....

  15. Architectural technology

    DEFF Research Database (Denmark)

    2005-01-01

    The booklet offers an overall introduction to the Institute of Architectural Technology and its projects and activities, and an invitation to the reader to contact the institute or the individual researcher for further information. The research, which takes place at the Institute of Architectural...... Technology at the Roayl Danish Academy of Fine Arts, School of Architecture, reflects a spread between strategic, goal-oriented pilot projects, commissioned by a ministry, a fund or a private company, and on the other hand projects which originate from strong personal interests and enthusiasm of individual...

  16. Multiprocessor architecture: Synthesis and evaluation

    Science.gov (United States)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application specific, architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as being software or hardware related. This distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward the removal of the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  17. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    Science.gov (United States)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  18. Quantifying the impact of soil compaction on root system architecture in tomato (Solanum lycopersicum) by X-ray micro-computed tomography.

    Science.gov (United States)

    Tracy, Saoirse R; Black, Colin R; Roberts, Jeremy A; Sturrock, Craig; Mairhofer, Stefan; Craigon, Jim; Mooney, Sacha J

    2012-07-01

    We sought to explore the interactions between roots and soil without disturbance and in four dimensions (i.e. 3-D plus time) using X-ray micro-computed tomography. The roots of tomato Solanum lycopersicum 'Ailsa Craig' plants were visualized in undisturbed soil columns for 10 consecutive days to measure the effect of soil compaction on selected root traits including elongation rate. Treatments included bulk density (1.2 vs. 1.6 g cm(-3)) and soil type (loamy sand vs. clay loam). Plants grown at the higher soil bulk density exploited smaller soil volumes (P < 0.05) and exhibited reductions in root surface area (P < 0.001), total root volume (P < 0.001) and total root length (P < 0.05), but had a greater mean root diameter (P < 0.05) than at low soil bulk density. Swelling of the root tip area was observed in compacted soil (P < 0.05) and the tortuosity of the root path was also greater (P < 0.01). Root elongation rates varied greatly during the 10-d observation period (P < 0.001), increasing to a maximum at day 2 before decreasing to a minimum at day 4. The emergence of lateral roots occurred later in plants grown in compacted soil (P < 0.01). Novel rooting characteristics (convex hull volume, centroid and maximum width), measured by image analysis, were successfully employed to discriminate treatment effects. The root systems of plants grown in compacted soil had smaller convex hull volumes (P < 0.05), a higher centre of mass (P < 0.05) and a smaller maximum width than roots grown in uncompacted soil. Soil compaction adversely affects root system architecture, influencing resource capture by limiting the volume of soil explored. Lateral roots formed later in plants grown in compacted soil and total root length and surface area were reduced. Root diameter was increased and swelling of the root tip occurred in compacted soil.
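
    A hypothetical sketch (not the paper's image-analysis pipeline) of how the three rooting characteristics named above could be computed from 3-D root coordinates using SciPy:

      # Hypothetical sketch with synthetic coordinates; units are assumed millimetres.
      import numpy as np
      from scipy.spatial import ConvexHull

      def root_traits(points_mm):
          """points_mm: (N, 3) array of x, y, z root voxel centres in millimetres."""
          hull = ConvexHull(points_mm)
          convex_hull_volume = hull.volume                        # soil volume "explored"
          centroid = points_mm.mean(axis=0)                       # centre of the root system
          maximum_width = np.ptp(points_mm[:, :2], axis=0).max()  # widest horizontal spread
          return convex_hull_volume, centroid, maximum_width

      rng = np.random.default_rng(0)
      pts = rng.normal(scale=[5.0, 5.0, 20.0], size=(500, 3))     # toy root point cloud
      volume, centre, width = root_traits(pts)
      print(round(volume), centre.round(1), round(width, 1))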

  19. The effects of the Er:YAG laser on trabecular bone micro-architecture: Comparison with conventional dental drilling by micro-computed tomographic and histological techniques.

    Science.gov (United States)

    Zeitouni, Jihad; Clough, Bret; Zeitouni, Suzanne; Saleem, Mohammed; Al Aisami, Kenan; Gregory, Carl

    2017-01-01

    Background: The use of lasers has become increasingly common in the field of medicine and dentistry, and there is a growing need for a deeper understanding of the procedure and its effects on tissue. The aim of this study was to compare the erbium-doped yttrium aluminium garnet (Er:YAG) laser and conventional drilling techniques, by observing the effects on trabecular bone microarchitecture and the extent of thermal and mechanical damage. Methods: Ovine femoral heads were employed to mimic maxillofacial trabecular bone, and cylindrical osteotomies were generated to mimic implant bed preparation. Various laser parameters were tested, as well as a conventional dental drilling technique. The specimens were then subjected to micro-computed tomographic (μCT) histomorphometric analysis and histology. Results: Herein, we demonstrate that μCT measurements of trabecular porosity provide quantitative evidence that laser-mediated cutting preserves the trabecular architecture and reduces thermal and mechanical damage at the margins of the cut. We confirmed these observations with histological studies. In contrast with laser-mediated cutting, conventional drilling resulted in trabecular collapse, reduction of porosity at the margin of the cut and histological signs of thermal damage. Conclusions: This study has demonstrated, for the first time, that μCT and quantification of porosity at the margin of the cut provide a quantitative insight into damage caused by bone cutting techniques. We further show that with laser-mediated cutting, the marrow remains exposed at the margins of the cut, facilitating cellular infiltration and likely accelerating healing. However, with drilling, trabecular collapse and thermal damage are likely to delay healing by restricting the passage of cells to the site of injury and causing localized cell death.

  20. Architectured Nanomembranes

    Energy Technology Data Exchange (ETDEWEB)

    Sturgeon, Matthew R. [Former ORNL postdoc; Hu, Michael Z. [ORNL

    2017-07-01

    This paper has reviewed the frontier field of “architectured membranes” that contains anisotropic oriented porous nanostructures of inorganic materials. Three example types of architectured membranes were discussed with some relevant results from our own research: (1) anodized thin-layer titania membranes on porous anodized aluminum oxide (AAO) substrates of different pore sizes, (2) porous glass membranes on alumina substrate, and (3) guest-host membranes based on infiltration of yttrium-stabilized zirconia inside the pore channels of AAO matrices.

  1. Analysis of Three Multilevel Security Architectures

    OpenAIRE

    Levin, Timothy, E.; Irvine, Cynthia E.; Weissman, Clark; Nguyen, Thuy D.

    2007-01-01

    Proceedings of the Computer Security Architecture Workshop, ACM. November 2, 2007, Fairfax, Virginia, USA. pp. 37-46 Various system architectures have been proposed for high assurance enforcement of multilevel security. This paper provides an analysis of the relative merits of three architectural types, one based on a security kernel, another based on a traditional separation kernel, and a third based on a least-privilege separation kernel. We introduce the Least Privilege architecture, w...

  2. Splitting strings on integrable backgrounds

    International Nuclear Information System (INIS)

    Vicedo, Benoit

    2011-05-01

    We use integrability to construct the general classical splitting string solution on R × S^3. Namely, given any incoming string solution satisfying a necessary self-intersection property at some given instant in time, we use the integrability of the worldsheet σ-model to construct the pair of outgoing strings resulting from a split. The solution for each outgoing string is expressed recursively through a sequence of dressing transformations, the parameters of which are determined by the solutions to Birkhoff factorization problems in an appropriate real form of the loop group of SL_2(C). (orig.)

  3. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  4. Comparing root architectural models

    Science.gov (United States)

    Schnepf, Andrea; Javaux, Mathieu; Vanderborght, Jan

    2017-04-01

    Plant roots play an important role in several soil processes (Gregory 2006). Root architecture development determines the sites in soil where roots provide input of carbon and energy and take up water and solutes. However, root architecture is difficult to determine experimentally when grown in opaque soil. Thus, root architectural models have been widely used and been further developed into functional-structural models that are able to simulate the fate of water and solutes in the soil-root system (Dunbabin et al. 2013). Still, a systematic comparison of the different root architectural models is missing. In this work, we focus on discrete root architecture models where roots are described by connected line segments. These models differ (a) in their model concepts, such as the description of distance between branches based on a prescribed distance (inter-nodal distance) or based on a prescribed time interval. Furthermore, these models differ (b) in the implementation of the same concept, such as the time step size, the spatial discretization along the root axes or the way stochasticity of parameters such as root growth direction, growth rate, branch spacing, branching angles are treated. Based on the example of two such different root models, the root growth module of R-SWMS and RootBox, we show the impact of these differences on simulated root architecture and aggregated information computed from this detailed simulation results, taking into account the stochastic nature of those models. References Dunbabin, V.M., Postma, J.A., Schnepf, A., Pagès, L., Javaux, M., Wu, L., Leitner, D., Chen, Y.L., Rengel, Z., Diggle, A.J. Modelling root-soil interactions using three-dimensional models of root growth, architecture and function (2013) Plant and Soil, 372 (1-2), pp. 93 - 124. Gregory (2006) Roots, rhizosphere and soil: the route to a better understanding of soil science? European Journal of Soil Science 57: 2-12.

  5. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought as a complex and diverse design through customization, telling exactly the revitalized storey about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... expression in the specific housing area. It is the aim of this article to expand the different design strategies which architects can use – to give the individual project attitudes and designs with architectural quality. Through the customized component production it is possible to choose different...... for retrofit design. If we add the question of the installations e.g. ventilation to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer story telling about new and smart system based thinking behind architectural expression....

  6. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Architectural freedom and industrialized architecture. Inge Vestergaard, Associate Professor, Cand. Arch. Aarhus School of Architecture, Denmark Noerreport 20, 8000 Aarhus C Telephone +45 89 36 0000 E-mail inge.vestergaard@aarch.dk Based on the repetitive architecture from the "building boom" 1960...... customization, telling exactly the revitalized storey about the change to a contemporary sustainable and better performed expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing a more sustainable and low energy building technique, which also include... to the building physic problems a new industrialized period has started based on light weight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...

  7. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Based on the repetitive architecture from the “building boom” from 1960 to 1973, it is discussed how architects can handle these Danish element and montage buildings through the transformation to upgraded aesthetical, functional and energy efficient architecture. The method used is analysis...... of cases, parallels to literature studies and client and producer interviews. The analysis compares best practice in Denmark and best practice in Austria. Modern architects accepted the fact that industrialized architecture told the storey of repetition and monotony as basic condition. This article aims...... to explain that architecture can be thought as a complex and diverse design through customization, telling exactly the revitalized storey about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...

  8. Transitioning ISR architecture into the cloud

    Science.gov (United States)

    Lash, Thomas D.

    2012-06-01

    Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.

  9. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to the building physic problems a new industrialized period has started based on light weight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...... to this systematic thinking of the building technique we get a diverse and functional architecture. Creating a new and clearer story telling about new and smart system based thinking behind the architectural expression....

  10. Photoelectrochemical water splitting in separate oxygen and hydrogen cells

    Science.gov (United States)

    Landman, Avigail; Dotan, Hen; Shter, Gennady E.; Wullenkord, Michael; Houaijia, Anis; Maljusch, Artjom; Grader, Gideon S.; Rothschild, Avner

    2017-06-01

    Solar water splitting provides a promising path for sustainable hydrogen production and solar energy storage. One of the greatest challenges towards large-scale utilization of this technology is reducing the hydrogen production cost. The conventional electrolyser architecture, where hydrogen and oxygen are co-produced in the same cell, gives rise to critical challenges in photoelectrochemical water splitting cells that directly convert solar energy and water to hydrogen. Here we overcome these challenges by separating the hydrogen and oxygen cells. The ion exchange in our cells is mediated by auxiliary electrodes, and the cells are connected to each other only by metal wires, enabling centralized hydrogen production. We demonstrate hydrogen generation in separate cells with solar-to-hydrogen conversion efficiency of 7.5%, which can readily surpass 10% using standard commercial components. A basic cost comparison shows that our approach is competitive with conventional photoelectrochemical systems, enabling safe and potentially affordable solar hydrogen production.
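
    As a back-of-the-envelope check using the standard solar-to-hydrogen definition (assumed here, not quoted from the paper), a 7.5% efficiency under 1-sun illumination corresponds to an operating current density of roughly 6.1 mA/cm^2:

      # Assumed standard definition:
      # STH = J_op [mA/cm^2] * 1.23 V * Faradaic efficiency / P_in [mW/cm^2].
      def solar_to_hydrogen(j_ma_per_cm2, p_in_mw_per_cm2=100.0, faradaic=1.0):
          return j_ma_per_cm2 * 1.23 * faradaic / p_in_mw_per_cm2

      print(f"{solar_to_hydrogen(6.1):.1%}")   # about 7.5% at ~6.1 mA/cm^2 under 1 sun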

  11. Split supersymmetry in brane models

    Indian Academy of Sciences (India)

    Type-I string theory in the presence of internal magnetic fields provides a concrete realization of split supersymmetry. To lowest order, gauginos are massless while squarks and sleptons are superheavy. For weak magnetic fields, the correct Standard Model spectrum guarantees gauge coupling unification with sin^2 θ_W ...

  12. VBSCan Split 2017 Workshop Summary

    Energy Technology Data Exchange (ETDEWEB)

    Anders, Christoph Falk; et al.

    2018-01-12

    This document summarises the talks and discussions that took place during the VBSCan Split17 workshop, the first general meeting of the VBSCan COST Action network. This collaboration is aiming at a consistent and coordinated study of vector-boson scattering from the phenomenological and experimental point of view, for the best exploitation of the data that will be delivered by existing and future particle colliders.

  13. Split supersymmetry in brane models

    Indian Academy of Sciences (India)

    November 2006, pp. 793–802. Split supersymmetry in brane models. Ignatios Antoniadis, Department of Physics, CERN-Theory Division, 1211 Geneva 23, Switzerland. ... that LEP data favor the unification of the three SM gauge couplings are smoking guns for the presence of new ...

  14. Water splitting by cooperative catalysis

    NARCIS (Netherlands)

    Hetterscheid, D.G.H.; van der Vlugt, J.I.; de Bruin, B.; Reek, J.N.H.

    2009-01-01

    A mononuclear Ru complex is shown to efficiently split water into H2 and O2 in consecutive steps through a heat- and light-driven process (see picture). Thermally driven H2 formation involves the aid of a non-innocent ligand scaffold, while dioxygen is generated by initial photochemically induced

  15. On split Lie triple systems

    Indian Academy of Sciences (India)

    Lie triple system; system of roots; root space; split Lie algebra; structure theory. 1. Introduction and previous definitions. Throughout this paper, Lie triple systems T are considered of arbitrary dimension and over an arbitrary field K. It is worth mentioning that, unless otherwise stated, there is no restriction on dim Tα or {k ...

  16. On split Lie triple systems

    Indian Academy of Sciences (India)

    The key tool in this work is the notion of connection of roots in the framework of split Lie triple systems. Antonio J. Calderón Martín, Departamento de Matemáticas, Universidad de Cádiz, 11510 Puerto Real, Cádiz, Spain. Manuscript received: 25 January 2008. Proceedings – Mathematical Sciences.

  17. Improved Fast Centralized Retransmission Scheme for High-Layer Functional Split in 5G Network

    Science.gov (United States)

    Xu, Sen; Hou, Meng; Fu, Yu; Bian, Honglian; Gao, Cheng

    2018-01-01

    In order to satisfy the varied critical requirements of 5G and the virtualization of the RAN hardware, a two-level architecture for the 5G RAN has been studied in the 3GPP 5G SI stage. The performance of the PDCP-RLC split option and the intra-RLC split option, the two main candidates for the high-layer functional split, is the subject of an ongoing debate. This paper first gives an overview of the CU-DU split study work in 3GPP. By comparing implementation complexity, standardization impact and system performance, our evaluation shows that the PDCP-RLC split option outperforms the intra-RLC split option. Aiming to reduce the retransmission delay during intra-CU inter-DU handover, the main drawback of the PDCP-RLC split option, this paper proposes an improved fast centralized retransmission solution with low implementation complexity. Finally, system-level simulations show that the PDCP-RLC split option with the proposed scheme can significantly improve the UE's experience.

  18. Architectural geometry

    KAUST Repository

    Pottmann, Helmut

    2014-11-26

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  19. Architecture for Teraflop Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  20. Architectural Engineers

    DEFF Research Database (Denmark)

    Petersen, Rikke Premer

    engineering is addressed from two perspectives – as an educational response and an occupational constellation. Architecture and engineering are two of the traditional design professions and they frequently meet in the occupational setting, but at educational institutions they remain largely estranged....... The paper builds on a multi-sited study of an architectural engineering program at the Technical University of Denmark and an architectural engineering team within an international engineering consultancy based in Denmark. They are both responding to new tendencies within the building industry where...... the role of engineers and architects increasingly overlap during the design process, but their approaches reflect different perceptions of the consequences. The paper discusses some of the challenges that design education, not only within engineering, is facing today: young designers must be equipped...

  1. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects’ understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials......Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods......, while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further...

  2. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods......, while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further...... collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects’ understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials...

  3. Architectural Narratives

    DEFF Research Database (Denmark)

    Kiib, Hans

    2010-01-01

    a functional framework for these concepts, but tries increasingly to endow the main idea of the cultural project with a spatially aesthetic expression - a shift towards “experience architecture.” A great number of these projects typically recycle and reinterpret narratives related to historical buildings......In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as “hybrid cultural projects,” because they intend to combine experience with entertainment, play, and learning. This essay...... identifies new rationales related to this development, and it argues that “cultural planning” has increasingly shifted its focus from a cultural institutional approach to a more market-oriented strategy that integrates art and business. The role of architecture has changed, too. It not only provides...

  4. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    anthropology. On the one hand, there are obviously good reasons for developing architecture based on anthropological insights in local contexts and anthropologically inspired techniques for ‘collaborative formation of issues’. Houses and built environments are huge investments, their life expectancy...... and other spaces that architects are preoccupied with. On the other hand, the distinction between architecture and design is not merely one of scale. Design and architecture represent – at least in Denmark – also quite different disciplinary traditions and methods. Where designers develop prototypes......, architects tend to work with models and plans that are not easily understood by lay people. Further, many architects are themselves sceptical towards notions of user-involvement and collaborative design. They fear that the imagination of citizens and users is restricted to what they are already familiar with...

  5. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....

  6. Reframing Architecture

    DEFF Research Database (Denmark)

    Riis, Søren

    2013-01-01

    I would like to thank Prof. Stephen Read (2011) and Prof. Andrew Benjamin (2011) for both giving inspiring and elaborate comments on my article “Dwelling in-between walls: the architectural surround”. As I will try to demonstrate below, their two different responses not only supplement my article...... focuses on how the absence of an initial distinction might threaten the endeavour of my paper. In my reply to Read and Benjamin, I will discuss their suggestions and arguments, while at the same time hopefully clarifying the postphenomenological approach to architecture....

  7. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
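
    For background, a minimal sketch of the classical splittings that MSM generalizes (Gauss-Seidel and SOR for a linear system A x = b), not the paper's algorithm itself:

      # Classical matrix-splitting iteration; example matrix and omega are assumed.
      import numpy as np

      def sor(A, b, omega=1.0, iters=200):
          """Successive Over-Relaxation; omega = 1.0 reduces to Gauss-Seidel.

          Splits A = M - N with M = D/omega + L and iterates M x_{k+1} = N x_k + b.
          """
          D = np.diag(np.diag(A))
          L = np.tril(A, k=-1)
          M = D / omega + L
          N = M - A                    # so that A = M - N
          x = np.zeros_like(b, dtype=float)
          for _ in range(iters):
              # M is lower triangular, so a dedicated forward substitution could
              # replace this general solve.
              x = np.linalg.solve(M, N @ x + b)
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite example
      b = np.array([1.0, 2.0])
      print(sor(A, b, omega=1.2))              # approaches np.linalg.solve(A, b)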

  8. COMPUTING

    CERN Document Server

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  9. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

    ...of green architecture. The paper argues that this greenification of facades is insufficient: the green is only a skin cladding the exterior envelope, without spatial significance. The paper proposes to flip the order of words from green architecture to architectural green... that describes the architectural exclusivity of this particular architecture genre. The adjective green expresses architectural qualities differentiating green architecture from non-green architecture. Currently, adding trees and vegetation to the building’s facade is the main architectural characteristic...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction: The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functions. Heavy-Ion Programme: The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference, where a large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations: Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview: During the past three months, activities focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  12. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year in which the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year to prepare large samples for physics and detector studies, ramping up to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component, and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  13. Architecture Analysis

    NARCIS (Netherlands)

    Iacob, Maria-Eugenia; Jonkers, Henk; van der Torre, Leon; de Boer, Frank S.; Bonsangue, Marcello; Stam, Andries W.; Lankhorst, Marc M.; Quartel, Dick A.C.; Aldea, Adina; Lankhorst, Marc

    2017-01-01

    This chapter also explains the added value of enterprise architecture analysis techniques in addition to existing, more detailed, domain-specific ones for business processes or software, for example. Analogous to the idea of using the ArchiMate enterprise modelling language to integrate...

  14. Metabolistic Architecture

    DEFF Research Database (Denmark)

    2013-01-01

    Textile Spaces presents different approaches to using textiles as a spatial definer and artistic medium. The publication collages images and text, art and architecture, science, philosophy and literature, process and product, past, present and future. It offers an insight into soft materials...

  15. Textile Architecture

    DEFF Research Database (Denmark)

    Heimdal, Elisabeth Jacobsen

    2010-01-01

    Textiles can be used as building skins, adding new aesthetic and functional qualities to architecture. Just as we humans can put on a coat, buildings can also get dressed. Depending on our mood, or on the weather, we can change coats, and so can the building. But the idea of using textiles...

  16. Stability of split Stirling refrigerators

    International Nuclear Information System (INIS)

    Waele, A T A M de; Liang, W

    2009-01-01

    In many thermal systems spontaneous mechanical oscillations are generated under the influence of large temperature gradients. Well-known examples are Taconis oscillations in liquid-helium cryostats and oscillations in thermoacoustic systems. In split Stirling refrigerators the compressor and the cold finger are connected by a flexible tube. The displacer in the cold head is suspended by a spring. Its motion is pneumatically driven by the pressure oscillations generated by the compressor. In this paper we give the basic dynamic equations of split Stirling refrigerators and investigate the possibility of spontaneous mechanical oscillations if a large temperature gradient develops in the cold finger, e.g. during or after cool down. These oscillations would be superimposed on the pressure oscillations of the compressor and could ruin the cooler performance.
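
    As a purely illustrative companion to the dynamics described above (and not the dynamic equations derived in the paper), the sketch below treats the displacer as a spring-suspended mass with linear damping, driven by a sinusoidal pneumatic force from the compressor. All parameter names and values are invented for the example.

        import numpy as np

        # Toy spring-mass-damper model of the displacer, pneumatically driven by the
        # compressor's pressure oscillation.  Parameter values are illustrative only.
        m = 0.05      # displacer mass [kg]
        k = 200.0     # suspension spring stiffness [N/m]
        c = 0.5       # linear damping coefficient [N s/m]
        A_d = 1e-4    # effective drive area of the displacer [m^2]
        p0 = 2e5      # amplitude of the pressure oscillation [Pa]
        f = 50.0      # compressor drive frequency [Hz]

        dt, t_end = 1e-5, 0.5
        steps = int(t_end / dt)
        x, v = 0.0, 0.0               # displacement [m] and velocity [m/s]
        xs = np.empty(steps)
        for i in range(steps):
            t = i * dt
            force = A_d * p0 * np.sin(2 * np.pi * f * t) - k * x - c * v
            v += dt * force / m       # semi-implicit Euler step
            x += dt * v
            xs[i] = x

        print("late-time displacer amplitude ~ %.2e m" % xs[steps // 2:].max())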

  17. Marc Treib: Representing Landscape Architecture

    DEFF Research Database (Denmark)

    Braae, Ellen Marie

    2008-01-01

    The editor of Representing Landscape Architecture, Marc Treib, argues that there is good reason to evaluate the standard practices of representation that landscape architects have been using for so long. In the rush to the promised land of computer design these practices are now in danger of being left by the wayside. The 14 contributions of this publication, often both fitting and well crafted, offer an approach to how landscape architecture has been and is currently represented: in the design study, in presentation, in criticism, and in the creation of landscape architecture.

  18. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    ...the retrofitting of the existing concrete element blocks from the period. In response to current demands arising from building-physics problems, a new industrialized period has started, based on lightweight elements essentially made of wooden structures and faced with different suitable materials meant for individual... for retrofit design. If we add the question of the installations, e.g. ventilation, to this systematic thinking about building technique, we get a diverse and functional architecture, thereby creating a new and clearer storytelling about new and smart system-based thinking behind architectural expression...

  19. Geometrical Applications of Split Octonions

    Directory of Open Access Journals (Sweden)

    Merab Gogberashvili

    2015-01-01

    It is shown that physical signals and space-time intervals modeled on split-octonion geometry naturally exhibit properties of the conventional (3 + 1)-theory (e.g., number of dimensions, existence of maximal velocities, Heisenberg uncertainty, and particle generations). This paper demonstrates these properties using an explicit representation of the automorphisms of the split octonions, the noncompact form of the exceptional Lie group G2. This group generates specific rotations of the (3 + 4)-vector parts of split octonions with three extra time-like coordinates and, in the infinitesimal limit, imitates standard Poincare transformations. In this picture translations are represented by noncompact Lorentz-type rotations towards the extra time-like coordinates. It is shown how the G2 algebra’s chirality yields an intrinsic left-right asymmetry of a certain 3-vector (spin), as well as a parity-violating effect on light emitted by a moving quantum system. Elementary particles are connected with the special elements of the algebra which nullify octonionic intervals. Then the zero-norm conditions lead to free-particle Lagrangians, which also allow virtual trajectories and exhibit the appearance of spatial horizons governed by mass parameters.
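
    For readers who want to experiment with the algebra itself, the sketch below implements split-octonion multiplication in the Zorn vector-matrix representation (a pair of scalars plus two 3-vectors), which is a standard model of the split octonions but not the G2-based formulation used in the paper. The sign convention for the cross products varies between sources; the snippet checks its own consistency by verifying numerically that the split-signature norm (the "determinant") is multiplicative.

        import numpy as np

        def zorn_mul(x, y):
            # Product of two Zorn vector-matrices (a, v, w, b): a, b are scalars,
            # v, w are 3-vectors.  One common sign convention is used here.
            a1, v1, w1, b1 = x
            a2, v2, w2, b2 = y
            return (a1 * a2 + np.dot(v1, w2),
                    a1 * v2 + b2 * v1 - np.cross(w1, w2),
                    a2 * w1 + b1 * w2 + np.cross(v1, v2),
                    b1 * b2 + np.dot(w1, v2))

        def zorn_norm(x):
            # The "determinant" a*b - v.w: a quadratic form of split signature (4,4).
            a, v, w, b = x
            return a * b - np.dot(v, w)

        # Composition property n(xy) = n(x) * n(y) on random elements; this is
        # what makes the construction a (split) composition algebra.
        rng = np.random.default_rng(0)
        rand = lambda: (rng.normal(), rng.normal(size=3), rng.normal(size=3), rng.normal())
        x, y = rand(), rand()
        print(np.isclose(zorn_norm(zorn_mul(x, y)), zorn_norm(x) * zorn_norm(y)))  # True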

  20. 7 CFR 51.2002 - Split shell.

    Science.gov (United States)

    2010-01-01

    Standards for Grades of Filberts in the Shell, Definitions, § 51.2002 Split shell. Split shell means a shell... of the shell, measured in the direction of the crack. ...