WorldWideScience

Sample records for hardware task migration

  1. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints, together with their optimal allocation onto the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Through these stages, dynamic partial reconfiguration and mixed integer programming are used to achieve pipelined scheduling and efficient placement, enabling parallel execution of the task graph on the reconfigurable device while optimizing placement and scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement in resource utilization of 12.45% of the available reconfigurable resources, corresponding to a resource gain of 17.3% compared with a static design. The configuration overhead is reduced to 2% of the total running time. Thanks to pipelined scheduling, the span of the task graph is reduced by 4% compared with sequential execution of the graph.
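
    As a rough illustration of the kind of constraints this scheduler must respect (not the paper's actual MILP formulation), the sketch below list-schedules a tiny, hypothetical task graph on a single reconfigurable region and checks each task's deadline; all task names, execution times and deadlines are invented.

```java
import java.util.*;

/** Minimal sketch: list-scheduling one period of a task graph with precedence
 *  and deadline constraints on a single reconfigurable region. Illustrative only. */
public class PrecedenceDeadlineSketch {

    record Task(String name, int execTime, int deadline, List<String> preds) {}

    public static void main(String[] args) {
        // Hypothetical task graph, assumed to be listed in topological order.
        List<Task> graph = List.of(
            new Task("A", 4, 10, List.of()),
            new Task("B", 3, 12, List.of("A")),
            new Task("C", 5, 20, List.of("A", "B")));

        Map<String, Integer> finish = new HashMap<>();
        int regionFree = 0;                       // time at which the region becomes free
        for (Task t : graph) {
            int ready = t.preds().stream().mapToInt(finish::get).max().orElse(0);
            int start = Math.max(ready, regionFree);
            int end = start + t.execTime();
            finish.put(t.name(), end);
            regionFree = end;
            System.out.printf("%s: start=%d end=%d deadline %s%n",
                t.name(), start, end, end <= t.deadline() ? "met" : "MISSED");
        }
    }
}
```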

  2. Hardware processors for pattern recognition tasks in experiments with wire chambers

    International Nuclear Information System (INIS)

    Verkerk, C.

    1975-01-01

    Hardware processors for pattern recognition tasks in experiments with multiwire proportional chambers or drift chambers are described. They range from simple processors that decide in real time whether particle trajectories are straight to complex ones that recognize curved tracks. Schematics and block diagrams of the different processors are shown.

  3. The Nicest way to migrate your Windows computer (The Windows 2000 Migration Task Force)

    CERN Document Server

    2001-01-01

    With Windows 2000, CERN users will discover a more stable and reliable working environment and will have access to all the latest applications. The Windows 2000 Migration Task Force comprises a representative from each division.

  4. A Hardware-Supported Algorithm for Self-Managed and Choreographed Task Execution in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Borja Bordel

    2018-03-01

    Nowadays, sensor networks are composed of a great number of tiny resource-constrained nodes whose management is increasingly complex. Although collaborative or choreographed task execution schemes fit the nature of sensor networks best, they are rarely implemented because of their high resource consumption, especially when networks include many resource-constrained devices. Instead, hierarchical networks are usually designed, with a heavyweight orchestrator of considerable processing power at the top that can implement any necessary management solution. Although this orchestration approach solves most practical management problems of sensor networks, a great deal of operation time is wasted while nodes ask the orchestrator to resolve a conflict and wait for the instructions they need to operate. Therefore, this paper proposes a new mechanism for self-managed and choreographed task execution in sensor networks. The proposed solution relies only on a lightweight gateway, instead of a traditional heavy orchestrator, and on a hardware-supported algorithm that consumes a negligible amount of resources in the sensor nodes. The gateway avoids congestion of the entire sensor network, and the hardware-supported algorithm enables a choreographed task execution scheme so that no particular node is overloaded. The performance of the proposed solution is evaluated through numerical and electronic ModelSim-based simulations.

  5. Hardware And Software Architectures For Reconfigurable Time-Critical Control Tasks

    Directory of Open Access Journals (Sweden)

    Adam Piłat

    2007-01-01

    The most popular configuration for controlled laboratory test-rigs is a personal computer (PC) equipped with an I/O board. Dedicated software components allow a wide range of user-defined tasks to be carried out. The functionality of this typical configuration can be customized through PC hardware components and their programmable reconfiguration. The next step in automatic control system design is the embedded solution. Usually, the design process of an embedded control system is supported by high-level software. Dedicated programming tools support the multitasking capability of the microcontroller by allowing different sampling frequencies to be selected for the algorithm blocks. In this way, a multi-layer, multitasking control strategy can be realized on the chip. The proposed solutions implement a rapid-prototyping approach. The available toolkits and device drivers integrate the system-level design environment with the real-time application software, transferring the functionality of MATLAB/Simulink programs to the PC or microcontroller application environment.
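
    To make the multi-rate, multitasking structure described above concrete, here is a minimal illustrative sketch (not taken from the paper, which targets MATLAB/Simulink-generated code) of two control-algorithm blocks sampled at different frequencies; both rates and the empty task bodies are placeholders.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Illustrative sketch of a multi-rate control strategy: two algorithm blocks
 *  running at different sampling periods on the same processor. */
public class MultiRateControlSketch {
    public static void main(String[] args) {
        ScheduledExecutorService exec = Executors.newScheduledThreadPool(2);
        // Fast inner loop, e.g. current control sampled every 1 ms (rate chosen for illustration).
        exec.scheduleAtFixedRate(() -> { /* read sensor, update fast controller */ },
                                 0, 1, TimeUnit.MILLISECONDS);
        // Slow outer loop, e.g. position/supervisory control sampled every 10 ms.
        exec.scheduleAtFixedRate(() -> { /* update supervisory controller */ },
                                 0, 10, TimeUnit.MILLISECONDS);
    }
}
```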

  6. Hardware task-status manager for an RTOS with FIFO communication

    NARCIS (Netherlands)

    Zaykov, P.G.; Kuzmanov, G.; Molnos, A.M.; Goossens, K.G.W.

    2014-01-01

    In this paper, we address the problem of improving the performance of real-time embedded Multiprocessor Systems-on-Chip (MPSoCs). Such MPSoCs often execute data-flow applications composed of multiple tasks that communicate through First-In-First-Out (FIFO) queues. The tasks on each processor in the
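
    The record is cut off above, but the FIFO-based inter-task communication it describes can be sketched as a plain software baseline; this is only an illustration of the mechanism whose status bookkeeping the proposed hardware task-status manager offloads, with invented task bodies.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Sketch of two data-flow tasks communicating through a bounded FIFO queue. */
public class FifoTasksSketch {
    public static void main(String[] args) {
        BlockingQueue<int[]> fifo = new ArrayBlockingQueue<>(4); // bounded FIFO of tokens

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) fifo.put(new int[]{i}); // blocks when FIFO is full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) fifo.take();            // blocks when FIFO is empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start(); consumer.start();
    }
}
```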

  7. Assessing Task Migration Impact on Embedded Soft Real-Time Streaming Multimedia Applications

    Directory of Open Access Journals (Sweden)

    Alimonda Andrea

    2008-01-01

    Multiprocessor systems on chip (MPSoCs) are envisioned as the future of embedded platforms such as game engines, smart phones and palmtop computers. One of the main challenges preventing the widespread diffusion of these systems is the efficient mapping of multitask multimedia applications onto processing elements. Dynamic solutions based on task migration have recently been explored to perform run-time reallocation of tasks in order to maximize performance and optimize energy consumption. Although task migration can provide high flexibility, its overhead must be carefully evaluated when it is applied to soft real-time applications, because these applications impose deadlines that may be missed during the migration process. In this paper we first present a middleware infrastructure supporting dynamic task allocation for NUMA architectures. We then perform an extensive characterization of its impact on multimedia soft real-time applications using a software FM Radio benchmark.
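
    As a back-of-the-envelope illustration of the deadline concern raised above (not the paper's middleware or benchmark), the sketch below checks whether a given migration overhead fits into the slack of a streaming task; all numbers are invented.

```java
/** Rough check: a task migration is only transparent to a soft real-time stream
 *  if its overhead fits into the slack of the affected frames. Illustrative only. */
public class MigrationSlackSketch {
    static boolean migrationSafe(double framePeriodMs, double frameExecMs,
                                 double migrationOverheadMs, int bufferedFrames) {
        // Slack per period plus whatever the output buffer can absorb.
        double slack = (framePeriodMs - frameExecMs) + bufferedFrames * framePeriodMs;
        return migrationOverheadMs <= slack;
    }

    public static void main(String[] args) {
        // e.g. an FM-radio style stream: 26 ms period, 20 ms decode, 2 buffered frames
        System.out.println(migrationSafe(26.0, 20.0, 40.0, 2)); // true: 40 <= 58
    }
}
```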

  8. Assessing Task Migration Impact on Embedded Soft Real-Time Streaming Multimedia Applications

    Directory of Open Access Journals (Sweden)

    Andrea Acquaviva

    2008-01-01

    Multiprocessor systems on chip (MPSoCs) are envisioned as the future of embedded platforms such as game engines, smart phones and palmtop computers. One of the main challenges preventing the widespread diffusion of these systems is the efficient mapping of multitask multimedia applications onto processing elements. Dynamic solutions based on task migration have recently been explored to perform run-time reallocation of tasks in order to maximize performance and optimize energy consumption. Although task migration can provide high flexibility, its overhead must be carefully evaluated when it is applied to soft real-time applications, because these applications impose deadlines that may be missed during the migration process. In this paper we first present a middleware infrastructure supporting dynamic task allocation for NUMA architectures. We then perform an extensive characterization of its impact on multimedia soft real-time applications using a software FM Radio benchmark.

  9. Coherent visualization of spatial data adapted to roles, tasks, and hardware

    Science.gov (United States)

    Wagner, Boris; Peinsipp-Byma, Elisabeth

    2012-06-01

    Modern crisis management requires that users with different roles and computer environments deal with a high volume of data from different sources. For this purpose, Fraunhofer IOSB has developed a geographic information system (GIS) that supports the user depending on the available data and the task to be solved. The system provides merging and visualization of spatial data from various civilian and military sources. It supports the most common spatial data standards (OGC, STANAG) as well as some proprietary interfaces, regardless of whether these are file-based or database-based. The visualization rules are defined with generic Styled Layer Descriptors (SLDs), an Open Geospatial Consortium (OGC) standard. SLDs specify which data are shown, when, and how; the defined SLDs take the users' roles and task requirements into account. In addition, different displays can be used, and the visualization adapts to the resolution of each display, so that excessively high or low information density is avoided. The system also enables users with different roles to work together simultaneously on the same database, with every user provided with appropriate and coherent spatial data for the current task. The refined spatial data are served via the OGC Web Map Service (WMS: server-side rendered raster maps) or the Web Map Tile Service (WMTS: pre-rendered and cached raster maps).
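
    For readers unfamiliar with the OGC services mentioned above, the sketch below shows what a client-side WMS GetMap request carrying a role-specific SLD might look like; the host name, layer and SLD URL are placeholders, not part of the described system.

```java
/** Sketch of a WMS GetMap request whose rendering is driven by an SLD document. */
public class WmsGetMapSketch {
    public static void main(String[] args) {
        String url = "https://gis.example.org/wms"
            + "?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap"
            + "&LAYERS=operational_picture&STYLES="
            + "&CRS=EPSG:4326&BBOX=48.0,7.0,49.0,8.5"
            + "&WIDTH=1920&HEIGHT=1080&FORMAT=image/png"
            // SLD parameter: points the server at role/task-specific styling rules.
            + "&SLD=https://gis.example.org/styles/observer_role.sld";
        System.out.println(url);
    }
}
```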

  10. The Effect of Predicted Vehicle Displacement on Ground Crew Task Performance and Hardware Design

    Science.gov (United States)

    Atencio, Laura Ashley; Reynolds, David W.

    2011-01-01

    NASA continues to explore new launch vehicle concepts that will carry astronauts to low-Earth orbit to replace the soon-to-be retired Space Transportation System (STS) shuttle. A tall, vertically stacked launch vehicle (300 ft or taller) is exposed to the natural environment while positioned on the launch pad. Varying directional winds and vortex shedding cause the vehicle to sway in an oscillating motion. Ground crews working high on the tower and inside the vehicle during launch preparations will be subjected to this motion while conducting critical closeout tasks such as mating fluid and electrical connectors and carrying heavy objects. NASA has not performed these tasks in such an environment since the Saturn V, which was serviced from a movable (but rigid) service structure; commercial launchers are likewise attended by a service structure that moves away from the vehicle for launch. There is concern that vehicle displacement may hinder ground crew operations, impact ground system designs, and ultimately affect launch availability. The objective of the vehicle sway assessment is to replicate the predicted frequencies and displacements of these tall vehicles, examine typical ground crew tasks, and provide insight into potential vehicle design considerations and ground crew performance guidelines. This paper outlines the methodology, configurations, and motion testing performed during the vehicle displacement assessment, which will serve as a Technical Memorandum for future vertically stacked vehicle designs.

  11. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

    Hardware security has recently become a hot topic, with more and more researchers from related research domains joining the area. However, hardware security is often conflated with cybersecurity and cryptography, especially cryptographic hardware, and for the same reason the research scope of hardware security has never been clearly defined. To help researchers who have recently joined this area better understand the challenges and tasks within the hardware security domain, and to help both academia and industry investigate countermeasures and solutions to hardware security problems, this survey paper introduces the key concepts of hardware security as well as its relations to related research topics. Emerging hardware security topics are also depicted, and future trends elaborated, making this survey a useful reference for continuing research efforts in this area.

  12. Task 9. Deployment of photovoltaic technologies: co-operation with developing countries. The role of quality management, hardware certification and accredited training in PV programmes in developing countries

    Energy Technology Data Exchange (ETDEWEB)

    Fitzgerald, M. C. [Institute for Sustainable Power, Highlands Ranch, CO (United States); Oldach, R.; Bates, J. [IT Power Ltd, The Manor house, Chineham (United Kingdom)

    2003-09-15

    This report for the International Energy Agency (IEA), prepared by Task 9 of the Photovoltaic Power Systems (PVPS) programme, examines the role of quality management, hardware certification and accredited training in PV programmes in developing countries. The objective of this document is to assist project developers who are interested in implementing or improving support programmes for the deployment of PV systems for rural electrification. It enables them to address and implement quality assurance measures, with an emphasis on management, technical and training issues and other factors that should be considered for the sustainable implementation of rural electrification programmes. It is considered important that quality also addresses the socio-economic and socio-technical aspects of a programme concept. The authors summarise that, for a PV programme, there are three important areas of quality control to be implemented: quality management, technical standards and quality of training.

  13. Task Migration for Fault-Tolerance in Mixed-Criticality Embedded Systems

    DEFF Research Database (Denmark)

    Saraswat, Prabhat Kumar; Pop, Paul; Madsen, Jan

    2009-01-01

    In this paper we are interested in mixed-criticality embedded applications implemented on distributed architectures. Depending on their time-criticality, tasks can be hard or soft real-time and, regarding safety-criticality, tasks can be fault-tolerant to transient faults, to permanent faults, or have no dependability requirements. We use Earliest Deadline First (EDF) scheduling for the hard tasks and the Constant Bandwidth Server (CBS) for the soft tasks. The CBS parameters determine the quality of service (QoS) of soft tasks. Transient faults are tolerated using checkpointing with roll-back recovery... processors, such that the faults are tolerated, the deadlines for the hard real-time tasks are satisfied and the QoS for soft tasks is maximized. The proposed online adaptive approach has been evaluated using several synthetic benchmarks and a real-life case study.
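
    As a simplified illustration of the scheduling model described above (EDF for hard tasks plus CBS servers for soft tasks), the sketch below performs a single-processor, utilization-based admission test; it assumes implicit deadlines and ignores checkpointing and migration overheads, and all task parameters are invented.

```java
import java.util.List;

/** Simplified admission test: hard tasks under EDF plus Constant Bandwidth Servers
 *  fit on one processor if total utilization does not exceed 1 (implicit deadlines). */
public class EdfCbsAdmissionSketch {
    record HardTask(double wcet, double period) {}
    record CbsServer(double budget, double period) {}

    static boolean schedulable(List<HardTask> hard, List<CbsServer> servers) {
        double u = hard.stream().mapToDouble(t -> t.wcet() / t.period()).sum()
                 + servers.stream().mapToDouble(s -> s.budget() / s.period()).sum();
        return u <= 1.0;
    }

    public static void main(String[] args) {
        System.out.println(schedulable(
            List.of(new HardTask(2, 10), new HardTask(3, 15)),    // hard utilization 0.4
            List.of(new CbsServer(1, 5), new CbsServer(2, 20)))); // server bandwidth 0.3 -> true
    }
}
```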

  14. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  15. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to

  16. Open Hardware Business Models

    Directory of Open Access Journals (Sweden)

    Edy Ferreira

    2008-04-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  17. Open Hardware Business Models

    OpenAIRE

    Edy Ferreira

    2008-01-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  18. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs, and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  19. Hardware description languages

    Science.gov (United States)

    Tucker, Jerry H.

    1994-01-01

    Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASIC's). However, VHDL is rapidly gaining in popularity.

  20. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

    This book introduces readers to various threats faced during design and fabrication by today’s integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or “IC Overproduction,” insertion of malicious circuits, referred to as “Hardware Trojans”, which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  1. ZEUS hardware control system

    Science.gov (United States)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-12-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.

  2. ZEUS hardware control system

    International Nuclear Information System (INIS)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-01-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users. (orig.)

  3. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access...... to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal...... for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path...

  4. Automation of Flexible Migration Workflows

    Directory of Open Access Journals (Sweden)

    Dirk von Suchodoletz

    2011-03-01

    Many digital preservation scenarios are based on the migration strategy, which itself is heavily tool-dependent. For popular, well-defined and often open file formats – e.g., digital images such as PNG, GIF, JPEG – a wide range of tools exist. Migration workflows become more difficult with proprietary formats, as used by the various text-processing applications that have appeared over the last two decades. If a certain file format cannot be rendered with current software, emulation of the original environment remains a valid option. For instance, with the original Lotus AmiPro or Word Perfect it is not a problem to save an object of this type as ASCII text or Rich Text Format. In specific environments it is even possible to send the file to a virtual printer, thereby producing a PDF as migration output. Such manual migration tasks typically involve human interaction, which may be feasible for a small number of objects but not for larger batches of files. We propose a novel approach using a software-operated VNC abstraction layer in order to replace human interaction with machine interaction. Emulators or virtualization tools equipped with a VNC interface are very well suited to this approach, but screen, keyboard and mouse interaction is just part of the setup: digital objects also need to be transferred into the original environment and extracted again after processing. Nevertheless, the complexity of the new generation of migration services is rising quickly; a preservation workflow now comprises not only the migration tool itself, but a complete software and virtual hardware stack with recorded workflows linked to every supported migration scenario. The requirements of OAIS management must therefore include proper software archiving, emulator selection, and system image and recording handling. The concept of view-paths could help either to automatically determine the proper pre-configured virtual environment or to set up system

  5. Hardware Objects for Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Thalinger, Christian; Korsholm, Stephan

    2008-01-01

    Java, as a safe and platform independent language, avoids access to low-level I/O devices or direct memory access. In standard Java, low-level I/O is not a concern; it is handled by the operating system. However, in the embedded domain resources are scarce and a Java virtual machine (JVM) without...... an underlying middleware is an attractive architecture. When running the JVM on bare metal, we need access to I/O devices from Java; therefore we investigate a safe and efficient mechanism to represent I/O devices as first class Java objects, where device registers are represented by object fields. Access...... to those registers is safe as Java’s type system regulates it. The access is also fast as it is directly performed by the bytecodes getfield and putfield. Hardware objects thus provide an object-oriented abstraction of low-level hardware devices. As a proof of concept, we have implemented hardware objects...
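
    A minimal sketch of the hardware-object idea described above, assuming a hypothetical serial-port device: the register names, bit masks and the absence of a public constructor are illustrative, not the authors' actual API.

```java
/** Illustrative hardware object: device registers exposed as fields of a Java
 *  object, so reads and writes compile down to plain field accesses. */
public final class SerialPort {          // one object per device instance
    public volatile int status;          // mapped onto the device status register
    public volatile int data;            // mapped onto the device data register

    private SerialPort() {}              // created by the JVM/board support, not by 'new'

    public void write(byte b) {
        while ((status & 0x01) == 0) { } // busy-wait until a transmit-ready bit is set
        data = b & 0xff;                 // field write becomes a device register write
    }
}
```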

  6. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  7. The VMTG Hardware Description

    CERN Document Server

    Puccio, B

    1998-01-01

    The document describes the hardware features of the CERN Master Timing Generator. This board is the common platform for the transmission of the General Machine Timing required by the CERN accelerators. In addition, the paper shows the various jumper options for customising the card, which is compliant with the VMEbus standard.

  8. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  9. RRFC hardware operation manual

    International Nuclear Information System (INIS)

    Abhold, M.E.; Hsue, S.T.; Menlove, H.O.; Walton, G.

    1996-05-01

    The Research Reactor Fuel Counter (RRFC) system was developed to assay the ²³⁵U content of spent Material Test Reactor (MTR) type fuel elements underwater in a spent fuel pool. RRFC assays the ²³⁵U content using active neutron coincidence counting and also incorporates an ion chamber for gross gamma-ray measurements. This manual describes the RRFC hardware, including detectors, electronics, and performance characteristics.

  10. Hardware Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-01-01

    We present the application of hardware-accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware-accelerated solution is faster than the current technique used by scientists.
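
    For reference, the absorption-only regime mentioned above corresponds to the standard attenuation form of the radiative transport equation, which a cell projection algorithm evaluates along each detector ray; the discretized sum below is a generic statement under that assumption, not the paper's exact formulation.

```latex
% Absorption-only attenuation along a detector ray (standard Beer-Lambert form):
% mu is the energy-dependent attenuation coefficient, approximated by summing
% over the path lengths \Delta s_k of the projected cells.
I(E) = I_0(E)\, \exp\!\Big(-\int_{\mathrm{ray}} \mu(E, \mathbf{x})\, \mathrm{d}s\Big)
     \approx I_0(E)\, \exp\!\Big(-\sum_{k \in \mathrm{cells}} \mu_k(E)\, \Delta s_k\Big)
```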

  11. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  12. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    The contents of this book cover the system board and memory, performance, the system timer, system clock and specifications; the coprocessor, including its programming interface and hardware interface; the power supply, including input and output, protection of the DC outputs and the Power Good signal; the 84-key and 101/102-key keyboards; the BIOS system; the 80286 instruction set and the 80287 coprocessor; characters, keystrokes and colors; and the communication and compatibility of the IBM personal computer, covering application direction, multitasking and codes for distinguishing systems.

  13. A Hybrid Hardware and Software Component Architecture for Embedded System Design

    Science.gov (United States)

    Marcondes, Hugo; Fröhlich, Antônio Augusto

    Embedded systems are increasing in complexity, while several metrics such as time-to-market, reliability, safety and performance must be considered during their design. A component-based design in which components can migrate between hardware and software helps to meet these metrics. To enable this, we define hybrid hardware and software components as development artifacts that can be deployed as different combinations of hardware and software elements. In this paper, we present an architecture for developing such components in order to build a repository of components that can migrate between the hardware and software domains to meet the requirements of the system design.
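
    A minimal sketch of the hybrid-component idea, assuming an invented CRC component: the same interface is met either by a pure-software implementation or by a stub standing in for a hardware block, so a deployment can migrate the component between domains without changing client code.

```java
/** Illustrative hybrid component: one contract, a software and a hardware-backed
 *  implementation. Names and the CRC example are invented for illustration. */
public class HybridComponentSketch {
    interface Crc32Component {            // the component's contract
        int compute(byte[] data);
    }

    static final class SoftwareCrc32 implements Crc32Component {
        public int compute(byte[] data) {
            java.util.zip.CRC32 crc = new java.util.zip.CRC32();
            crc.update(data);
            return (int) crc.getValue();
        }
    }

    static final class HardwareCrc32 implements Crc32Component {
        public int compute(byte[] data) {
            // Would feed 'data' to an FPGA/ASIC CRC block and read back the result;
            // stubbed here because the hardware interface is platform specific.
            throw new UnsupportedOperationException("hardware block not present");
        }
    }

    public static void main(String[] args) {
        Crc32Component crc = new SoftwareCrc32();   // deployment decision, not client code
        System.out.printf("%08x%n", crc.compute("hello".getBytes()));
    }
}
```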

  14. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'.IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system.For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge:Equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.);Training of personnel designated by Division Leade...

  15. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojan, use of secure element, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduce designers to the concept of salutar...

  16. Open hardware for open science

    CERN Multimedia

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  17. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  18. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  19. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables...... worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java....

  20. HARDWARE TROJAN IDENTIFICATION AND DETECTION

    OpenAIRE

    Samer Moein; Fayez Gebali; T. Aaron Gulliver; Abdulrahman Alkandari

    2017-01-01

    The majority of techniques developed to detect hardware trojans are based on specific attributes. Further, the ad hoc approaches employed to design methods for trojan detection are largely ineffective. Hardware trojans have a number of attributes which can be used to systematically develop detection techniques. Based on this concept, a detailed examination of current trojan detection techniques and the characteristics of existing hardware trojans is presented. This is used to dev...

  1. Hardware assisted hypervisor introspection.

    Science.gov (United States)

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. Furthermore, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experimental results show that our method can effectively detect hypercall-based attacks at some performance cost. Lastly, we discuss future approaches for reducing the performance cost and for preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  2. LHCb: Hardware Data Injector

    CERN Multimedia

    Delord, V; Neufeld, N

    2009-01-01

    The LHCb High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 1 MHz of events previously selected by the first-level hardware trigger. The selected events are consolidated into files and then sent to permanent storage for subsequent analysis on the Grid. The goal of the LHCb readout upgrade is to lift the 1 MHz limitation, which means speeding up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or faster technologies and might also need new networking protocols: a customized TCP or proprietary solutions. A test module is presented which integrates into the existing LHCb infrastructure. It is a 10-Gigabit traffic generator, flexible enough to generate LHCb's raw data packets using dummy data or simulated data. These data are seen by the DAQ as real data coming from the sub-detectors. The implementation is based on an FPGA with a 10 Gigabit Ethernet interface. This module is integrated into the experiment control system. The architecture, ...

  3. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  4. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and to introduce the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  5. FY1995 evolvable hardware chip; 1995 nendo shinkasuru hardware chip

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This project aims at the development of 'Evolvable Hardware' (EHW), which can adapt its hardware structure to the environment to attain better hardware performance under the control of genetic algorithms. EHW is a key technology for exploring new application areas requiring real-time performance and on-line adaptation. 1. Development of an EHW-LSI for function-level hardware evolution, which includes 15 DSPs in one chip. 2. Application of the EHW to practical industrial applications such as data compression, ATM control, and digital mobile communication. 3. Two patents: (1) the architecture and processing method for a programmable EHW-LSI; (2) a method of loss-less data compression using EHW. 4. The first international conference on evolvable hardware, the Intl. Conf. on Evolvable Systems (ICES96), was held by the authors. It was decided at ICES96 that ICES will be held every two years, alternating between Japan and Europe, and a new society has accordingly been established. (NEDO)
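
    As a toy illustration of hardware evolution under genetic-algorithm control (not the EHW-LSI itself), the sketch below evolves a bit-string "configuration" against a placeholder fitness function; in real evolvable hardware the bit string would program the device and fitness would be measured on the resulting circuit.

```java
import java.util.Random;

/** Toy genetic algorithm over bit-string hardware configurations. Illustrative only. */
public class EvolvableHardwareSketch {
    static final int GENES = 32, POP = 20, GENERATIONS = 100;
    static final Random rnd = new Random(42);

    // Placeholder fitness: count of set bits; a real system would evaluate the circuit.
    static int fitness(long cfg) { return Long.bitCount(cfg); }

    public static void main(String[] args) {
        long[] pop = new long[POP];
        for (int i = 0; i < POP; i++) pop[i] = rnd.nextLong() & ((1L << GENES) - 1);

        for (int g = 0; g < GENERATIONS; g++) {
            long[] next = new long[POP];
            for (int i = 0; i < POP; i++) {
                long a = pop[rnd.nextInt(POP)], b = pop[rnd.nextInt(POP)];
                long parent = fitness(a) >= fitness(b) ? a : b;   // tournament selection
                next[i] = parent ^ (1L << rnd.nextInt(GENES));    // single-bit mutation
            }
            pop = next;
        }
        long best = pop[0];
        for (long c : pop) if (fitness(c) > fitness(best)) best = c;
        System.out.println("best fitness = " + fitness(best));
    }
}
```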

  6. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware components of a mobile device is described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and,

  7. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be made. These drivers act more like plugins for the software. If the software is being used at E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also include hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.

  8. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.

  9. FY1995 evolvable hardware chip; 1995 nendo shinkasuru hardware chip

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This project aims at the development of 'Evolvable Hardware' (EHW), which can adapt its hardware structure to the environment to attain better hardware performance under the control of genetic algorithms. EHW is a key technology for exploring new application areas requiring real-time performance and on-line adaptation. 1. Development of an EHW-LSI for function-level hardware evolution, which includes 15 DSPs in one chip. 2. Application of the EHW to practical industrial applications such as data compression, ATM control, and digital mobile communication. 3. Two patents: (1) the architecture and processing method for a programmable EHW-LSI; (2) a method of loss-less data compression using EHW. 4. The first international conference on evolvable hardware, the Intl. Conf. on Evolvable Systems (ICES96), was held by the authors. It was decided at ICES96 that ICES will be held every two years, alternating between Japan and Europe, and a new society has accordingly been established. (NEDO)

  10. Migration of trochanteric cerclage cable debris to the knee joint

    Directory of Open Access Journals (Sweden)

    Kathleen M. Kollitz, BS

    2014-01-01

    Migrating orthopedic hardware has widely been reported in the literature. Most reported cases of migrating hardware involve smooth Kirschner wires or loosening/fracture of hardware involved with joint stabilization/fixation. It is unusual for hardware to migrate within the soft tissues. In some cases, smooth Kirschner wires have migrated within the thoracic cage—a proposed mechanism for this phenomenon is the negative intrathoracic pressure. While wires have also been reported to gain access to circulation, transporting them over larger distances, the majority of broken or retained wires remain local. We report a case of a 34-year-old man in whom numerous fragments of braided cable migrated from the hip to the knee.

  11. Practical Data Migration

    CERN Document Server

    Morris, Johny

    2012-01-01

    This book is for executives and practitioners tasked with the movement of data from old systems to a new repository. It uses a series of steps guaranteed to get the reader from an empty new system to one that is working and backed by the user population. Using this proven methodology will vastly increase the chances of a successful migration.

  12. Chip-Multiprocessor Hardware Locks for Safety-Critical Java

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Puffitsch, Wolfgang; Schoeberl, Martin

    2013-01-01

    and may void a task set's schedulability. In this paper we present a hardware locking mechanism to reduce the synchronization overhead. The solution is implemented for the chip-multiprocessor version of the Java Optimized Processor in the context of safety-critical Java. The implementation is compared...

  13. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is that this transitioning should be done by the shop floor operators, not highly specialized

  14. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware

  15. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions-and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor, Andrew Robinson and Rasp

  16. Hardware standardization for embedded systems

    International Nuclear Information System (INIS)

    Sharma, M.K.; Kalra, Mohit; Patil, M.B.; Mohanty, Ashutos; Ganesh, G.; Biswas, B.B.

    2010-01-01

    Reactor Control Division (RCnD) has been one of the main designers of safety and safety related systems for power reactors. These systems have been built using in-house developed hardware. Since the present set of hardware was designed long ago, a need was felt to design a new family of hardware boards. A Working Group on Electronics Hardware Standardization (WG-EHS) was formed with an objective to develop a family of boards, which is general purpose enough to meet the requirements of the system designers/end users. RCnD undertook the responsibility of design, fabrication and testing of boards for embedded systems. VME and a proprietary I/O bus were selected as the two system buses. The boards have been designed based on present day technology and components. The intelligence of these boards has been implemented on FPGA/CPLD using VHDL. This paper outlines the various boards that have been developed with a brief description. (author)

  17. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. An examination of the trends leading to the consideration of PCs for HEP is given, along with the status of the work being done at various HEP labs and universities.

  18. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors and the Boeing Prime contract out of Johnson Space Center, provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its' contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as, the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010; and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  19. Biomedical applications engineering tasks

    Science.gov (United States)

    Laenger, C. J., Sr.

    1976-01-01

    The engineering tasks performed in response to needs articulated by clinicians are described. Initial contacts were made with these clinician-technology requestors by the Southwest Research Institute NASA Biomedical Applications Team. The basic purpose of the program was to effectively transfer aerospace technology into functional hardware to solve real biomedical problems.

  20. Repeat migration and disappointment.

    Science.gov (United States)

    Grant, E K; Vanderkamp, J

    1986-01-01

    This article investigates the determinants of repeat migration among the 44 regions of Canada, using information from a large micro-database which spans the period 1968 to 1971. The explanation of repeat migration probabilities is a difficult task, and this attempt is only partly successful. Many of the explanatory variables are not significant, and the overall explanatory power of the equations is not high. In the area of personal characteristics, the variables related to age, sex, and marital status are generally significant and have the expected signs. The distance variable has a strongly positive effect on onward move probabilities. Variables related to prior migration experience have an important impact that differs between return and onward probabilities. In particular, the occurrence of prior moves has a striking effect on the probability of onward migration. The variable representing disappointment, or relative success of the initial move, plays a significant role in explaining repeat migration probabilities. The disappointment variable represents the ratio of actual versus expected wage income in the year after the initial move, and its effect on both repeat migration probabilities is always negative and almost always highly significant. The repeat probabilities diminish after a year's stay in the destination region, but disappointment in the most recent year still has a bearing on the delayed repeat probabilities. While the quantitative impact of the disappointment variable is not large, it is difficult to draw comparisons since similar estimates are not available elsewhere.

  1. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed in that the system transmits information to the cells that the first cell has...

  2. Hardware-Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S.; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester
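
    The following sketch is not from the paper; it is a minimal CPU-side illustration of the absorption-only model described above, in which each detector pixel records Beer-Lambert attenuation along a ray. It assumes a uniform voxel grid and axis-aligned rays rather than the curvilinear hexahedral meshes the authors project.

    ```python
    import numpy as np

    def absorption_radiograph(mu, dz, i0=1.0):
        """Simulate an absorption-only radiograph of a voxelized object.

        mu : 3-D array of attenuation coefficients (1/cm), indexed [x, y, z]
        dz : voxel thickness along the ray direction (cm)
        i0 : incident X-ray intensity

        Each detector pixel (x, y) receives I = I0 * exp(-sum(mu * dz))
        along the z-axis (Beer-Lambert law).
        """
        optical_depth = mu.sum(axis=2) * dz    # line integral of mu along z
        return i0 * np.exp(-optical_depth)     # transmitted intensity per pixel

    # Example: a 64x64x64 phantom with a dense block in the middle
    mu = np.zeros((64, 64, 64))
    mu[24:40, 24:40, 24:40] = 0.5              # attenuating block (1/cm)
    image = absorption_radiograph(mu, dz=0.1)
    print(image.min(), image.max())
    ```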

  3. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  4. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

    With many servers and server parts, the environment of warehouse-sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better, a project codenamed “hardware hound”, focusing on hardware failure trending and hardware inventory, has been started at CERN. By creating and using a hardware-oriented data set - the inventory - with detailed information on servers and their parts, as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  5. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.
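
    As an illustration of the tracking stage described above (not the authors' implementation), the following minimal sketch runs a constant-velocity Kalman filter over object positions extracted from a depth map; the frame rate, noise covariances and two-dimensional state are assumptions made for brevity.

    ```python
    import numpy as np

    dt = 1.0 / 25.0                              # assumed frame interval (s)
    F = np.array([[1, 0, dt, 0],                 # state transition: [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0],                  # only position is measured
                  [0, 1, 0, 0]])
    Q = np.eye(4) * 1e-3                         # process noise (tuning parameter)
    R = np.eye(2) * 1e-2                         # measurement noise (tuning parameter)

    x = np.zeros(4)                              # initial state
    P = np.eye(4)                                # initial covariance

    def kalman_step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with measurement z = (x, y) taken from the stereo depth map
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x, P

    for z in [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([1.2, 2.2])]:
        x, P = kalman_step(x, P, z)
    print(x)                                     # estimated position and velocity
    ```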

  6. Qualification of software and hardware

    International Nuclear Information System (INIS)

    Gossner, S.; Schueller, H.; Gloee, G.

    1987-01-01

    The qualification of on-line process control equipment is subdivided into three areas: 1) materials and structural elements; 2) on-line process-control components and devices; 3) electrical systems (reactor protection and confinement system). Microprocessor-aided process-control equipment is difficult to verify for failure-free function owing to the complexity of the functional structures of the hardware and to the variety of the software feasible for microprocessors. Hence, qualification will make great demands on the inspecting expert. (DG) [de

  7. Migration chemistry

    International Nuclear Information System (INIS)

    Carlsen, L.

    1992-05-01

    Migration chemistry, the influence of chemical, biochemical and physico-chemical reactions on the migration behaviour of pollutants in the environment, is an interplay between the actual nature of the pollutant and the characteristics of the environment, such as pH, redox conditions and organic matter content. The wide selection of possible pollutants in combination with varying geological media, as well as the operation of different chemical, biochemical and physico-chemical reactions, complicates the prediction of the influence of these processes on the mobility of pollutants. The report summarizes a wide range of potential pollutants in the terrestrial environment as well as a variety of chemical, biochemical and physico-chemical reactions which can be expected to influence the migration behaviour, comprising diffusion, dispersion, convection, sorption/desorption, precipitation/dissolution, transformations/degradations, biochemical reactions and complex formation. The latter comprises the complexation of metal ions as well as non-polar organics to naturally occurring organic macromolecules. The influence of the single types of processes on the migration process is elucidated based on theoretical studies. The influence of chemical, biochemical and physico-chemical reactions on the migration behaviour is unambiguous, as the processes apparently control the transport of pollutants in the terrestrial environment. As the simple, conventional K_D concept breaks down, it is suggested that the migration process should be described in terms of the alternative concepts chemical dispersion, average-elution-time and effective retention. (AB) (134 refs.)

  8. Effort Estimation in BPMS Migration

    OpenAIRE

    Drews, Christopher; Lantow, Birger

    2018-01-01

    Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation re...

  9. Door Hardware and Installations; Carpentry: 901894.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    The curriculum guide outlines a course designed to provide instruction in the selection, preparation, and installation of hardware for door assemblies. The course is divided into five blocks of instruction (introduction to doors and hardware, door hardware, exterior doors and jambs, interior doors and jambs, and a quinmester post-test) totaling…

  10. Hardware interface unit for control of shuttle RMS vibrations

    Science.gov (United States)

    Lindsay, Thomas S.; Hansen, Joseph M.; Manouchehri, Davoud; Forouhar, Kamran

    1994-01-01

    Vibration of the Shuttle Remote Manipulator System (RMS) increases the time for task completion and reduces task safety for manipulator-assisted operations. If the dynamics of the manipulator and the payload can be physically isolated, performance should improve. Rockwell has developed a self contained hardware unit which interfaces between a manipulator arm and payload. The End Point Control Unit (EPCU) is built and is being tested at Rockwell and at the Langley/Marshall Coupled, Multibody Spacecraft Control Research Facility in NASA's Marshall Space Flight Center in Huntsville, Alabama.

  11. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code being used at CERN for the beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely, rings of charges and point-to-point. This report presents the studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with the point-to-point simulations of space-charge effects can be speeded up at least 72 times using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance at least 4 times without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeating computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...
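
    For readers unfamiliar with the point-to-point model, the sketch below shows the O(N^2) pairwise structure that the GPU parallelizes; it is illustrative Python rather than the Travel code, with units and the Coulomb constant simplified. The tiling strategy mentioned above corresponds to evaluating this double loop in blocks small enough to fit in the GPU's limited memory.

    ```python
    import numpy as np

    def point_to_point_forces(pos, charge, k=1.0):
        """Naive O(N^2) pairwise Coulomb-like force sum.

        pos    : (N, 3) particle positions
        charge : (N,)  particle charges
        k      : Coulomb constant (set to 1; units are illustrative)
        """
        n = len(pos)
        forces = np.zeros_like(pos)
        for i in range(n):
            d = pos[i] - pos                   # vectors from every particle j to i
            r2 = (d ** 2).sum(axis=1)
            r2[i] = np.inf                     # exclude self-interaction
            w = k * charge[i] * charge / r2 ** 1.5
            forces[i] = (w[:, None] * d).sum(axis=0)
        return forces

    rng = np.random.default_rng(0)
    pos = rng.normal(size=(100, 3))
    q = np.ones(100)
    print(point_to_point_forces(pos, q).shape)  # (100, 3)
    ```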

  12. Lessons learned from hardware and software upgrades of IT-DB services

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk gives an overview of recent changes in CERN database infrastructure. The presentation describes database service evolution, in particular new hardware & storage installation, integration with Agile infrastructure, complexity of validation strategy and finally the migration and upgrade process concerning the most critical database services.

  13. Hardware Support for Dynamic Languages

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; Karlsson, Sven; Probst, Christian W.

    2011-01-01

    In recent years, dynamic programming languages have enjoyed increasing popularity. For example, JavaScript has become one of the most popular programming languages on the web. As the complexity of web applications is growing, compute-intensive workloads are increasingly handed off to the client side. While a lot of effort is put in increasing the performance of web browsers, we aim for multicore systems with dedicated cores to effectively support dynamic languages. We have designed Tinuso, a highly flexible core for experimentation that is optimized for high performance when implemented on FPGA. We composed a scalable multicore configuration where we study how hardware support for software speculation can be used to increase the performance of dynamic languages.

  14. IDEAS and App Development Internship in Hardware and Software Design

    Science.gov (United States)

    Alrayes, Rabab D.

    2016-01-01

    In this report, I will discuss the tasks and projects I have completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI), which will attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused in PCB design and camera integration. My goals and objectives for this project were to successfully integrate fully functioning custom hardware extenders on the wearable technology headset to minimize the size of hardware on the smart glasses headset for maximum user comfort; to successfully integrate fully functioning camera onto the headset. By the end of this semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully-functioning camera into the system.

  15. Migrating Worker

    DEFF Research Database (Denmark)

    Hansen, Hans

    This is the preliminary report on the results obtained in the Migrating Worker-project. This project was initiated by the Danish Ministry of Finance with the aim of illustrating the effects of the 1408/71 agreement and the bilateral double taxation agreements Denmark has with the countries included...

  16. Dateline Migration.

    Science.gov (United States)

    Tomasi, Lydio E., Ed.

    1995-01-01

    Presents data on international migration and its effects in and between various countries in North America, Europe, and Africa. Discussions include refugee, immigrant, and migrant worker flows; the legal, political, and social problems surrounding immigrants; alien terrorism and law enforcement problems; and migrant effects on education, social…

  17. A Framework for Hardware-Accelerated Services Using Partially Reconfigurable SoCs

    Directory of Open Access Journals (Sweden)

    MACHIDON, O. M.

    2016-05-01

    Full Text Available The current trend towards “Everything as a Service” fosters a new approach on reconfigurable hardware resources. This innovative, service-oriented approach has the potential of bringing a series of benefits for both reconfigurable and distributed computing fields by favoring a hardware-based acceleration of web services and increasing service performance. This paper proposes a framework for accelerating web services by offloading the compute-intensive tasks to reconfigurable System-on-Chip (SoC) devices, as integrated IP (Intellectual Property) cores. The framework provides a scalable, dynamic management of the tasks and hardware processing cores, based on dynamic partial reconfiguration of the SoC. We have enhanced security of the entire system by making use of the built-in detection features of the hardware device and also by implementing active counter-measures that protect the sensitive data.
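
    A toy sketch of the offloading idea follows; it is not the framework's API. The slot names, the hashing task and the fallback policy are all hypothetical, standing in for an IP core programmed into a partially reconfigurable region.

    ```python
    import queue
    import hashlib

    class AcceleratorPool:
        """Toy model of a pool of partially reconfigurable accelerator slots.

        A real system would program an IP core into a free slot via partial
        reconfiguration; here a slot is just a token taken from a queue.
        """
        def __init__(self, n_slots):
            self.slots = queue.Queue()
            for i in range(n_slots):
                self.slots.put(f"slot{i}")

        def acquire(self):
            try:
                return self.slots.get_nowait()   # a free slot, or None
            except queue.Empty:
                return None

        def release(self, slot):
            self.slots.put(slot)

    def software_hash(payload: bytes) -> str:
        return hashlib.sha256(payload).hexdigest()

    def hardware_hash(slot: str, payload: bytes) -> str:
        # Placeholder for a driver call into the reconfigurable fabric.
        return hashlib.sha256(payload).hexdigest()

    def handle_request(pool, payload: bytes) -> str:
        slot = pool.acquire()
        if slot is None:
            return software_hash(payload)        # software fallback
        try:
            return hardware_hash(slot, payload)  # offloaded to the accelerator
        finally:
            pool.release(slot)

    pool = AcceleratorPool(n_slots=2)
    print(handle_request(pool, b"compute-intensive input"))
    ```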

  18. Outline of a Hardware Reconfiguration Framework for Modular Industrial Mobile Manipulators

    DEFF Research Database (Denmark)

    Schou, Casper; Bøgh, Simon; Madsen, Ole

    2014-01-01

    This paper presents concepts and ideas of a hardware reconfiguration framework for modular industrial mobile manipulators. Mobile manipulators pose a highly flexible production resource due to their ability to autonomously navigate between workstations. However, due to this high flexibility, new approaches to the operation of the robots are needed. Reconfiguring the robot to a new task should be carried out by shop floor operators and, thus, be both quick and intuitive. Recent research has already proposed a method for intuitive robot programming. However, this relies on a predetermined hardware configuration. Finding a single multi-purpose hardware configuration suited to all tasks is considered unrealistic. As a result, the need for reconfiguration of the hardware is inevitable. In this paper an outline of a framework for making hardware reconfiguration quick and intuitive is presented. Two main...

  19. Constructing Hardware in a Scale Embedded Language

    Energy Technology Data Exchange (ETDEWEB)

    2014-08-21

    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

  20. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  1. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  2. Web tools to monitor and debug DAQ hardware

    International Nuclear Information System (INIS)

    Desavouret, Eugene; Nogiec, Jerzy M.

    2003-01-01

    A web-based toolkit to monitor and diagnose data acquisition hardware has been developed. It allows for remote testing, monitoring, and control of VxWorks data acquisition computers and associated instrumentation using the HTTP protocol and a web browser. This solution provides concurrent and platform independent access, supplementary to the standard single-user rlogin mechanism. The toolkit is based on a specialized web server, and allows remote access and execution of select system commands and tasks, execution of test procedures, and provides remote monitoring of computer system resources and connected hardware. Various DAQ components such as multiplexers, digital I/O boards, analog to digital converters, or current sources can be accessed and diagnosed remotely in a uniform and well-organized manner. Additionally, the toolkit application supports user authentication and is able to enforce specified access restrictions
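
    The toolkit itself runs on VxWorks, but the underlying idea of exposing hardware status over HTTP can be sketched in a few lines of Python; the endpoint name and the status fields below are hypothetical.

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json, time

    def read_daq_status():
        """Placeholder for querying the DAQ hardware (ADCs, multiplexers, ...)."""
        return {"uptime_s": int(time.monotonic()),
                "adc0_volts": 1.234,
                "tasks": ["readout", "monitor"]}

    class StatusHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/status":
                body = json.dumps(read_daq_status()).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Any web browser can now monitor the hardware at http://host:8080/status
        HTTPServer(("", 8080), StatusHandler).serve_forever()
    ```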

  3. Effort Estimation in BPMS Migration

    Directory of Open Access Journals (Sweden)

    Christopher Drews

    2018-04-01

    Full Text Available Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation regarding the technical aspects of BPMS migration. The framework provides questions for BPMS comparison and an effort evaluation schema. The applicability of the framework is evaluated based on a simplified BPMS migration scenario.

  4. Software-Controlled Dynamically Swappable Hardware Design in Partially Reconfigurable Systems

    Directory of Open Access Journals (Sweden)

    Huang Chun-Hsian

    2008-01-01

    Full Text Available We propose two basic wrapper designs and an enhanced wrapper design for arbitrary digital hardware circuit designs such that they can be enhanced with the capability for dynamic swapping controlled by software. A hardware design with either of the proposed wrappers can thus be swapped out of the partially reconfigurable logic at runtime in some intermediate state of computation and then swapped in when required to continue from that state. The context data is saved to a buffer in the wrapper at interruptible states, and then the wrapper takes care of saving the hardware context to communication memory through a peripheral bus, and later restoring the hardware context after the design is swapped in. The overheads of the hardware standardization and the wrapper in terms of additional reconfigurable logic resources and the time for context switching are small and generally acceptable. With the capability for dynamic swapping, high priority hardware tasks can interrupt low-priority tasks in real-time embedded systems so that the utilization of hardware space per unit time is increased.
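
    The wrapper protocol can be illustrated with a small software stand-in (an assumption for exposition, not the proposed hardware design): a task advances to an interruptible state, its context is copied out to a buffer, and a later swap-in restores that context and continues.

    ```python
    class SwappableTask:
        """Software stand-in for a hardware task wrapped for dynamic swapping.

        The task only stops at "interruptible states"; at those points its
        context (here, a loop index and an accumulator) can be copied out,
        mimicking the wrapper's context buffer.
        """
        def __init__(self, data):
            self.data = data
            self.context = {"i": 0, "acc": 0}   # wrapper's context buffer

        def run(self, budget):
            """Execute up to `budget` steps, then stop at an interruptible state."""
            i, acc = self.context["i"], self.context["acc"]
            for _ in range(budget):
                if i >= len(self.data):
                    break
                acc += self.data[i]
                i += 1
            self.context = {"i": i, "acc": acc} # save context for a later swap-in
            return i >= len(self.data)          # True when finished

    task = SwappableTask(list(range(10)))
    done = task.run(budget=4)                   # swapped out after 4 steps
    saved = dict(task.context)                  # context moved to communication memory
    task.context = saved                        # swap-in: restore context
    while not done:
        done = task.run(budget=4)
    print(task.context["acc"])                  # 45
    ```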

  5. Computer hardware description languages - A tutorial

    Science.gov (United States)

    Shiva, S. G.

    1979-01-01

    The paper introduces hardware description languages (HDL) as useful tools for hardware design and documentation. The capabilities and limitations of HDLs are discussed along with the guidelines needed in selecting an appropriate HDL. The directions for future work are provided and attention is given to the implementation of HDLs in microcomputers.

  6. A preference for migration

    OpenAIRE

    Stark, Oded

    2007-01-01

    At least to some extent migration behavior is the outcome of a preference for migration. The pattern of migration as an outcome of a preference for migration depends on two key factors: imitation technology and migration feasibility. We show that these factors jointly determine the outcome of a preference for migration and we provide examples that illustrate how the prevalence and transmission of a migration-forming preference yield distinct migration patterns. In particular, the imitation of...

  7. An evaluation of Skylab habitability hardware

    Science.gov (United States)

    Stokes, J.

    1974-01-01

    For effective mission performance, participants in space missions lasting 30-60 days or longer must be provided with hardware to accommodate their personal needs. Such habitability hardware was provided on Skylab. Equipment defined as habitability hardware was that equipment composing the food system, water system, sleep system, waste management system, personal hygiene system, trash management system, and entertainment equipment. Equipment not specifically defined as habitability hardware but which served that function were the Wardroom window, the exercise equipment, and the intercom system, which was occasionally used for private communications. All Skylab habitability hardware generally functioned as intended for the three missions, and most items could be considered as adequate concepts for future flights of similar duration. Specific components were criticized for their shortcomings.

  8. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.
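
    As a worked example of the classical plate theory mentioned above, the snippet below evaluates the textbook natural-frequency formula for a simply supported rectangular plate; the boundary conditions, dimensions and material values are illustrative assumptions, not the Thwacker or sieve geometry.

    ```python
    import math

    def plate_natural_frequency(m, n, a, b, h, E, rho, nu):
        """Natural frequency (Hz) of mode (m, n) of a simply supported
        rectangular plate, from classical (Kirchhoff) plate theory:

            omega_mn = pi^2 * ((m/a)^2 + (n/b)^2) * sqrt(D / (rho * h)),
            D = E * h^3 / (12 * (1 - nu^2))
        """
        D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))
        omega = math.pi ** 2 * ((m / a) ** 2 + (n / b) ** 2) * math.sqrt(D / (rho * h))
        return omega / (2.0 * math.pi)

    # Illustrative numbers only: a 100 mm x 80 mm x 0.5 mm aluminium plate
    f11 = plate_natural_frequency(1, 1, a=0.10, b=0.08, h=0.0005,
                                  E=70e9, rho=2700.0, nu=0.33)
    print(f"fundamental mode ~ {f11:.0f} Hz")
    ```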

  9. Fingerprint Sensors: Liveness Detection Issue and Hardware based Solutions

    Directory of Open Access Journals (Sweden)

    Shahzad Memon

    2012-01-01

    Full Text Available Securing an automated and unsupervised fingerprint recognition system is one of the most critical and challenging tasks in government and commercial applications. In these systems, the detection of liveness of a finger placed on a fingerprint sensor is a major issue that needs to be addressed in order to ensure the credibility of the system. The main focus of this paper is to review the existing fingerprint sensing technologies in terms of liveness detection and discusses hardware based ‘liveness detection’ techniques reported in the literature for automatic fingerprint biometrics.

  10. Dispersal and migration

    Directory of Open Access Journals (Sweden)

    Schwarz, C.

    2004-06-01

    philopatric movement of geese using a classic multi-state design. Previous studies of philopatry often rely upon simple return rates; however, good mark-recapture studies do not need to assume equal detection probabilities in space and time. This is likely the most important contribution of multi-state modelling to the study of movement. As with many of these studies, the most pressing problem in the analysis is the explosion in the number of parameters and the need to choose parsimonious models to get good precision. Drake and Alisauska demonstrate that model choice still remains an art with a great deal of biological insight being very helpful in the task. There is still plenty of scope for novel methods to study migration. Traditionally, there has been a clear-cut distinction between birds being labelled as “migrant” or “resident” on the basis of field observations and qualitative interpretations of patterns of ring-recoveries. However, there are intermediate species where only part of the population migrates (partial migrants) or where different components of the population migrate to different extents (differential migrants). Siriwardena, Wernham and Baillie (Siriwardena et al., 2004) develop a novel method that produces a quantitative index of migratory tendency. The method uses distributions of ringing-to-recovery distances to classify individual species' patterns of movement relative to those of other species. The areas between species' cumulative distance distributions are used with multi-dimensional scaling to produce a similarity map among species. This map can be used to investigate the factors that affect the migratory strategies that species adopt, such as body size, territoriality and distribution, and in studies of their consequences for demographic parameters such as annual survival and the timing of breeding. The key assumption of the method is the similar recovery effort of species over space and time. It would be interesting to

  11. Transmission delays in hardware clock synchronization

    Science.gov (United States)

    Shin, Kang G.; Ramanathan, P.

    1988-01-01

    Various methods, both with software and hardware, have been proposed to synchronize a set of physical clocks in a system. Software methods are very flexible and economical but suffer an excessive time overhead, whereas hardware methods require no time overhead but are unable to handle transmission delays in clock signals. The effects of nonzero transmission delays in synchronization have been studied extensively in the communication area in the absence of malicious or Byzantine faults. The authors show that it is easy to incorporate the ideas from the communication area into the existing hardware clock synchronization algorithms to take into account the presence of both malicious faults and nonzero transmission delays.

  12. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  13. Hardware device binding and mutual authentication

    Science.gov (United States)

    Hamlet, Jason R; Pierson, Lyndon G

    2014-03-04

    Detection and deterrence of device tampering and subversion by substitution may be achieved by including a cryptographic unit within a computing device for binding multiple hardware devices and mutually authenticating the devices. The cryptographic unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a binding PUF value. The cryptographic unit uses the binding PUF value during an enrollment phase and subsequent authentication phases. During a subsequent authentication phase, the cryptographic unit uses the binding PUF values of the multiple hardware devices to generate a challenge to send to the other device, and to verify a challenge received from the other device to mutually authenticate the hardware devices.
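
    A simplified sketch of the challenge-response idea follows. The binding PUF value is simulated by a random byte string and the response function is an HMAC, which are assumptions for illustration; the patent's actual construction and enrollment protocol are not reproduced here.

    ```python
    import hmac, hashlib, os

    class Device:
        """Toy model of a hardware device whose cryptographic unit holds a
        binding PUF value (here simulated by a fixed byte string)."""
        def __init__(self, name, puf_value: bytes, peer_puf: bytes):
            self.name = name
            self.puf = puf_value        # would come from the on-chip PUF circuit
            self.peer_puf = peer_puf    # enrolled during the binding phase

        def challenge(self) -> bytes:
            self.nonce = os.urandom(16)
            return self.nonce

        def respond(self, nonce: bytes) -> bytes:
            # Response keyed by this device's own PUF value
            return hmac.new(self.puf, nonce, hashlib.sha256).digest()

        def verify(self, response: bytes) -> bool:
            expected = hmac.new(self.peer_puf, self.nonce, hashlib.sha256).digest()
            return hmac.compare_digest(response, expected)

    puf_a, puf_b = os.urandom(32), os.urandom(32)
    a = Device("A", puf_a, peer_puf=puf_b)
    b = Device("B", puf_b, peer_puf=puf_a)

    # Mutual authentication: each side challenges the other
    assert a.verify(b.respond(a.challenge()))
    assert b.verify(a.respond(b.challenge()))
    print("devices mutually authenticated")
    ```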

  14. Implementation of Hardware Accelerators on Zynq

    DEFF Research Database (Denmark)

    Toft, Jakob Kenn

    In recent years it has become obvious that the performance of general purpose processors is having trouble meeting the requirements of high performance computing applications of today. This is partly due to the relatively high power consumption, compared to the performance, of general purpose processors, which has made hardware accelerators an essential part of several datacentres and the world's fastest super-computers. In this work, two different hardware accelerators were implemented on a Xilinx Zynq SoC platform mounted on the ZedBoard platform. The two accelerators are based on two different... Their performance was evaluated against that of the ARM Cortex-A9 processor featured on the Zynq SoC, with regard to execution time, power dissipation and energy consumption. The implementation of the hardware accelerators was successful. Use of the Monte Carlo processor resulted in a significant increase in performance. The Telco hardware accelerator...

  15. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  16. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  17. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. The document describes the step-by-step process of image data being received at LLNL, then being processed and made available to authorized personnel and collaborators. Throughout this document references will be made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  18. Software for Managing Inventory of Flight Hardware

    Science.gov (United States)

    Salisbury, John; Savage, Scott; Thomas, Shirman

    2003-01-01

    The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.
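
    The inventory and search capability can be pictured with a minimal relational sketch such as the one below; the table layout, field names and sample parts are hypothetical and are not the FHSRS schema.

    ```python
    import sqlite3

    # Minimal relational inventory: one table of hardware items plus a search
    # on a technical characteristic, in the spirit of the system described.
    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE inventory (
            part_no   TEXT PRIMARY KEY,
            name      TEXT,
            project   TEXT,
            status    TEXT,     -- e.g. 'in stock', 'issued', 'residual'
            material  TEXT,
            qty       INTEGER
        )""")
    db.executemany(
        "INSERT INTO inventory VALUES (?, ?, ?, ?, ?, ?)",
        [("MSFC-001", "bracket, titanium", "ProjectX", "residual", "Ti-6Al-4V", 12),
         ("MSFC-002", "fastener kit",      "ProjectY", "in stock", "A286",      40)])

    # Re-utilization check: look for residual hardware of a given material
    # before starting a new procurement.
    rows = db.execute(
        "SELECT part_no, name, qty FROM inventory "
        "WHERE status = 'residual' AND material LIKE ?", ("Ti%",)).fetchall()
    print(rows)
    ```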

  19. NNDC database migration project

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, Thomas W; Dunford, Charles L [U.S. Department of Energy, Brookhaven Science Associates (United States)

    2004-03-01

    NNDC Database Migration was necessary to replace obsolete hardware and software, to be compatible with the industry standard in relational databases (mature software, large base of supporting software for administration and dissemination and replication and synchronization tools) and to improve the user access in terms of interface and speed. The Relational Database Management System (RDBMS) consists of a Sybase Adaptive Server Enterprise (ASE), which is relatively easy to move between different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), the Structured Query Language (SQL) and administrative tools written in Java. Linux or UNIX platforms can be used. The existing ENSDF datasets are often VERY large and will need to be reworked and both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative Iγ) in the RI field (so it is not immediately obvious which of the old values has been changed). But primary and secondary intensities are now available on the same scale. The intensity normalization has been done for us. We will gain access to a large volume of data from Budapest and some of those gamma-ray intensity and energy data will be superior to what we already have.

  20. SL(C) 5 migration at CERN

    International Nuclear Information System (INIS)

    Schwickerath, Ulrich; Silva, Ricardo

    2010-01-01

    Most LCG sites are currently running on SL(C)4. However, this operating system is already rather old, and it is becoming difficult to get the hardware drivers required to get the best out of recent hardware. A possible way out is the migration to SL(C)5 based systems where possible, in combination with virtualization methods. The former is typically possible for nodes where the software to run the services is available and tested, while the latter offers a possibility to make use of the new hardware platforms whilst maintaining operating system compatibility. Since autumn 2008, CERN has offered public interactive and batch worker nodes for evaluation to the experiments. For the Grid environment, access is granted by dedicated CEs. The status of the evaluation, feedback received from the experiments and the status of the migration will be reviewed, and the status of virtualization of services at CERN will be reported. Beyond this, the migration to a new operating system also offers an excellent opportunity to upgrade the fabric infrastructure used to manage the servers.

  1. VEG-01: Veggie Hardware Verification Testing

    Science.gov (United States)

    Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond

    2013-01-01

    The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept, and the selection of fertilizer, rooting medium and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on-orbit have been established. Science verification testing was conducted and lettuce plants were successfully grown in prototype Veggie hardware, microbial samples were taken, plants were harvested, frozen, stored and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.

  2. From Open Source Software to Open Source Hardware

    OpenAIRE

    Viseur , Robert

    2012-01-01

    Part 2: Lightning Talks; International audience; The open source software principles progressively give rise to new initiatives for culture (free culture), data (open data) or hardware (open hardware). Open hardware is experiencing significant growth but the business models and legal aspects are not well known. This paper is dedicated to the economics of open hardware. We define the open hardware concept and determine intellectual property tools we can apply to open hardware, with a str...

  3. The Globalisation of migration

    OpenAIRE

    Milan Mesić

    2002-01-01

    The paper demonstrates that contemporary international migration is a constitutive part of the globalisation process. After defining the concepts of globalisation and the globalisation of migration, the author discusses six key themes, linking globalisation and international migration (“global cities”, the scale of migration; diversification of migration flows; globalisation of science and education; international migration and citizenship; emigrant communities and new identities). First, in ...

  4. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  5. Non-fuel bearing hardware melting technology

    International Nuclear Information System (INIS)

    Newman, D.F.

    1993-01-01

    Battelle has developed a portable hardware melter concept that would allow spent fuel rod consolidation operations at commercial nuclear power plants to provide significantly more storage space for other spent fuel assemblies in existing pool racks at lower cost. Using low pressure compaction, the non-fuel bearing hardware (NFBH) left over from the removal of spent fuel rods from the stainless steel end fittings and the Zircaloy guide tubes and grid spacers still occupies 1/3 to 2/5 of the volume of the consolidated fuel rod assemblies. Melting the non-fuel bearing hardware reduces its volume by a factor of 4 from that achievable with low-pressure compaction. This paper describes: (1) the configuration and design features of Battelle's hardware melter system that permit its portability, (2) the system's throughput capacity, (3) the bases for capital and operating estimates, and (4) the status of NFBH melter demonstration to reduce technical risks for implementation of the concept. Since all NFBH handling and processing operations would be conducted at the reactor site, costs for shipping radioactive hardware to and from a stationary processing facility for volume reduction are avoided. Initial licensing, testing, and installation in the field would follow the successful pattern achieved with rod consolidation technology.

  6. Dedicated memory structure holding data for detecting available worker thread(s) and informing available worker thread(s) of task(s) to execute

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, George L.; Eichenberger, Alexandre E.; O' Brien, John K. P.

    2016-12-13

    The present disclosure relates generally to a dedicated memory structure (that is, hardware device) holding data for detecting available worker thread(s) and informing available worker thread(s) of task(s) to execute.

  7. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  8. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  9. Multicore Considerations for Legacy Flight Software Migration

    Science.gov (United States)

    Vines, Kenneth; Day, Len

    2013-01-01

    In this paper we will discuss potential benefits and pitfalls when considering a migration from an existing single core code base to a multicore processor implementation. The results of this study present options that should be considered before migrating fault managers, device handlers and tasks with time-constrained requirements to a multicore flight software environment. Possible future multicore test bed demonstrations are also discussed.

  10. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
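
    For reference, the underlying algorithm (global alignment with traceback, in the style of Needleman-Wunsch) can be written in a few lines of Python; this is the computation that the processing elements accelerate, not the paper's space-efficient architecture, and the scoring values are arbitrary.

    ```python
    def global_align(a, b, match=1, mismatch=-1, gap=-2):
        """Needleman-Wunsch global alignment with full traceback."""
        n, m = len(a), len(b)
        score = [[0] * (m + 1) for _ in range(n + 1)]
        move = [[None] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0], move[i][0] = i * gap, "up"
        for j in range(1, m + 1):
            score[0][j], move[0][j] = j * gap, "left"
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                up, left = score[i-1][j] + gap, score[i][j-1] + gap
                score[i][j], move[i][j] = max((diag, "diag"), (up, "up"), (left, "left"))
        # Traceback from the bottom-right corner
        out_a, out_b, i, j = [], [], n, m
        while i > 0 or j > 0:
            if move[i][j] == "diag":
                out_a.append(a[i-1]); out_b.append(b[j-1]); i, j = i-1, j-1
            elif move[i][j] == "up":
                out_a.append(a[i-1]); out_b.append("-"); i -= 1
            else:
                out_a.append("-"); out_b.append(b[j-1]); j -= 1
        return score[n][m], "".join(reversed(out_a)), "".join(reversed(out_b))

    print(global_align("GATTACA", "GCATGCU"))
    ```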

  11. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning boosted the field of artificial intelligence towards unprecedented achievements and application in several fields. Such prominent results were made in parallel with the first successful demonstrations of fault tolerant hardware for quantum information processing. To what extent deep learning can take advantage of the existence of a hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards implementation of advanced quantum algorithms, including quantum deep learning.

  12. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  13. Kokkos' Task DAG Capabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Harold C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ibanez, Daniel Alejandro [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    This report documents the ASC/ATDM Kokkos deliverable "Production Portable Dynamic Task DAG Capability." This capability enables applications to create and execute a dynamic task DAG: a collection of heterogeneous computational tasks with a directed acyclic graph (DAG) of "execute after" dependencies, where tasks and their dependencies are dynamically created and destroyed as tasks execute. The Kokkos task scheduler executes the dynamic task DAG on the target execution resource; e.g. a multicore CPU, a manycore CPU such as Intel's Knights Landing (KNL), or an NVIDIA GPU. Several major technical challenges had to be addressed during development of Kokkos' Task DAG capability: (1) portability to a GPU with its simplified hardware and micro-runtime, (2) thread-scalable memory allocation and deallocation from a bounded pool of memory, (3) thread-scalable scheduler for dynamic task DAGs, (4) usability by applications.
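
    The "execute after" dependency model can be sketched independently of the Kokkos C++ API; the Python below is an illustrative analogue in which running tasks may spawn further tasks, and a task becomes ready only when all of its predecessors have completed.

    ```python
    from collections import defaultdict, deque

    class TaskDAG:
        """Minimal dynamic task-DAG executor: tasks with "execute after"
        dependencies, where running tasks may spawn further tasks."""
        def __init__(self):
            self.deps = {}                      # task -> unfinished predecessors
            self.succ = defaultdict(list)       # task -> dependents
            self.body = {}
            self.ready = deque()

        def spawn(self, name, fn, after=()):
            self.body[name] = fn
            self.deps[name] = len(after)
            for p in after:
                self.succ[p].append(name)
            if not after:
                self.ready.append(name)

        def run(self):
            while self.ready:
                t = self.ready.popleft()
                self.body[t](self)              # the body may call spawn() itself
                for s in self.succ[t]:
                    self.deps[s] -= 1
                    if self.deps[s] == 0:
                        self.ready.append(s)

    dag = TaskDAG()
    dag.spawn("load",  lambda d: print("load"))
    dag.spawn("part1", lambda d: print("part1"), after=["load"])
    dag.spawn("part2", lambda d: (print("part2"),
                                  d.spawn("extra", lambda d: print("extra"))),
              after=["load"])
    dag.spawn("reduce", lambda d: print("reduce"), after=["part1", "part2"])
    dag.run()   # execution order respects the "execute after" dependencies
    ```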

  14. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on github, you do not ask yourself if somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  15. Hardware and layout aspects affecting maintainability

    International Nuclear Information System (INIS)

    Jayaraman, V.N.; Surendar, Ch.

    1977-01-01

    It has been found from maintenance experience at the Rajasthan Atomic Power Station that proper hardware and instrumentation layout can reduce maintenance and down-time on the related equipment. The problems faced in this connection, and how they were solved, are described. (M.G.B.)

  16. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for high energy physics large spectrometers and control systems is reviewed as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of developed CAMAC systems are described. (author)

  17. Design of hardware accelerators for demanding applications.

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2010-01-01

    This paper focuses on mastering the architecture development of hardware accelerators. It presents the results of our analysis of the main issues that have to be addressed when designing accelerators for modern demanding applications, using the accelerator design for LDPC decoding as an example.

  18. Building Correlators with Many-Core Hardware

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.

    2010-01-01

    Radio telescopes typically consist of multiple receivers whose signals are cross-correlated to filter out noise. A recent trend is to correlate in software instead of custom-built hardware, taking advantage of the flexibility that software solutions offer. Examples include e-VLBI and LOFAR. However,
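
    The core correlation step can be sketched for a single baseline as follows; this is an illustrative FX-style software correlator in Python, not the e-VLBI or LOFAR code, and the channel count and signals are made up.

    ```python
    import numpy as np

    def correlate_baseline(x, y, n_channels=64):
        """Correlate two receiver streams on one baseline, FX-style:
        channelize each stream with an FFT (the 'F' step), then multiply one
        spectrum by the conjugate of the other and integrate (the 'X' step)."""
        n = (len(x) // n_channels) * n_channels
        xs = np.fft.fft(x[:n].reshape(-1, n_channels), axis=1)
        ys = np.fft.fft(y[:n].reshape(-1, n_channels), axis=1)
        return (xs * np.conj(ys)).mean(axis=0)  # integrated cross-spectrum

    rng = np.random.default_rng(1)
    common = rng.normal(size=4096)              # sky signal seen by both receivers
    a = common + 0.5 * rng.normal(size=4096)    # receiver noise is uncorrelated
    b = common + 0.5 * rng.normal(size=4096)
    spectrum = correlate_baseline(a, b)
    print(spectrum.shape, abs(spectrum).mean())
    ```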

  19. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  20. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration

  1. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to SLS Rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs); fixed-length identifiers used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  2. Digital Hardware Design Teaching: An Alternative Approach

    Science.gov (United States)

    Benkrid, Khaled; Clayton, Thomas

    2012-01-01

    This article presents the design and implementation of a complete review of undergraduate digital hardware design teaching in the School of Engineering at the University of Edinburgh. Four guiding principles have been used in this exercise: learning-outcome driven teaching, deep learning, affordability, and flexibility. This has identified…

  3. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system is described that will be used for on-line filter and second stage trigger applications. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on hardware aspects, in particular the modularity, processor communication and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  4. Facilitating preemptive hardware system design using partial reconfiguration techniques.

    Science.gov (United States)

    Dondo Gazzano, Julio; Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implement preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Besides, an asynchronous event can demand immediate attention and then force launching a reconfiguration process for high-priority task implementation. If the asynchronous event is previously scheduled, an explicit activation of the reconfiguration process is performed. If the event cannot be previously programmed, such as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the necessary tasks to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration.

  5. Space station common module power system network topology and hardware development

    Science.gov (United States)

    Landis, D. M.

    1985-01-01

    Candidate power system network topologies for the space station common module are defined and developed, and the necessary hardware for test and evaluation is provided. Martin Marietta's approach to performing the proposed program is presented. Performance of the tasks described will assure systematic development and evaluation of program results, and will provide the necessary management tools, visibility, and control techniques for performance assessment. The plan is submitted in accordance with the data requirements given and includes a comprehensive task logic flow diagram, time-phased manpower requirements, a program milestone schedule, and detailed descriptions of each program task.

  6. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
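
    As an illustration of the pixel-linking idea behind SIRDS generation (not the texture-based GPU method described in the record), a minimal CPU sketch in Python follows; the depth map is assumed to be normalized to [0, 1], and the eye-separation and depth-scaling constants are illustrative only.

    ```python
    import numpy as np

    def make_sirds(depth, eye_sep=80, depth_scale=0.3):
        """Minimal single-image random-dot stereogram: in every row, the two pixels
        that the eyes would fuse onto the same surface point are linked and forced
        to share a random colour. depth is a 2-D array in [0, 1] (1 = nearest)."""
        h, w = depth.shape
        img = np.zeros((h, w), dtype=np.uint8)
        for y in range(h):
            same = np.arange(w)                        # each pixel starts independent
            for x in range(w):
                sep = int(eye_sep * (1.0 - depth_scale * depth[y, x]))
                left, right = x - sep // 2, x - sep // 2 + sep
                if left >= 0 and right < w:
                    l, r = left, right
                    while same[l] != l:                # find the root of each pixel
                        l = same[l]
                    while same[r] != r:
                        r = same[r]
                    if l != r:                         # link larger index to smaller
                        same[max(l, r)] = min(l, r)
            for x in range(w):                         # colour left to right
                if same[x] == x:
                    img[y, x] = 255 * np.random.randint(0, 2)
                else:
                    p = x
                    while same[p] != p:                # root is already coloured
                        p = same[p]
                    img[y, x] = img[y, p]
        return img
    ```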

  7. The hardware track finder processor in CMS at CERN

    International Nuclear Information System (INIS)

    Kluge, A.

    1997-07-01

    The work covers the design of the Track Finder Processor in the high energy experiment CMS at CERN/Geneva. The task of this processor is to identify muons and to measure their transverse momentum. The Track Finder makes it possible to determine the physical relevance of each high energetic collision and to forward only interesting data to the data analysis units. Data of more than two hundred thousand detector cells are used to determine the location of muons and to measure their transverse momentum. Each 25 ns a new data set is generated. Measurement of location and transverse momentum of the muons can be terminated within 350 ns by using an ASIC. The classical method in high energy physics experiments is to employ a pattern comparison method. The predefined patterns are compared to the found patterns. The high number of data channels and the complex requirements to the spatial detector resolution do not permit to employ a pattern comparison method. A so called track following algorithm was designed, which is able to assemble complete tracks through the whole detector starting from single track segments. Instead of storing a high number of track patterns the problem is brought back to the algorithm level. Comprehensive simulations, employing the hardware simulation language VHDL, were conducted in order to optimize the algorithm and its hardware implementation. A FPGA (field program able gate array)-prototype was designed. A feasibility study to implement the track finder processor employing ASICs was conducted. (author)

  8. Architecture and development of the CDF hardware event builder

    International Nuclear Information System (INIS)

    Shaw, T.M.; Booth, A.W.; Bowden, M.

    1989-01-01

    A hardware Event Builder (EVB) has been developed for use at the Collider Detector experiment at Fermi National Accelerator Laboratory (CDF). The Event Builder presently consists of five FASTBUS modules and has the task of reading out the front-end scanners, reformatting the data into the YBOS bank structure, and transmitting the data to a Level 3 (L3) trigger system which is composed of multiple VME processing nodes. The Event Builder receives its instructions from a VAX-based Buffer Manager (BFM) program via a Unibus Processor Interface (UPI). The Buffer Manager instructs the Event Builder to read out one of the four CDF front-end buffers. The Event Builder then informs the Buffer Manager when the event has been formatted and is then instructed to push it up to the L3 trigger system. Once in the L3 system, a decision is made as to whether to write the event to tape

  9. 4273π: bioinformatics education on low cost ARM hardware.

    Science.gov (United States)

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  10. Trainable hardware for dynamical computing using error backpropagation through physical media.

    Science.gov (United States)

    Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter

    2015-03-24

    Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation, a crucial step for tuning such systems towards a specific task, can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.

  11. Fuel cell hardware-in-loop

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.M.; Randolf, G.; Virji, M. [University of Hawaii, Hawaii Natural Energy Institute (United States); Hauer, K.H. [Xcellvision (Germany)

    2006-11-08

    Hardware-in-loop (HiL) methodology is well established in the automotive industry. One typical application is the development and validation of control algorithms for drive systems by simulating the vehicle plus the vehicle environment in combination with specific control hardware as the HiL component. This paper introduces the use of a fuel cell HiL methodology for fuel cell and fuel cell system design and evaluation-where the fuel cell (or stack) is the unique HiL component that requires evaluation and development within the context of a fuel cell system designed for a specific application (e.g., a fuel cell vehicle) in a typical use pattern (e.g., a standard drive cycle). Initial experimental results are presented for the example of a fuel cell within a fuel cell vehicle simulation under a dynamic drive cycle. (author)

  12. Hardware and software status of QCDOC

    International Nuclear Information System (INIS)

    Boyle, P.A.; Chen, D.; Christ, N.H.; Clark, M.; Cohen, S.D.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Mawhinney, R.D.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2004-01-01

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% on machines with several 10,000 nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enable QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained in real hardware as well as in simulation

  13. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  14. A Scalable Approach for Hardware Semiformal Verification

    OpenAIRE

    Grimm, Tomas; Lettnin, Djones; Hübner, Michael

    2018-01-01

    The current verification flow of complex systems uses different engines synergistically: virtual prototyping, formal verification, simulation, emulation and FPGA prototyping. However, none is able to verify a complete architecture. Furthermore, hybrid approaches aiming at complete verification use techniques that lower the overall complexity by increasing the abstraction level. This work focuses on the verification of complex systems at the RT level to handle the hardware peculiarities. Our r...

  15. Hardware Design of a Smart Meter

    OpenAIRE

    Ganiyu A. Ajenikoko; Anthony A. Olaomi

    2014-01-01

    Smart meters are electronic measurement devices used by utilities to communicate information for billing customers and operating their electric systems. This paper presents the hardware design of a smart meter. Sensing and circuit protection circuits are included in the design of the smart meter, in which resistors are naturally a fundamental part of the electronic design. Smart meters provide a route for energy savings, real-time pricing, automated data collection and elimina...

  16. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  17. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    CPUs and GPGPUs. About the speaker Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat...

  18. High exposure rate hardware ALARA plan

    International Nuclear Information System (INIS)

    Nellesen, A.L.

    1996-10-01

    This as-low-as-reasonably-achievable (ALARA) review provides a description of the engineering and administrative controls used to manage personnel exposure and to control contamination levels and airborne radioactivity concentrations. HERH waste is hardware found in the N-Fuel Storage Basin that has a contact dose rate greater than 1 R/hr, together with used filters. This waste will be collected in the fuel baskets at various locations in the basins

  19. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), a simulation model for fault injection is developed in this work to estimate the dependability of a digital system in its operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and into memory cells. The fault locations cover all registers and memory cells, and the fault distribution over locations is chosen randomly from a uniform probability distribution. Using this model, we have predicted the reliability and the masking effect of application software in a digital system, the Interposing Logic System (ILS), in a nuclear power plant. Four software operational profiles were considered. From the results it was found that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values depending on the operational profile
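
    As a rough illustration of the two fault models mentioned above (single bit-flip and stuck-at-x), a minimal Python sketch is given below; the register width and the random choice of bit are illustrative assumptions, not part of the original VHDL model.

    ```python
    import random

    def inject_bit_flip(word, width=32):
        """Single bit-flip fault: invert one randomly chosen bit of a register value."""
        bit = random.randrange(width)
        return word ^ (1 << bit)

    def inject_stuck_at(word, bit, stuck_value):
        """Stuck-at-x fault: force a given bit position to a constant 0 or 1."""
        if stuck_value:
            return word | (1 << bit)
        return word & ~(1 << bit)

    # Example: corrupt a 32-bit register value before the software model reads it back.
    faulty = inject_bit_flip(0xDEADBEEF)
    ```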

  20. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Full Text Available Scientific technical courses are an important component in any student's education. These courses are usually characterised by the fact that the students execute experiments in special laboratories. This leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it doesn't seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab, which makes student participation possible at any time and from any place. This lab nevertheless conveys a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components, corresponding to a fully equipped laboratory workstation, which are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically. Judging and marking are also both performed electronically. Since 2003 the Mobile Hardware Lab has been offered in a completely web-based form.

  1. Instrument hardware and software upgrades at IPNS

    International Nuclear Information System (INIS)

    Worlton, Thomas; Hammonds, John; Mikkelson, D.; Mikkelson, Ruth; Porter, Rodney; Tao, Julian; Chatterjee, Alok

    2006-01-01

    IPNS is in the process of upgrading their time-of-flight neutron scattering instruments with improved hardware and software. The hardware upgrades include replacing old VAX Qbus and Multibus-based data acquisition systems with new systems based on VXI and VME. Hardware upgrades also include expanded detector banks and new detector electronics. Old VAX Fortran-based data acquisition and analysis software is being replaced with new software as part of the ISAW project. ISAW is written in Java for ease of development and portability, and is now used routinely for data visualization, reduction, and analysis on all upgraded instruments. ISAW provides the ability to process and visualize the data from thousands of detector pixels, each having thousands of time channels. These operations can be done interactively through a familiar graphical user interface or automatically through simple scripts. Scripts and operators provided by end users are automatically included in the ISAW menu structure, along with those distributed with ISAW, when the application is started

  2. Human Factors Considerations in Migration of Unmanned Aircraft System (UAS) Operator Control

    National Research Council Canada - National Science Library

    Tvaryanas, Anthony P

    2006-01-01

    ..., or both. There are potential advantages to control migration to include mitigating operator vigilance decrements and fatigue, facilitating operator task specialization, and optimizing workload during multi...

  3. MANAGING MIGRATION: TURKISH PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    İhsan GÜLAY

    2018-04-01

    Full Text Available Conducting migration studies is of vital importance to Turkey, a country which has been experiencing migration throughout history due to its “open doors policy”. The objective of this study is to evaluate the strategic management of migration in Turkey in order to deal with the issue of migration. The main focus of the study is Syrian migrants who sought refuge in Turkey due to the civil war that broke out in their country in April 2011. This study demonstrates the policies and processes followed by Turkey for Syrian migration flow in terms of the social acceptance and harmonisation of the migrants within a democratic environment. The study addresses some statistical facts and issues related to Syrian migration as it has become an integral part of daily life in Turkey. The study also reviews how human rights are protected in the migration process. The study will provide insights for developing sound strategic management policies for the migration issue.

  4. Migration and revolution

    Directory of Open Access Journals (Sweden)

    Nando Sigona

    2012-06-01

    Full Text Available The Arab Spring has not radically transformed migration patterns in the Mediterranean, and the label ‘migration crisis’ does not do justice to the composite and stratified reality.

  5. Population, migration and urbanization.

    Science.gov (United States)

    1982-06-01

    Despite recent estimates that natural increase is becoming a more important component of urban growth than rural urban transfer (excess of inmigrants over outmigrants), the share of migration in the total population growth has been consistently increasing in both developed and developing countries. From a demographic perspective, the migration process involves 3 elements: an area of origin which the mover leaves and where he or she is considered an outmigrant; the destination or place of inmigration; and the period over which migration is measured. The 2 basic types of migration are internal and international. Internal migration consists of rural to urban migration, urban to urban migration, rural to rural migration, and urban to rural migration. Among these 4 types of migration various patterns or processes are followed. Migration may be direct when the migrant moves directly from the village to the city and stays there permanently. It can be circular migration, meaning that the migrant moves to the city when it is not planting season and returns to the village when he is needed on the farm. In stage migration the migrant makes a series of moves, each to a city closer to the largest or fastest growing city. Temporary migration may be 1 time or cyclical. The most dominant pattern of internal migration is rural urban. The contribution of migration to urbanization is evident. For example, the rapid urbanization and increase in urban growth from 1960-70 in the Republic of Korea can be attributed to net migration. In Asia the largest component of the population movement consists of individuals and groups moving from 1 rural location to another. Recently, because urban centers could no longer absorb the growing number of migrants from other places, there has been increased interest in the urban to rural population redistribution. This reverse migration also has come about due to slower rates of employment growth in the urban centers and improved economic opportunities

  6. [Internal migration studies].

    Science.gov (United States)

    Stpiczynski, T

    1986-10-01

    Recent research on internal migration in Poland is reviewed. The basic sources of data, consisting of censuses or surveys, are first described. The author discusses the relationship between migration studies and other sectors of the national economy, and particularly the relationship between migration and income.

  7. Studies of the physico-chemical properties on the transuranian elements in connection with the migration phenomenon in the geosphere. Task 3 Characterization of radioactive waste forms. A series of final reports (1985-89)

    International Nuclear Information System (INIS)

    Vitorge, P.; Oliver, J.; Mangin, J.P.; Billon, A.

    1991-01-01

    In the first part, the properties of the transuranium elements (TRU), mainly Pu, Np and Am, have been investigated in terms of complexation by OH and CO3 ligands in aqueous solution. The species which are formed under given physicochemical conditions (ionic strength, concentrations...) are studied with no direct reference to well-defined ground waters. Equilibrium constants of complexes and redox potential values are useful in the drawing up of Pourbaix diagrams. Areas of stability of species as a function of Eh, pH and pCO2 are given for the different elements in a few cases. The second part of this report deals with the study of the transfer of tritiated water, cesium and strontium through highly compacted clay (diffusion coefficient). In a third part, a code for the computation of ion migration and diffusion in areas close to radwaste storage facilities has been developed. This type of application was found to require a mesh pattern and boundary conditions different from the usual ones, which justifies the writing of a new code called CONDIMENT (Convection and Diffusion of Elements). 49 figs.; 18 tabs.; 21 refs

  8. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performances achieved.
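
    The GFSR recurrence itself is compact; a minimal software model is sketched below in Python, assuming the common two-tap form x[n] = x[n-p] XOR x[n-q] with the classic (p, q) = (98, 27) trinomial (the exact parameters used in the hardware are not stated in this record).

    ```python
    class GFSR:
        """Generalized Feedback Shift Register generator: x[n] = x[n-p] XOR x[n-q]."""

        def __init__(self, seed_words, p=98, q=27):
            assert len(seed_words) == p, "need p seed words from an independent generator"
            self.state = list(seed_words)   # circular buffer of the last p outputs
            self.p, self.q = p, q
            self.idx = 0                    # position of the oldest word, x[n-p]

        def next(self):
            p, q, i = self.p, self.q, self.idx
            new = self.state[i] ^ self.state[(i + p - q) % p]   # x[n-p] ^ x[n-q]
            self.state[i] = new             # overwrite the oldest word with x[n]
            self.idx = (i + 1) % p
            return new
    ```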

  9. AFOSR BRI: Co-Design of Hardware/Software for Predicting MAV Aerodynamics

    Science.gov (United States)

    2016-09-27

    fold was extracted when applying architecture-aware GPU optimizations, resulting in a 371-fold speed-up. By also leveraging algorithmic innovation...mind the strengths of the underlying hardware architecture. Some examples include a block-sparse linear solver. • Characterization of performance...GPU, NVIDIA GPU, and Intel Xeon Phi. • Creation of a prototypical runtime system called CoreTSAR, short for Core Task-Size Adapting Runtime, that

  10. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a mySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served though REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  11. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  12. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future

  13. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have only had a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  14. Management of cladding hulls and fuel hardware

    International Nuclear Information System (INIS)

    1985-01-01

    The reprocessing of spent fuel from power reactors based on chop-leach technology produces a solid waste product of cladding hulls and other metallic residues. This report describes the current situation in the management of fuel cladding hulls and hardware. Information is presented on the material composition of such waste together with the heating effects due to neutron-induced activation products and fuel contamination. As no country has established a final disposal route and the corresponding repository, this report also discusses possible disposal routes and various disposal options under consideration at present

  15. Open Hardware for CERN's accelerator control systems

    International Nuclear Information System (INIS)

    Bij, E van der; Serrano, J; Wlostowski, T; Cattin, M; Gousiou, E; Sanchez, P Alvarez; Boccardi, A; Voumard, N; Penacoba, G

    2012-01-01

    The accelerator control systems at CERN will be upgraded and many electronics modules such as analog and digital I/O, level converters and repeaters, serial links and timing modules are being redesigned. The new developments are based on the FPGA Mezzanine Card, PCI Express and VME64x standards while the Wishbone specification is used as a system on a chip bus. To attract partners, the projects are developed in an 'Open' fashion. Within this Open Hardware project new ways of working with industry are being evaluated and it has been proven that industry can be involved at all stages, from design to production and support.

  16. Hardware for computing the integral image

    OpenAIRE

    Fernández-Berni, J.; Rodríguez-Vázquez, Ángel; Río, Rocío del; Carmona-Galán, R.

    2015-01-01

    The present invention, as stated in the title of this specification, consists of mixed-signal hardware for computing the integral image in the focal plane by means of an array of basic sensing-processing cells whose interconnection can be reconfigured through peripheral circuitry, making possible a very efficient implementation of a processing task that is very useful in computer vision, namely the computation of the integral image, in scenarios such as monit...
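
    The integral image computed by the invention is a standard construct; a minimal software reference in Python (not the focal-plane mixed-signal implementation) is sketched below.

    ```python
    import numpy as np

    def integral_image(img):
        """ii[y, x] = sum of img[0..y, 0..x]; any box sum then needs only 4 lookups."""
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, y0, x0, y1, x1):
        """Sum of the original image over the inclusive window [y0..y1, x0..x1]."""
        s = ii[y1, x1]
        if y0 > 0:
            s -= ii[y0 - 1, x1]
        if x0 > 0:
            s -= ii[y1, x0 - 1]
        if y0 > 0 and x0 > 0:
            s += ii[y0 - 1, x0 - 1]
        return s
    ```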

  17. Development of Hardware Dual Modality Tomography System

    Directory of Open Access Journals (Sweden)

    R. M. Zain

    2009-06-01

    Full Text Available The paper describes the hardware development and performance of the Dual Modality Tomography (DMT) system. DMT consists of optical and capacitance sensors. The optical sensors consist of 16 LEDs and 16 photodiodes. The Electrical Capacitance Tomography (ECT) electrode design uses eight electrode plates as the detecting sensor. The digital timing and control unit has been developed to control the light projection of the optical emitters, to switch the capacitance electrodes, and to synchronize the operation of data acquisition. As a result, the developed system is able to deliver a maximum of 529 data sets per second from the signal conditioning circuit to the computer.

  18. Fast Gridding on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2007-01-01

    is by far the most time-consuming of the three steps (Table 1). Modern graphics cards (GPUs) can be utilised as fast parallel processors provided that algorithms are reformulated as parallel solutions. The purpose of this work is to test the hypothesis that a non-Cartesian reconstruction can be efficiently implemented on graphics hardware, giving a significant speedup compared to CPU-based alternatives. We present a novel GPU implementation of the convolution step that overcomes the problems of memory bandwidth that have limited the speed of previous GPU gridding algorithms [2].
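
    For readers unfamiliar with the convolution (gridding) step being accelerated, a deliberately naive CPU sketch in Python is given below; the triangular kernel is an illustrative stand-in for the Kaiser-Bessel kernel normally used, and this is not the GPU implementation of the abstract.

    ```python
    import numpy as np

    def grid_samples(coords, values, grid_size, kernel_width=4.0):
        """Spread each non-Cartesian k-space sample onto a Cartesian grid by
        convolution with a small separable kernel (here a simple triangle)."""
        grid = np.zeros((grid_size, grid_size), dtype=complex)
        half = kernel_width / 2.0
        for (kx, ky), v in zip(coords, values):
            for gx in range(max(int(np.ceil(kx - half)), 0),
                            min(int(np.floor(kx + half)), grid_size - 1) + 1):
                wx = 1.0 - abs(gx - kx) / half
                for gy in range(max(int(np.ceil(ky - half)), 0),
                                min(int(np.floor(ky + half)), grid_size - 1) + 1):
                    wy = 1.0 - abs(gy - ky) / half
                    grid[gy, gx] += wx * wy * v
        return grid
    ```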

  19. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry- standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of offchip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  20. List search hardware for interpretive software

    CERN Document Server

    Altaber, Jacques; Mears, B; Rausch, R

    1979-01-01

    Interpreted languages, e.g. BASIC, are simple to learn, easy to use, quick to modify and in general 'user-friendly'. However, a critically time-consuming process during interpretation is that of list searching. A special microprogrammed device for fast list searching has therefore been developed at the SPS Division of CERN. It uses bit-sliced hardware. Fast algorithms perform search, insert and delete of a six-character name and its value in a list of up to 1000 pairs. The prototype shows retrieval times of the order of 10-30 microseconds. (11 refs).
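
    As a purely software analogue of the three list operations the device implements (search, insert and delete of a name/value pair), a minimal sketch in Python using a sorted list and binary search is shown below; it illustrates the operations only, not the microprogrammed bit-sliced design.

    ```python
    import bisect

    class NameValueList:
        """Sorted name/value list with binary-search lookup, insert and delete."""

        def __init__(self):
            self.names, self.values = [], []

        def search(self, name):
            i = bisect.bisect_left(self.names, name)
            if i < len(self.names) and self.names[i] == name:
                return self.values[i]
            return None                       # name not present

        def insert(self, name, value):
            i = bisect.bisect_left(self.names, name)
            if i < len(self.names) and self.names[i] == name:
                self.values[i] = value        # overwrite an existing entry
            else:
                self.names.insert(i, name)
                self.values.insert(i, value)

        def delete(self, name):
            i = bisect.bisect_left(self.names, name)
            if i < len(self.names) and self.names[i] == name:
                del self.names[i]
                del self.values[i]
    ```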

  1. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate Muon tracks in the drift tubes in real time, improving significantly the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.

  2. 2D to 3D conversion implemented in different hardware

    Science.gov (United States)

    Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli

    2015-02-01

    Conversion of available 2D data for release as 3D content is a hot topic for providers and for the success of 3D applications in general. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation but remains a very difficult problem for rendering novel images precisely. Different approaches to DM reconstruction exist, among them manual and semiautomatic methods that can produce high-quality DMs, but they are very time consuming and computationally expensive. In this paper, several hardware implementations of designed frameworks for automatic 3D color video generation based on a 2D real video sequence are proposed. The novel framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion, stereo matching via a pyramidal scheme, color segmentation by k-means on the a*b* color plane, adaptive post-filtering, DM estimation using stereo matching between left and right images (or neighboring frames in a video), adaptive post-filtering, and finally, anaglyph 3D scene generation. The novel technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC with Windows 7, and using a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The required processing times, mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
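
    The final anaglyph-generation block is simple enough to show directly; a minimal red/cyan sketch in Python is given below (the colour-mixing convention is an assumption, since the record does not specify one).

    ```python
    import numpy as np

    def make_anaglyph(left_rgb, right_rgb):
        """Red/cyan anaglyph: red channel from the left view, green and blue from the right.
        Both inputs are HxWx3 arrays of the same shape."""
        out = np.empty_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]     # R from the left image
        out[..., 1:] = right_rgb[..., 1:]  # G and B from the right image
        return out
    ```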

  3. Is Hardware Removal Recommended after Ankle Fracture Repair?

    Directory of Open Access Journals (Sweden)

    Hong-Geun Jung

    2016-01-01

    Full Text Available The indications and clinical necessity for routine hardware removal after treating ankle or distal tibia fractures with open reduction and internal fixation are disputed even when hardware-related pain is insignificant. Thus, we determined the clinical effects of routine hardware removal irrespective of the degree of hardware-related pain, especially from the perspective of patients' daily activities. This study was conducted on 80 consecutive cases (78 patients) treated by surgery and hardware removal after bony union. There were 56 ankle and 24 distal tibia fractures. Hardware-related pain, ankle joint stiffness, discomfort on ambulation, and patient satisfaction were evaluated before and at least 6 months after hardware removal. The pain score before hardware removal was 3.4 (range 0 to 6) and decreased to 1.3 (range 0 to 6) after removal. 58 patients (72.5%) experienced improved ankle stiffness, 65 (81.3%) had less discomfort while walking on uneven ground, and 63 (80.8%) were satisfied with hardware removal. These results suggest that routine hardware removal after ankle or distal tibia fracture could ameliorate hardware-related pain and improve daily activities and patient satisfaction even when the hardware-related pain is minimal.

  4. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Conclusion: Graphics hardware-based computing provides a cost-effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.

  5. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    Science.gov (United States)

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
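
    MDR itself is not described in this truncated record; for orientation, a minimal single-threaded sketch of the basic MDR pair-scoring rule (label a two-locus genotype cell high-risk when its case/control ratio exceeds the overall ratio, then score the pair by classification accuracy) is given below, assuming genotypes coded 0/1/2 and binary case/control labels. It is a CPU reference only, not the GPU-accelerated method of the paper.

    ```python
    import numpy as np
    from itertools import combinations

    def mdr_best_pair(genotypes, labels):
        """Exhaustive two-locus MDR scan.
        genotypes: (n_samples, n_snps) array coded 0/1/2; labels: 0 = control, 1 = case."""
        n, m = genotypes.shape
        overall = labels.sum() / max(1, (labels == 0).sum())   # overall case/control ratio
        best_pair, best_acc = None, 0.0
        for i, j in combinations(range(m), 2):
            cells = genotypes[:, i] * 3 + genotypes[:, j]       # 9 possible genotype cells
            pred = np.zeros(n, dtype=int)
            for c in np.unique(cells):
                mask = cells == c
                cases = labels[mask].sum()
                ctrls = (labels[mask] == 0).sum()
                if ctrls == 0 or cases / ctrls > overall:       # high-risk cell
                    pred[mask] = 1
            acc = (pred == labels).mean()
            if acc > best_acc:
                best_pair, best_acc = (i, j), acc
        return best_pair, best_acc
    ```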

  6. CASIS Fact Sheet: Hardware and Facilities

    Science.gov (United States)

    Solomon, Michael R.; Romero, Vergel

    2016-01-01

    Vencore is a proven information solutions, engineering, and analytics company that helps our customers solve their most complex challenges. For more than 40 years, we have designed, developed and delivered mission-critical solutions as our customers' trusted partner. The Engineering Services Contract, or ESC, provides engineering and design services to the NASA organizations engaged in development of new technologies at the Kennedy Space Center. Vencore is the ESC prime contractor, with teammates that include Stinger Ghaffarian Technologies, Sierra Lobo, Nelson Engineering, EASi, and Craig Technologies. The Vencore team designs and develops systems and equipment to be used for the processing of space launch vehicles, spacecraft, and payloads. We perform flight systems engineering for spaceflight hardware and software; develop technologies that serve NASA's mission requirements and operations needs for the future. Our Flight Payload Support (FPS) team at Kennedy Space Center (KSC) provides engineering, development, and certification services as well as payload integration and management services to NASA and commercial customers. Our main objective is to assist principal investigators (PIs) integrate their science experiments into payload hardware for research aboard the International Space Station (ISS), commercial spacecraft, suborbital vehicles, parabolic flight aircrafts, and ground-based studies. Vencore's FPS team is AS9100 certified and a recognized implementation partner for the Center for Advancement of Science in Space (CASIS

  7. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM Microcontroller. · Introduces number systems and signal transmission methods · Reviews logic gates, registers, multiplexers, decoders and memory · Provides an overview and examples of the ARM instruction set · Uses Keil development tools for writing and debugging ARM assembly language programs · Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real-time clock configuration, binary input to 7-segment display, creating ...

  8. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

    The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range, including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a ‘kill switch’ to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  9. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, for example in CT/MR image reconstruction or in real-time DSA subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results on computation time and the usefulness of median filtering in radiographic imaging are given
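
    A straightforward (non-parallel) reference for the two-dimensional median filter discussed above is sketched below in Python; the window size and edge handling are illustrative choices, not those of the pipelined hardware implementation.

    ```python
    import numpy as np

    def median_filter_2d(img, k=3):
        """Naive k x k median (rank-order) filter; the median is the middle element of
        the sorted window, so other ranks would give min/max-like operators instead."""
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.empty_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                out[y, x] = np.median(padded[y:y + k, x:x + k])
        return out
    ```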

  10. NCRP soil contamination task group

    International Nuclear Information System (INIS)

    Jacobs, D.G.

    1987-01-01

    The National Council of Radiation Protection and Measurements (NCRP) has recently established a Task Group on Soil Contamination to describe and evaluate the migration pathways and modes of radiation exposure that can potentially arise due to radioactive contamination of soil. The purpose of this paper is to describe the scientific principles for evaluation of soil contamination which can be used as a basis for derivation of soil contamination limits for specific situations. This paper describes scenarios that can lead to soil contamination, important characteristics of soil contamination, the subsequent migration pathways and exposure modes, and the application of principles in the report in deriving soil contamination limits. The migration pathways and exposure modes discussed in this paper include: direct radiation exposure; and exhalation of gases

  11. Software and Hardware control of a hybrid robot for switching between leg-type and wheel-type modes

    OpenAIRE

    Botelho, Wagner Tanaka; Okada, Tokuji; Mahmoud, Abeer; Shimizu, Toshimi

    2011-01-01

    One of the objectives of the paper is to describe the hybrid robot PEOPLER-II (Perpendicularly Oriented Planetary Legged Robot) with regard to switching between leg-type and wheel-type modes. Our robot has a simpler design and control system than other hybrid robots. The software and hardware control involved in performing five robot tasks is considered: walking, rolling, switching, turning, and spinning. For the switching task, we show the control method based on minimization of...

  12. Radon depth migration

    International Nuclear Information System (INIS)

    Hildebrand, S.T.; Carroll, R.J.

    1993-01-01

    A depth migration method is presented that uses Radon-transformed common-source seismograms as input. It is shown that the Radon depth migration method can be extended to spatially varying velocity depth models by using asymptotic ray theory (ART) to construct wavefield continuation operators. These operators downward continue an incident receiver-array plane wave and an assumed point-source wavefield into the subsurface. The migration velocity model is constrained to have longer characteristic wavelengths than the dominant source wavelength such that the ART approximations for the continuation operators are valid. This method is used successfully to migrate two synthetic data examples: (1) a point diffractor, and (2) a dipping layer and syncline interface model. It is shown that the Radon migration method has a computational advantage over the standard Kirchhoff migration method in that fewer rays are computed in a main memory implementation
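
    The Radon-transformed input referred to above is a slant-stack (tau-p) of each common-source gather; a minimal discrete sketch in Python is given below, with the sampling intervals and the non-negative slowness range as illustrative assumptions.

    ```python
    import numpy as np

    def slant_stack(gather, dt, dx, slownesses):
        """Linear Radon (tau-p) transform of a common-source gather:
        u(p, tau) = sum over offsets x of d(x, t = tau + p * x).
        gather: (n_traces, n_samples); returns (n_p, n_samples)."""
        n_traces, n_samples = gather.shape
        offsets = np.arange(n_traces) * dx
        panel = np.zeros((len(slownesses), n_samples))
        for ip, p in enumerate(slownesses):
            for tr in range(n_traces):
                s = int(round(p * offsets[tr] / dt))   # time shift in samples
                if 0 <= s < n_samples:
                    panel[ip, : n_samples - s] += gather[tr, s:]
        return panel
    ```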

  13. Migration into art

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    This book addresses a topic of increasing importance to artists, art historians and scholars of cultural studies, migration studies and international relations: migration as a profoundly transforming force that has remodelled artistic and art institutional practices across the world. It explores...... contemporary art's critical engagement with migration and globalisation as a key source for improving our understanding of how these processes transform identities, cultures, institutions and geopolitics. The author explores three interwoven issues of enduring interest: identity and belonging, institutional...

  14. Mobile Thread Task Manager

    Science.gov (United States)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin J.

    2013-01-01

    The Mobile Thread Task Manager (MTTM) is being applied to parallelizing existing flight software to understand the benefits and to develop new techniques and architectural concepts for adapting software to multicore architectures. It allocates and load-balances tasks for a group of threads that migrate across processors to improve cache performance. In order to balance load across threads, the MTTM augments a basic map-reduce strategy to draw jobs from a global queue. In a multicore processor, memory may be "homed" to the cache of a specific processor and must be accessed from that processor. The MTTM architecture wraps access to data with thread management to move threads to the home processor for that data so that the computation follows the data in an attempt to avoid L2 cache misses. Cache homing is also handled by a memory manager that translates identifiers to processor IDs where the data will be homed (according to rules defined by the user). The user can also specify the number of threads and processors separately, which is important for tuning performance for different patterns of computation and memory access. MTTM efficiently processes tasks in parallel on a multiprocessor computer. It also provides an interface to make it easier to adapt existing software to a multiprocessor environment.
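
    A minimal software sketch of the global-queue load-balancing idea described above might look as follows; it is not the MTTM API, and the data-homing/thread-migration part is deliberately left out. All names are illustrative.

    ```python
    import queue
    import threading

    def run_tasks(tasks, num_threads=4):
        """Minimal load-balancing sketch: worker threads repeatedly draw jobs from a
        single global queue, so a slow job on one thread does not idle the others.
        (The real MTTM additionally migrates threads to the processor where the job's
        data is homed; that part is not modelled here.)"""
        jobs = queue.Queue()
        for t in tasks:
            jobs.put(t)
        results, lock = [], threading.Lock()

        def worker():
            while True:
                try:
                    job = jobs.get_nowait()   # pull the next job from the global queue
                except queue.Empty:
                    return
                r = job()                     # each task is assumed to be a callable
                with lock:
                    results.append(r)

        threads = [threading.Thread(target=worker) for _ in range(num_threads)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return results
    ```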

  15. Overview of job and task analysis

    International Nuclear Information System (INIS)

    Gertman, D.I.

    1984-01-01

    During the past few years the nuclear industry has become concerned with predicting human performance in nuclear power plants. One of the best means available at the present time to make sure that training, procedures, job performance aids and plant hardware match the capabilities and limitations of personnel is by performing a detailed analysis of the tasks required in each job position. The approved method for this type of analysis is referred to as job or task analysis. Job analysis is a broader type of analysis and is usually thought of in terms of establishing overall performance objectives, and in establishing a basis for position descriptions. Task analysis focuses on the building blocks of task performance, task elements, and places them within the context of specific performance requirements including time to perform, feedback required, special tools used, and required systems knowledge. The use of task analysis in the nuclear industry has included training validation, preliminary risk screening, and procedures development

  16. Handbook of hardware/software codesign

    CERN Document Server

    Teich, Jürgen

    2017-01-01

    This handbook presents fundamental knowledge on the hardware/software (HW/SW) codesign methodology. Contributing expert authors look at key techniques in the design flow as well as selected codesign tools and design environments, building on basic knowledge to consider the latest techniques. The book enables readers to gain real benefits from the HW/SW codesign methodology through explanations and case studies which demonstrate its usefulness. Readers are invited to follow the progress of design techniques through this work, which assists them in following current research directions and learning about state-of-the-art techniques. Students and researchers will appreciate the wide spectrum of subjects that belong to the design methodology covered in this handbook.

  17. Battery Management System Hardware Concepts: An Overview

    Directory of Open Access Journals (Sweden)

    Markus Lelie

    2018-03-01

    Full Text Available This paper focuses on the hardware aspects of battery management systems (BMS) for electric vehicle and stationary applications. The purpose is to give an overview of existing concepts in state-of-the-art systems and to enable the reader to estimate what has to be considered when designing a BMS for a given application. After a short analysis of general requirements, several possible topologies for battery packs and their consequences for the BMS’s complexity are examined. Four battery packs that were taken from commercially available electric vehicles are shown as examples. Later, implementation aspects regarding the measurement of the needed physical variables (voltage, current, temperature, etc.) are discussed, as well as balancing issues and strategies. Finally, safety considerations and reliability aspects are investigated.

  18. EPICS: Allen-Bradley hardware reference manual

    International Nuclear Information System (INIS)

    Nawrocki, G.

    1993-01-01

    This manual covers the following hardware: Allen-Bradley 6008-SV VMEbus I/O scanner; Allen-Bradley universal I/O chassis 1771-A1B, -A2B, -A3B, and -A4B; Allen-Bradley power supply module 1771-P4S; Allen-Bradley 1771-ASB remote I/O adapter module; Allen-Bradley 1771-IFE analog input module; Allen-Bradley 1771-OFE analog output module; Allen-Bradley 1771-IG(D) TTL input module; Allen-Bradley 1771-OG(D) TTL output module; Allen-Bradley 1771-IQ DC selectable input module; Allen-Bradley 1771-OW contact output module; Allen-Bradley 1771-IBD DC (10--30V) input module; Allen-Bradley 1771-OBD DC (10--60V) output module; Allen-Bradley 1771-IXE thermocouple/millivolt input module; and the Allen-Bradley 2705 RediPANEL push button module

  19. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
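
    As a rough sketch of the partitioning idea (not the patented implementation), the following splits a complete binary tree of compute nodes into non-overlapping test levels and subtree test cells; the heap-style node numbering, the two-tier level height, and all names are assumptions of this sketch.

    ```python
    def test_cells(depth, tiers_per_level=2):
        """Group adjacent tiers of a complete binary tree (root = tier 0) into
        non-overlapping test levels, and treat every node on the first tier of a
        level as the root of one test cell containing its descendants in that level.
        Nodes are numbered heap-style, with the root as node 1."""
        cells = []
        for top in range(0, depth + 1, tiers_per_level):
            bottom = min(top + tiers_per_level - 1, depth)
            for root in range(2 ** top, 2 ** (top + 1)):          # roots on tier `top`
                members = [n for tier in range(top, bottom + 1)
                             for n in range(2 ** tier, 2 ** (tier + 1))
                             if (n >> (tier - top)) == root]      # descendants of root
                cells.append((root, members))
        return cells

    # An uplink test would then send a message from every member of a cell to its cell
    # root; a downlink test does the reverse, narrowing a faulty link down to one cell.
    ```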

  20. Theorem Proving in Intel Hardware Design

    Science.gov (United States)

    O'Leary, John

    2009-01-01

    For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.

  1. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
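
    As a behavioural illustration of the probabilistic neuron idea (the paper's actual design is a digital FPGA circuit, not software), the following sketch encodes a neuron's activation as a Bernoulli spiking probability; the sigmoid activation and all names are assumptions of this sketch.

    ```python
    import numpy as np

    def stochastic_spikes(inputs, weights, steps=1000, rng=np.random.default_rng(0)):
        """Sketch of a stochastic spiking neuron: the weighted input sum sets a spiking
        probability, and the neuron emits a Bernoulli spike train whose mean rate
        approximates a sigmoid of that sum."""
        drive = float(np.dot(inputs, weights))
        p_spike = 1.0 / (1.0 + np.exp(-drive))      # probability of a spike per step
        spikes = rng.random(steps) < p_spike         # stochastic (Bernoulli) spike train
        return spikes.mean()                         # observed rate approximates p_spike

    # e.g. stochastic_spikes([1.0, 0.5], [0.8, -0.2]) is close to sigmoid(0.7), about 0.67
    ```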

  2. Communication Estimation for Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a general high level estimation model of communication throughput for the implementation of a given communication protocol. The model, which is part of a larger model that includes component price, software driver object code size and hardware driver area, is intended...... to be general enough to be able to capture the characteristics of a wide range of communication protocols and yet to be sufficiently detailed as to allow the designer or design tool to efficiently explore tradeoffs between throughput, bus widths, burst/non-burst transfers and data packing strategies. Thus...... it provides a basis for decision making with respect to communication protocols/components and communication driver design in the initial design space exploration phase of a co-synthesis process where a large number of possibilities must be examined and where fast estimators are therefore necessary. The fill...

  3. The double Chooz hardware trigger system

    Energy Technology Data Exchange (ETDEWEB)

    Cucoanes, Andi; Beissel, Franz; Reinhold, Bernd; Roth, Stefan; Stahl, Achim; Wiebusch, Christopher [RWTH Aachen (Germany)

    2008-07-01

    The double Chooz neutrino experiment aims to improve the present knowledge of the θ_13 mixing angle using two similar detectors placed at approximately 280 m and 1 km, respectively, from the Chooz power plant reactor cores. The detectors measure the disappearance of reactor antineutrinos. The hardware trigger has to be very efficient for antineutrinos as well as for various types of background events. The triggering condition is based on discriminated PMT sum signals and on the multiplicity of groups of PMTs. The talk gives an outlook on the double Chooz experiment and explains the requirements of the trigger system. The resulting concept and its performance are shown, as well as first results from a prototype system.

  4. Regional Redistribution and Migration

    DEFF Research Database (Denmark)

    Manasse, Paolo; Schultz, Christian

    We study a model with free migration between a rich and a poor region. Since there is congestion, the rich region has an incentive to give the poor region a transfer in order to reduce immigration. Faced with free migration, the rich region voluntarily chooses a transfer, which turns out...... to be equal to that a social planner would choose. Provided migration occurs in equilibrium, this conclusion holds even in the presence of moderate mobility costs. However, large migration costs will lead to suboptimal transfers in the market solution...

  5. Hardware-Efficient Design of Real-Time Profile Shape Matching Stereo Vision Algorithm on FPGA

    Directory of Open Access Journals (Sweden)

    Beau Tippetts

    2014-01-01

    Full Text Available A variety of platforms, such as micro-unmanned vehicles, are limited in the amount of computational hardware they can support due to weight and power constraints. An efficient stereo vision algorithm implemented on an FPGA would be able to minimize payload and power consumption in micro-unmanned vehicles, while providing 3D information and still leaving computational resources available for other processing tasks. This work presents a hardware design of the efficient profile shape matching stereo vision algorithm. Hardware resource usage is presented for the targeted micro-UV platform, Helio-copter, that uses the Xilinx Virtex 4 FX60 FPGA. Less than a fifth of the resources on this FPGA were used to produce dense disparity maps for image sizes up to 450 × 375, with the ability to scale up easily by increasing BRAM usage. A comparison of accuracy, speed performance, and resource usage is given against the census transform-based stereo vision FPGA implementation by Jin et al. Results show that the profile shape matching algorithm is an efficient real-time stereo vision algorithm for hardware implementation on resource-limited systems such as micro-unmanned vehicles.

  6. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 and per second. The triggering in the ATLAS detector is realized using a three level trigger approach, in which the first level (L1) is hardware based and the second (L2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design, track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the highest instantane...

  7. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 13 or 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 and per second. The triggering in the ATLAS detector is realized using a three level trigger approach, in which the first level (Level-1) is hardware based and the second (Level-2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design, track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the hig...

  8. Migration from a terrestrial network to a satellite network - Risks/constraints/payoffs

    Science.gov (United States)

    Homon, K. A.

    The migration method is regarded as a straightforward approach to a customer's requirements while addressing the known constraints. The migration plan organizes, controls, and communicates the extensive number of tasks, schedules, technical details, and node-by-node conversion details using the selected migration method as its cornerstone. It is noted that a successful migration plan must also provide the flexibility and robustness necessary to handle the unforeseen changes that will occur over the three- to four-year migration period. A baseline network provides a solid structural underpinning for the migration plan.

  9. Samtidskunst og migration

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    2010-01-01

    "Samtidskunst og migration. En oversigt over faglitteraturen" er en forskningsoversigt der gør status over hvad der hidtil er skrevet inden for det kunsthistoriske område om vor tids billedkunst og migration som politisk, socialt og kulturelt fænomen, primært i forbindelse med immigration til...

  10. The Great Migration.

    Science.gov (United States)

    Trotter, Joe William, Jr.

    2002-01-01

    Describes the migration of African Americans in the United States and the reasons why African Americans migrated from the South. Focuses on issues such as the effect of World War I, the opportunities offered in the North, and the emergence of a black industrial working class. (CMK)

  11. Migrating Art History

    DEFF Research Database (Denmark)

    Ørum, Tania

    2012-01-01

    Review of Hiroko Ikegami, The Great Migrator: Robert Rauschenberg and the Global Rise of American Art. Cambridge, Mass.: The MIT Press, 2010. 277 pages. ISBN 978-0-262-01425-0.

  12. College Student Migration.

    Science.gov (United States)

    Fenske, Robert H.; And Others

    This study examines the background characteristics of two large national samples of first-time enrolled freshmen who (a) attended college within their state of residence but away from their home community, (b) migrated to a college in an adjacent state, (c) migrated to a college in a distant state, and (d) attended college in their home community.…

  13. Migration, klima og sundhed

    DEFF Research Database (Denmark)

    Tellier, Siri; Carballo, Manuel

    2009-01-01

    Many tentative connections have been postulated between migration and climate. This article points to rural-urban migration, particularly into low elevation urban slums prone to flooding as an issue needing urgent attention by health professionals. It also notes the no-man's land in which environ...

  14. Migration in Burkina Faso

    NARCIS (Netherlands)

    Wouterse, F.S.

    2007-01-01

    Migration plays an important role in development and as a strategy for poverty reduction. A recent World Bank investigation finds a significant positive relationship between international migration and poverty reduction at the country level (Adams and Page 2003). Burkina Faso, whose conditions for

  15. Migration, Narration, Identity

    DEFF Research Database (Denmark)

    Leese, Peter

    (co-editor with Carly McLaughlin and Wladyslaw Witalisz) This book presents articles resulting from joint research on the representations of migration conducted in connection with the Erasmus Intensive Programme entitled «Migration and Narration» taught to groups of international students over...

  16. HwPMI: An Extensible Performance Monitoring Infrastructure for Improving Hardware Design and Productivity on FPGAs

    Directory of Open Access Journals (Sweden)

    Andrew G. Schmidt

    2012-01-01

    Full Text Available Designing hardware cores for FPGAs can quickly become a complicated task, difficult even for experienced engineers. With the addition of more sophisticated development tools and maturing high-level language-to-gates techniques, designs can be rapidly assembled; however, when the design is evaluated on the FPGA, the performance may not be what was expected. Therefore, an engineer may need to augment the design to include performance monitors to better understand the bottlenecks in the system or to aid in the debugging of the design. Unfortunately, identifying what to monitor and adding the infrastructure to retrieve the monitored data can be a challenging and time-consuming task. Our work alleviates this effort. We present the Hardware Performance Monitoring Infrastructure (HwPMI), which includes a collection of software tools and hardware cores that can be used to profile the current design, recommend and insert performance monitors directly into the HDL or netlist, and retrieve the monitored data with minimal invasiveness to the design. Three applications are used to demonstrate and evaluate HwPMI’s capabilities. The results are highly encouraging as the infrastructure adds numerous capabilities while requiring minimal effort by the designer and low resource overhead to the existing design.

  17. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Science.gov (United States)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2017-10-01

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  18. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Directory of Open Access Journals (Sweden)

    Marcello Benedetti

    2017-11-01

    Full Text Available Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  19. The Globalisation of migration

    Directory of Open Access Journals (Sweden)

    Milan Mesić

    2002-04-01

    Full Text Available The paper demonstrates that contemporary international migration is a constitutive part of the globalisation process. After defining the concepts of globalisation and the globalisation of migration, the author discusses six key themes linking globalisation and international migration (“global cities”; the scale of migration; diversification of migration flows; globalisation of science and education; international migration and citizenship; emigrant communities and new identities). First, in accordance with Saskia Sassen’s analysis, the author rejects the wide-spread notion that unqualified migrants have lost an (important) role in »global cities«, i.e. in the centres of the new (global) economy. Namely, the post-modern service sector cannot function without the support of a wide range of auxiliary unqualified workers. Second, a critical comparison with traditional overseas mass migration to the USA at the turn of the 19th and 20th centuries indicates that present international migration is, perhaps, less extensive – however, it is important to take into consideration various limitations that previously did not exist, and thus the present migration potential is in reality greater. Third, globalisation is more evident in a diversification of the forms of migration: the source area of migrants to the New World and Europe has expanded to include new regions in the world; new immigration areas have arisen (the Middle East, new industrial countries of the Far East, South Europe); intra-regional migration has intensified. Fourth, globalisation is linked to an increased migration of experts, and the pessimistic notion of a brain drain has been replaced by the optimistic idea of a brain gain. Fifth, contemporary international migration has been associated with a crisis of the national model of citizenship. Sixth, the interlinking of (migrant) cultural communities regardless of distance and the physical proximity of cultural centres (the

  20. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for the I and C systems of the SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications of KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with other systems, and data communication requirements that are applicable to the SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. Through the study, we described the requirements and specifications of digital systems, focusing on the microprocessor and the communication interface, and did the same for analog systems, focusing on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of KNICS.

  1. Malaysia and forced migration

    Directory of Open Access Journals (Sweden)

    Arzura Idris

    2012-06-01

    Full Text Available This paper analyzes the phenomenon of “forced migration” in Malaysia. It examines the nature of forced migration, the challenges faced by Malaysia, the policy responses and their impact on the country and upon the forced migrants. It considers forced migration as an event hosting multifaceted issues related and relevant to forced migrants and suggests that Malaysia has been preoccupied with the issue of forced migration movements. This is largely seen in various responses invoked from Malaysia due to “south-south forced migration movements.” These responses are, however, inadequate in terms of commitment to the international refugee regime. While Malaysia did respond to economic and migration challenges, the paper asserts that such efforts are futile if she ignores issues critical to forced migrants.

  2. Expert System analysis of non-fuel assembly hardware and spent fuel disassembly hardware: Its generation and recommended disposal

    International Nuclear Information System (INIS)

    Williamson, D.A.

    1991-01-01

    Almost all of the effort being expended on radioactive waste disposal in the United States is being focused on the disposal of spent nuclear fuel, with little consideration for other waste forms that will have to be disposed of in the same facilities. One area of radioactive waste that has not been addressed adequately, because it is considered a secondary part of the waste issue, is the disposal of the various Non-Fuel Bearing Components of the reactor core. These hardware components fall somewhat arbitrarily into two categories: Non-Fuel Assembly (NFA) hardware and Spent Fuel Disassembly (SFD) hardware. This work provides a detailed examination of the generation and disposal of NFA hardware and SFD hardware by the nuclear utilities of the United States as it relates to the Civilian Radioactive Waste Management Program. All available sources of data on NFA and SFD hardware are analyzed with particular emphasis given to the Characteristics Data Base developed by Oak Ridge National Laboratory and the characterization work performed by Pacific Northwest Laboratories and Rochester Gas & Electric. An Expert System developed as a portion of this work is used to assist in the prediction of quantities of NFA hardware and SFD hardware that will be generated by the United States' utilities. Finally, the hardware waste management practices of the United Kingdom, France, Germany, Sweden, and Japan are studied for possible application to the disposal of domestic hardware wastes. As a result of this work, a general classification scheme for NFA and SFD hardware was developed. Only NFA and SFD hardware constructed of zircaloy and experiencing a burnup of less than 70,000 MWD/MTIHM, and PWR control rods constructed of stainless steel, are considered Low-Level Waste. All other hardware is classified as Greater-Than-Class-C waste.

  3. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts, there is interest and the open source hardware has gained visible momentum recently, with several well-known universities including UC Berkeley, Cambridge and ETH Zürich actively working on large projects involving open source hardware, attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  4. Support for NUMA hardware in HelenOS

    OpenAIRE

    Horký, Vojtěch

    2011-01-01

    The goal of this master thesis is to extend HelenOS operating system with the support for ccNUMA hardware. The text of the thesis contains a brief introduction to ccNUMA hardware, an overview of NUMA features and relevant features of HelenOS (memory management, scheduling, etc.). The thesis analyses various design decisions of the implementation of NUMA support -- introducing the hardware topology into the kernel data structures, propagating this information to user space, thread affinity to ...

  5. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors’ approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft-error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  6. Environmental Friendly Coatings and Corrosion Prevention For Flight Hardware Project

    Science.gov (United States)

    Calle, Luz

    2014-01-01

    Identify, test and develop qualification criteria for environmentally friendly corrosion protective coatings and corrosion preventive compounds (CPCs) for flight hardware and ground support equipment.

  7. Open Hardware For CERN's Accelerator Control Systems

    CERN Document Server

    van der Bij, E; Ayass, M; Boccardi, A; Cattin, M; Gil Soriano, C; Gousiou, E; Iglesias Gonsálvez, S; Penacoba Fernandez, G; Serrano, J; Voumard, N; Wlostowski, T

    2011-01-01

    The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned as the modules they will replace cannot be bought anymore or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall around 120 modules are supported that are used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available, while most of them had been specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have for each board access to the full hardware design and its firmware so that problems could quickly be resolved by CERN engineers or its ...

  8. Magnetic qubits as hardware for quantum computers

    International Nuclear Information System (INIS)

    Tejada, J.; Chudnovsky, E.; Barco, E. del

    2000-01-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0> and |1> are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0>, and antisymmetric, |1>, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0> and |1>. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  9. Magnetic qubits as hardware for quantum computers

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, J.; Chudnovsky, E.; Barco, E. del [and others

    2000-07-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0> and |1> are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0>, and antisymmetric, |1>, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0> and |1>. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  10. Nanorobot Hardware Architecture for Medical Defense

    Directory of Open Access Journals (Sweden)

    Luiz C. Kretly

    2008-05-01

    Full Text Available This work presents a new approach with details on the integrated platform and hardware architecture for nanorobot application in epidemic control, which should enable real time in vivo prognosis of biohazard infection. The recent developments in the field of nanoelectronics, with transducers progressively shrinking down to smaller sizes through nanotechnology and carbon nanotubes, are expected to result in innovative biomedical instrumentation possibilities, with new therapies and efficient diagnosis methodologies. The use of integrated systems, smart biosensors, and programmable nanodevices is advancing nanoelectronics, enabling the progressive research and development of molecular machines. It should provide high precision pervasive biomedical monitoring with real time data transmission. The use of nanobioelectronics as embedded systems is the natural pathway towards a manufacturing methodology that can bring nanorobot applications out of the laboratory as soon as possible. To demonstrate the practical application of medical nanorobotics, a 3D simulation based on clinical data addresses how to integrate communication with nanorobots using RFID, mobile phones, and satellites, applied to long distance ubiquitous surveillance and health monitoring for troops in conflict zones. Therefore, the current model can also be used to help prevent an epidemic and protect a population in the case of a targeted epidemic disease.

  11. Hardware upgrade for A2 data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Ostrick, Michael; Gradl, Wolfgang; Otte, Peter-Bernd; Neiser, Andreas; Steffen, Oliver; Wolfes, Martin; Koerner, Tito [Institut fuer Kernphysik, Mainz (Germany); Collaboration: A2-Collaboration

    2014-07-01

    The A2 Collaboration uses an energy tagged photon beam which is produced via bremsstrahlung off the MAMI electron beam. The detector system consists of Crystal Ball and TAPS and covers almost the whole solid angle. A frozen-spin polarized target allows to perform high precision measurements of polarization observables in meson photo-production. During the last summer, a major upgrade of the data acquisition system was performed, both on the hardware and the software side. The goal of this upgrade was increased reliability of the system and an improvement in the data rate to disk. By doubling the number of readout CPUs and employing special VME crates with a split backplane, the number of bus accesses per readout cycle and crate was cut by a factor of two, giving almost a factor of two gain in the readout rate. In the course of the upgrade, we also switched most of the detector control system to using the distributed control system EPICS. For the upgraded control system, some new tools were developed to make full use of the capabilities of this decentralised slow control and monitoring system. The poster presents some of the major contributions to this project.

  12. Test system design for Hardware-in-Loop evaluation of PEM fuel cells and auxiliaries

    Energy Technology Data Exchange (ETDEWEB)

    Randolf, Guenter; Moore, Robert M. [Hawaii Natural Energy Institute, University of Hawaii, Honolulu, HI (United States)

    2006-07-14

    In order to evaluate the dynamic behavior of proton exchange membrane (PEM) fuel cells and their auxiliaries, the dynamic capability of the test system must exceed the dynamics of the fastest component within the fuel cell or auxiliary component under test. This criterion is even more critical when a simulated component of the fuel cell system (e.g., the fuel cell stack) is replaced by hardware and Hardware-in-Loop (HiL) methodology is employed. This paper describes the design of a very fast dynamic test system for fuel cell transient research and HiL evaluation. The integration of the real time target (which runs the simulation), the test stand PC (that controls the operation of the test stand), and the programmable logic controller (PLC), for safety and low-level control tasks, into one single integrated unit is successfully completed. (author)

  13. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain by Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to a Nios-II processor using Altera's Avalon Memory Mapped protocol.

  14. Semantics-Driven Migration of Java Programs: a Practical Experience

    Directory of Open Access Journals (Sweden)

    Artyom O. Aleksyuk

    2017-01-01

    Full Text Available The purpose of the study is to demonstrate the feasibility of automated code migration to a new set of programming libraries. Code migration is a common task in modern software projects. For example, it may arise when a project should be ported to a more secure or feature-rich library, a new platform or a new version of an already used library. The developed method and tool are based on a formalism for describing library semantics previously created by the authors. The formalism specifies library behaviour by using a system of extended finite state machines (EFSM). This paper outlines the metamodel designed to specify library descriptions and proposes an easy-to-use domain-specific language (DSL), which can be used to define models for particular libraries. The mentioned metamodel directly shapes the code migration procedure. The process of migration is split into five steps, and each step is described in the paper. The procedure uses an algorithm based on breadth-first search, extended for the needs of the migration task. Models and algorithms were implemented in the prototype of an automated code migration tool. The prototype was tested on both artificial code examples and a real-world open source project. The article describes the experiments performed, the difficulties that arose in the process of migrating the test samples, and how they are solved in the proposed procedure. The results of the experiments indicate that code migration can be successfully automated.
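
    As a rough illustration of the search step mentioned above (not the authors' tool or metamodel), the following sketch runs a breadth-first search over a toy state machine whose transitions are labelled with API calls; the EFSM guards and data updates of the real formalism are omitted, and all names are invented for the example.

    ```python
    from collections import deque

    def find_call_sequence(transitions, start, goal):
        """Breadth-first search for the shortest sequence of API calls that drives a
        library model from `start` to `goal`. `transitions` maps a state to a list of
        (call_label, next_state) pairs."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, calls = frontier.popleft()
            if state == goal:
                return calls
            for call, nxt in transitions.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, calls + [call]))
        return None

    # Toy target-library model: the handle must be opened before it can be written.
    transitions = {
        "closed": [("open()", "open")],
        "open":   [("write()", "dirty"), ("close()", "closed")],
        "dirty":  [("flush()", "open")],
    }
    print(find_call_sequence(transitions, "closed", "dirty"))  # ['open()', 'write()']
    ```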

  15. Migration and AIDS.

    Science.gov (United States)

    1998-01-01

    This article presents the perspectives of UNAIDS and the International Organization for Migration (IOM) on migration and HIV/AIDS. It identifies research and action priorities and policy issues, and describes the current situation in major regions of the world. Migration is a process. Movement is enhanced by air transport, rising international trade, deregulation of trade practices, and opening of borders. Movements are restricted by laws and statutes. Denial of the freedom to circulate and to obtain asylum is associated with vulnerability to HIV infection. A UNAIDS policy paper in 1997 and IOM policy guidelines in 1988 affirm that refugees and asylum seekers should not be targeted for special measures due to HIV/AIDS. There is an urgent need to provide primary health services for migrants, voluntary counseling and testing, and more favorable conditions. Research is needed on the role of migration in the spread of HIV, the extent of migration, availability of health services, and options for HIV prevention. Research must be action-oriented and focused on vulnerability to HIV and risk taking behavior. There is substantial mobility in West and Central Africa, economic migration in South Africa, and nonvoluntary migration in Angola. Sex workers in southeast Asia contribute to the spread. The breakup of the USSR led to population shifts. Migrants in Central America and Mexico move north to the US where HIV prevalence is higher.

  16. Labor migration in Asia.

    Science.gov (United States)

    Martin, P L

    1991-01-01

    "A recent conference sponsored by the United Nations Center for Regional Development (UNCRD) in Nagoya, Japan examined the growing importance of labor migration for four major Asian labor importers (Japan, Hong Kong, Malaysia, and Singapore) and five major labor exporters (Bangladesh, Korea, Pakistan, Philippines, and Thailand).... The conference concluded that international labor migration would increase within Asia because the tight labor markets and rising wages which have stimulated Japanese investment in other Asian nations, for example, have not been sufficient to eliminate migration push and pull forces...." excerpt

  17. Biology task group

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    The accomplishments of the task group studies over the past year are reviewed. The purposes of biological investigations, in the context of subseabed disposal, are: an evaluation of the dose to man; an estimation of effects on the ecosystem; and an estimation of the influence of organisms on and as barriers to radionuclide migration. To accomplish these ends, the task group adopted the following research goals: (1) acquire more data on biological accumulation of specific radionuclides, such as those of Tc, Np, Ra, and Sr; (2) acquire more data on transfer coefficients from sediment to organism; (3) calculate mass transfer rates, construct simple models using them, and estimate collective dose commitment; (4) identify specific pathways or transfer routes, determine the rates of transfer, and make dose limit calculations with simple models; (5) calculate dose rates to and estimate irradiation effects on the biota as a result of waste emplacement, by reference to background irradiation calculations; (6) examine the effect of the biota on altering sediment/water radionuclide exchange; (7) consider the biological data required to address different accident scenarios; (8) continue to provide the basic biological information for all of the above, and ensure that the system analysis model is based on the most realistic and up-to-date concepts of marine biologists; and (9) ensure by way of free exchange of information that the data used in any model are the best currently available

  18. FPGA BASED HARDWARE KEY FOR TEMPORAL ENCRYPTION

    Directory of Open Access Journals (Sweden)

    B. Lakshmi

    2010-09-01

    Full Text Available In this paper, a novel encryption scheme with a time-based key technique on an FPGA is presented. The time-based key technique ensures that the right key is entered at the right time, and hence the vulnerability of the encryption to brute-force attack is eliminated. Presently available encryption systems suffer from brute-force attacks, and in such a case the time taken for breaking a code depends on the system used for cryptanalysis. The proposed scheme provides an effective method in which time is taken as the second dimension of the key, so that the same system can defend against brute-force attacks more vigorously. In the proposed scheme, the key is rotated continuously and four bits are drawn from the key, with their concatenated value representing the delay the system has to wait. This forms the time-based key concept. Also, the key-based function selection from a pool of functions enhances the confusion and diffusion to defend against linear and differential attacks, while the inclusion of the time factor makes brute-force attack nearly impossible. In the proposed scheme, the key scheduler is implemented on the FPGA, which generates the right key at the right time intervals; it is connected to a Nios II processor (a soft processor realized on the Altera FPGA) that communicates the keys to the personal computer through JTAG (Joint Test Action Group) communication, and the computer is used to perform encryption (or decryption). In this case the FPGA serves as a hardware key (dongle) for data encryption (or decryption).
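
    As a software model of the key-scheduling idea just described (continuous rotation of the key with four bits extracted as a mandatory delay), the following is a rough sketch; the register width, rotation step and number of rounds are assumptions, and the real design runs in FPGA logic rather than software.

    ```python
    def key_schedule(key_bits, rounds=8):
        """Model of the time-based key idea: the key register is rotated continuously,
        four bits are drawn from it, and their value is the number of time units the
        user must wait before the next key fragment becomes valid."""
        width = 64
        key = key_bits & ((1 << width) - 1)
        schedule = []
        for _ in range(rounds):
            key = ((key << 1) | (key >> (width - 1))) & ((1 << width) - 1)  # rotate left
            delay = key & 0xF                        # four bits -> required wait time
            schedule.append(delay)
        return schedule

    # A decryptor that does not honour these delays falls out of step with the key
    # scheduler, which is what makes a straightforward brute-force sweep impractical.
    print(key_schedule(0x0123456789ABCDEF))
    ```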

  19. Bayesian Estimation and Inference using Stochastic Hardware

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2016-03-01

    Full Text Available In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes in the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
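
    The Bayesian recursive equation solved by the BEAST tracker can be illustrated with a minimal discrete sketch; the matrix shapes and example numbers below are assumptions, and the hardware version realizes the same recursion with stochastic bit streams rather than floating-point arithmetic.

    ```python
    import numpy as np

    def bayes_track(prior, transition, likelihood, observations):
        """Discrete Bayes filter: at each step the position belief is propagated through
        the target's transition model and then reweighted by the likelihood of the noisy
        sensor observation (predict-update form of the Bayesian recursive equation)."""
        belief = np.asarray(prior, dtype=float)
        for z in observations:
            belief = transition.T @ belief          # predict: sum_j P(x'|x_j) * b(x_j)
            belief *= likelihood[:, z]              # update: multiply by P(z|x')
            belief /= belief.sum()                  # normalise
        return belief

    # Three positions, a target that tends to drift right, and a sensor that reports the
    # true cell 80% of the time.
    T = np.array([[0.7, 0.3, 0.0], [0.0, 0.7, 0.3], [0.3, 0.0, 0.7]])
    L = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
    print(bayes_track([1/3, 1/3, 1/3], T, L, observations=[0, 1, 1, 2]))
    ```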

  20. Sharing open hardware through ROP, the robotic open platform

    NARCIS (Netherlands)

    Lunenburg, J.; Soetens, R.P.T.; Schoenmakers, F.; Metsemakers, P.M.G.; van de Molengraft, M.J.G.; Steinbuch, M.; Behnke, S.; Veloso, M.; Visser, A.; Xiong, R.

    2014-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  1. Sharing open hardware through ROP, the Robotic Open Platform

    NARCIS (Netherlands)

    Lunenburg, J.J.M.; Soetens, R.P.T.; Schoenmakers, Ferry; Metsemakers, P.M.G.; Molengraft, van de M.J.G.; Steinbuch, M.

    2013-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  2. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  3. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
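
    A minimal software model of the pacing idea in this abstract might look as follows; the chunk sizes, the per-tick drain used to stand in for the network, and all names are assumptions, since the patent describes a hardware token counter inside the DMA engine.

    ```python
    from collections import deque

    def paced_send(chunks, max_outstanding_bytes, link_drain_per_tick):
        """Pacing sketch: a token counter tracks how many bytes of a remote-get reply are
        currently in flight, and the next chunk is injected only while that counter stays
        below a configured ceiling."""
        pending = deque(chunks)
        in_flight = 0                      # the "token counter"
        timeline = []
        tick = 0
        while pending or in_flight:
            while pending and in_flight + len(pending[0]) <= max_outstanding_bytes:
                chunk = pending.popleft()
                in_flight += len(chunk)    # count bytes put on the network
                timeline.append((tick, len(chunk)))
            in_flight = max(0, in_flight - link_drain_per_tick)   # bytes delivered
            tick += 1
        return timeline                    # (tick, bytes injected) pairs

    print(paced_send([b"x" * 512] * 6, max_outstanding_bytes=1024, link_drain_per_tick=256))
    ```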

  4. Hardware/software virtualization for the reconfigurable multicore platform.

    NARCIS (Netherlands)

    Ferger, M.; Al Kadi, M.; Hübner, M.; Koedam, M.L.P.J.; Sinha, S.S.; Goossens, K.G.W.; Marchesan Almeida, Gabriel; Rodrigo Azambuja, J.; Becker, Juergen

    2012-01-01

    This paper presents the Flex Tiles approach for the virtualization of hardware and software for a reconfigurable multicore architecture. The approach enables the virtualization of a dynamic tile-based hardware architecture consisting of processing tiles connected via a network-on-chip and a

  5. Flexible hardware design for RSA and Elliptic Curve Cryptosystems

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Örs, S.B.; Okamoto, T.

    2004-01-01

    This paper presents a scalable hardware implementation of both commonly used public key cryptosystems, RSA and Elliptic Curve Cryptosystem (ECC) on the same platform. The introduced hardware accelerator features a design which can be varied from very small (less than 20 Kgates) targeting wireless

  6. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and the software related to acquisition. The hardware consists of an analog-to-digital conversion card, developed in wire-wrap technique. Its function is to digitize the analog signals provided by the gamma camera. The acquisitions are made in list or frame mode. (C.G.C.)

  7. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware speci...

  8. Neuronal Migration Disorders

    Science.gov (United States)

    Neuronal migration disorders are birth defects caused by the abnormal migration of neurons in the developing brain and nervous system. ...

  9. Migration og etnicitet

    DEFF Research Database (Denmark)

    Christiansen, Connie Carøe

    2004-01-01

    Migration and ethnicity are topical and interconnected phenomena, since migration increases the points of contact between population groups. Ethnicities take shape as boundaries are drawn between these groups. Contrary to the expectations of modernisation theories, ethnicity has not disappeared as a traditional or primordial... way of creating belonging; globally, our era instead appears as an "age of migration", which is apparently also an age in which cultural characteristics, in the form of ethnicity, constitute important lines along which groups set themselves apart from one another. Both migration and ethnicity bring into focus... it takes place in the receiving country, but newer perspectives on migration, exemplified by concepts such as citizenship, transnationalism and diaspora, look beyond the framework of the nation state and include the consequences of migration for the sending countries....

  10. Indonesia's migration transition.

    Science.gov (United States)

    Hugo, G

    1995-01-01

    This article describes population movements in Indonesia in the context of rapid and marked social and economic change. Foreign investment in Indonesia is increasing, and global mass media is available to many households. Agriculture is being commercialized, and structural shifts are occurring in the economy. Educational levels are increasing, and women's role and status are shifting. Population migration has increased over the decades, both short and long distance, permanent and temporary, legal and illegal, and migration to and between urban areas. This article focuses specifically on rural-to-urban migration and international migration. Population settlements are dense in the agriculturally rich inner areas of Java, Bali, and Madura. Although the rate of growth of the gross domestic product was 6.8% annually during 1969-94, the World Bank ranked Indonesia as a low-income economy in 1992 because of the large population size. Income per capita is US $670. Indonesia is becoming a large exporter of labor to the Middle East, particularly women. The predominance of women as overseas contract workers is changing women's role and status in the family and is controversial due to the cases of mistreatment. Malaysia's high economic growth rate of over 8% per year means an additional 1.3 million foreign workers and technicians are needed. During the 1980s urban growth increased at a very rapid rate. Urban growth tended to occur along corridors and major transportation routes around urban areas. It is posited that most of the urban growth is due to rural-to-urban migration. Data limitations prevent an exact determination of the extent of rural-to-urban migration. More women are estimated to be involved in movements to cities during the 1980s compared to the 1970s. Recruiters and middlemen have played an important role in rural-to-urban migration and international migration.

  11. A Practical Introduction to HardwareSoftware Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  12. Avian Alert - a bird migration early warning system

    OpenAIRE

    van Gasteren, H.; Shamoun-Baranes, J.; Ginati, A.; Garofalo, G.

    2008-01-01

    Every year billions of birds migrate from breeding areas to their wintering ranges, some travelling over 10,000 km. Stakeholders interested in aviation flight safety, spread of disease, conservation, education, urban planning, meteorology, wind turbines and bird migration ecology are interested in information on bird movements. Collecting and disseminating useful information about such mobile creatures exhibiting diverse behaviour is no simple task. However, ESA’s Integrated Application Promo...

  13. Implementing the lattice Boltzmann model on commodity graphics hardware

    International Nuclear Information System (INIS)

    Kaufman, Arie; Fan, Zhe; Petkov, Kaloian

    2009-01-01

    Modern graphics processing units (GPUs) can perform general-purpose computations in addition to the native specialized graphics operations. Due to the highly parallel nature of graphics processing, the GPU has evolved into a many-core coprocessor that supports high data parallelism. Its performance has been growing at a rate of squared Moore's law, and its peak floating point performance exceeds that of the CPU by an order of magnitude. Therefore, it is a viable platform for time-sensitive and computationally intensive applications. The lattice Boltzmann model (LBM) computations are carried out via linear operations at discrete lattice sites, which can be implemented efficiently using a GPU-based architecture. Our simulations produce results comparable to the CPU version while improving performance by an order of magnitude. We have demonstrated that the GPU is well suited for interactive simulations in many applications, including simulating fire, smoke, lightweight objects in wind, jellyfish swimming in water, and heat shimmering and mirage (using the hybrid thermal LBM). We further advocate the use of a GPU cluster for large scale LBM simulations and for high performance computing. The Stony Brook Visual Computing Cluster has been the platform for several applications, including simulations of real-time plume dispersion in complex urban environments and thermal fluid dynamics in a pressurized water reactor. Major GPU vendors have been targeting the high performance computing market with GPU hardware implementations. Software toolkits such as NVIDIA CUDA provide a convenient development platform that abstracts the GPU and allows access to its underlying stream computing architecture. However, software programming for a GPU cluster remains a challenging task. We have therefore developed the Zippy framework to simplify GPU cluster programming. Zippy is based on global arrays combined with the stream programming model and it hides the low-level details of the
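
    As a rough illustration of the per-lattice-site operations that this record describes as well suited to GPU data parallelism, the following is a minimal CPU-side D2Q9 lattice Boltzmann step (BGK collision plus streaming) written with NumPy. It is a generic textbook formulation, not the authors' GPU code or the Zippy framework.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.6):
    """One collision-and-streaming update; f has shape (9, nx, ny)."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau      # BGK collision, purely local
    for i in range(9):                                 # streaming: shift along each velocity
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f
```

    Every lattice site is updated independently in the collision step, which is what makes the method map naturally onto a many-core GPU.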

  14. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating the hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  15. How to create successful Open Hardware projects — About White Rabbits and open fields

    International Nuclear Information System (INIS)

    Bij, E van der; Arruat, M; Cattin, M; Daniluk, G; Cobas, J D Gonzalez; Gousiou, E; Lewis, J; Lipinski, M M; Serrano, J; Stana, T; Voumard, N; Wlostowski, T

    2013-01-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation was controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs that find their way into new fields.

  16. How to create successful Open Hardware projects - About White Rabbits and open fields

    CERN Document Server

    van der Bij, E; Lewis, J; Stana, T; Wlostowski, T; Gousiou, E; Serrano, J; Arruat, M; Lipinski, M M; Daniluk, G; Voumard, N; Cattin, M

    2013-01-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation was controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs that find their way into new fields.

  17. Managing migration: the Brazilian case

    OpenAIRE

    Eduardo L. G. Rios-Neto

    2005-01-01

    The objective of this paper is to present the Brazilian migration experience and its relationship with migration management. The article is divided into three parts. First, it reviews some basic facts regarding Brazilian immigration and emigration processes. Second, it focuses on some policy and legal issues related to migration. Finally, it addresses five issues regarding migration management in Brazil.

  18. Monitoring Particulate Matter with Commodity Hardware

    Science.gov (United States)

    Holstius, David

    Health effects attributed to outdoor fine particulate matter (PM 2.5) rank it among the risk factors with the highest health burdens in the world, annually accounting for over 3.2 million premature deaths and over 76 million lost disability-adjusted life years. Existing PM2.5 monitoring infrastructure cannot, however, be used to resolve variations in ambient PM2.5 concentrations with adequate spatial and temporal density, or with adequate coverage of human time-activity patterns, such that the needs of modern exposure science and control can be met. Small, inexpensive, and portable devices, relying on newly available off-the-shelf sensors, may facilitate the creation of PM2.5 datasets with improved resolution and coverage, especially if many such devices can be deployed concurrently with low system cost. Datasets generated with such technology could be used to overcome many important problems associated with exposure misclassification in air pollution epidemiology. Chapter 2 presents an epidemiological study of PM2.5 that used data from ambient monitoring stations in the Los Angeles basin to observe a decrease of 6.1 g (95% CI: 3.5, 8.7) in population mean birthweight following in utero exposure to the Southern California wildfires of 2003, but was otherwise limited by the sparsity of the empirical basis for exposure assessment. Chapter 3 demonstrates technical potential for remedying PM2.5 monitoring deficiencies, beginning with the generation of low-cost yet useful estimates of hourly and daily PM2.5 concentrations at a regulatory monitoring site. The context (an urban neighborhood proximate to a major goods-movement corridor) and the method (an off-the-shelf sensor costing approximately USD $10, combined with other low-cost, open-source, readily available hardware) were selected to have special significance among researchers and practitioners affiliated with contemporary communities of practice in public health and citizen science. As operationalized by

  19. REVEAL: Software Documentation and Platform Migration

    Science.gov (United States)

    Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.

    2008-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validates the efforts.

  20. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    Science.gov (United States)

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency comparing with the state-of-the-art approaches.

  1. Hardware Implementation of a Bilateral Subtraction Filter

    Science.gov (United States)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) An image pixel pipeline with a 9×9-pixel window generator; b) An array of processing elements; c) An adder tree; d) A smoothing-and-delaying unit; and e) A subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for
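
    For readers unfamiliar with the operation, the following is a plain software reference sketch of a bilateral subtraction filter in the standard bilateral formulation: each pixel is replaced by the difference between its value and a weighted window average, with weights depending on spatial distance and intensity difference. The radius and sigma values are illustrative assumptions, and the FPGA module's exact weighting scheme may differ (a 9×9 window corresponds to radius 4 here).

```python
import numpy as np

def bilateral_subtract(img, radius=4, sigma_s=3.0, sigma_r=25.0):
    """Smooth with a bilateral (edge-preserving) kernel, then subtract the
    smoothed value, leaving high-frequency content. Grayscale 2-D input."""
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))           # window-geometry part
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(win - img[y, x])**2 / (2 * sigma_r**2))  # pixel-value part
            wgt = spatial * rng
            out[y, x] = img[y, x] - (wgt * win).sum() / wgt.sum()   # subtraction step
    return out
```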

  2. Prestack depth migration

    International Nuclear Information System (INIS)

    Postma, R.W.

    1991-01-01

    Two lines from the southern North Sea, with known velocity inhomogeneities in the overburden, have been pre-stack depth migrated. The pre-stack depth migrations are compared with conventional processing, one with severe distortions and one with subtle distortions on the conventionally processed sections. The line with subtle distortions is also compared with post-stack depth migration. The results on both lines were very successful. Both have already influenced drilling decisions, and have caused a modification of structural interpretation in the respective areas. Wells have been drilled on each of the lines, and well tops confirm the results. In fact, conventional processing led to incorrect locations for the wells, both of which were dry holes. The depth migrated sections indicate the incorrect placement, and on one line reveal a much better drilling location. This paper reports that even though processing costs are high for pre-stack depth migration, appropriate use can save millions of dollars in dry-hole expense.

  3. Migration = cloning; aliasing

    DEFF Research Database (Denmark)

    Hüttel, Hans; Kleist, Josva; Nestmann, Uwe

    1999-01-01

    In Obliq, a lexically scoped, distributed, object-oriented programming language, object migration was suggested as the creation of a copy of an object’s state at the target site, followed by turning the object itself into an alias, also called surrogate, for the remote copy. We consider the creation of object surrogates as an abstraction of the abovementioned style of migration. We introduce Øjeblik, a distribution-free subset of Obliq, and provide three different configuration-style semantics, which only differ in the respective aliasing model. We show that two of the semantics, one of which matches Obliq’s implementation, render migration unsafe, while our new proposal for a third semantics is provably safe. Our work suggests a straightforward repair of Obliq’s aliasing model such that it allows programs to safely migrate objects.

  4. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents a novel work on hardware realization of symmetric image encryption utilizing chaos based continuous systems as pseudo random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with a hardware and throughput performance surpassing known encryption systems.
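
    As a toy illustration of the stream-cipher side of such designs (a chaotic generator producing a keystream that is XOR-ed with the image bytes), here is a minimal Python sketch. It uses a discrete logistic map as a stand-in for the continuous chaotic systems and post-processing described in the thesis, so it should be read as the general idea rather than the implemented cipher.

```python
import numpy as np

def logistic_keystream(n, x0=0.61, r=3.99):
    """Toy chaos-based keystream: logistic map, one byte per iteration.
    x0 and r play the role of the key here and are purely illustrative."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_cipher(data: bytes, x0=0.61, r=3.99) -> bytes:
    """XOR the data with the keystream; the same call encrypts and decrypts."""
    ks = logistic_keystream(len(data), x0, r)
    return bytes(b ^ int(k) for b, k in zip(data, ks))
```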

  5. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. The proposed architecture provides excellent flexibility with respect to the different audio applications implemented, high quality audio, and an energy efficient solution.

  6. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.

  7. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.
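
    The copy mechanism described in this record can be modelled in a few lines of software: one register holds the destination address, another holds a trigger value, and a state machine copies the counters to memory when it sees the trigger. The class and register names below are invented for illustration and are not taken from the patent.

```python
class CounterCopyEngine:
    """Toy model of hardware-triggered copying of performance counters to memory."""

    def __init__(self, counters, memory):
        self.counters = counters   # list of performance counter values
        self.memory = memory       # dict modelling physical memory (address -> value)
        self.addr_reg = 0          # first storage element: destination address
        self.start_reg = 0         # second storage element: copy trigger value

    def tick(self):
        # State machine: when the trigger value is seen, copy all counters to memory.
        if self.start_reg:
            for i, value in enumerate(self.counters):
                self.memory[self.addr_reg + i] = value
            self.start_reg = 0     # one-shot: clear the trigger once the copy is done
```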

  8. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...

  9. Records of Migration in the Exoplanet Configurations

    Science.gov (United States)

    Michtchenko, Tatiana A.; Rodriguez Colucci, A.; Tadeu Dos Santos, M.

    2013-05-01

    When compared to our Solar System, many exoplanet systems exhibit quite unusual planet configurations; some of these are hot Jupiters, which orbit their central stars with periods of a few days, others are resonant systems composed of two or more planets with commensurable orbital periods. It has been suggested that these configurations can be the result of migration processes originating from tidal interactions of the planets with disks and central stars. The process known as planet migration occurs due to dissipative forces which affect the planetary semi-major axes and cause the planets to move towards, or away from, the central star. In this talk, we present possible signatures of planet migration in the distribution of the hot Jupiters and resonant exoplanet pairs. For this task, we develop a semi-analytical model to describe the evolution of the migrating planetary pair, based on the fundamental concepts of conservative and dissipative dynamics of the three-body problem. Our approach is based on an analysis of the energy and the orbital angular momentum exchange between the two-planet system and an external medium; thus no specific kind of dissipative forces needs to be invoked. We show that, under the assumption that dissipation is weak and slow, the evolutionary routes of the migrating planets are traced by the stationary solutions of the conservative problem (Birkhoff, Dynamical systems, 1966). The ultimate convergence and the evolution of the system along one of these modes of motion are determined uniquely by the condition that the dissipation rate is sufficiently smaller than the proper frequencies of the system. We show that it is possible to reassemble the starting configurations and migration history of the systems on the basis of their final states, and consequently to constrain the parameters of the physical processes involved.

  10. Shorebird Migration Patterns in Response to Climate Change: A Modeling Approach

    Science.gov (United States)

    Smith, James A.

    2010-01-01

    The availability of satellite remote sensing observations at multiple spatial and temporal scales, coupled with advances in climate modeling and information technologies, offers new opportunities for the application of mechanistic models to predict how continental scale bird migration patterns may change in response to environmental change. In earlier studies, we explored the phenotypic plasticity of a migratory population of Pectoral sandpipers by simulating the movement patterns of an ensemble of 10,000 individual birds in response to changes in stopover locations as an indicator of the impacts of wetland loss and inter-annual variability on the fitness of migratory shorebirds. We used an individual based, biophysical migration model, driven by remotely sensed land surface data, climate data, and biological field data. Mean stop-over durations and stop-over frequency with latitude predicted from our model for nominal cases were consistent with results reported in the literature and available field data. In this study, we take advantage of new computing capabilities enabled by recent GP-GPU computing paradigms and commodity hardware (general purpose computing on graphics processing units). Several aspects of our individual based (agent modeling) approach lend themselves well to GP-GPU computing. We have been able to allocate compute-intensive tasks to the graphics processing units, and now simulate ensembles of 400,000 birds at varying spatial resolutions along the central North American flyway. We are incorporating additional, species specific, mechanistic processes to better reflect the processes underlying bird phenotypic plasticity responses to different climate change scenarios in the central U.S.

  11. En fornemmelse for migration

    DEFF Research Database (Denmark)

    Schütze, Laura Maria

    The thesis examines how place, the museum's role as an actor, and religion are relevant to the production of migration at Immigrantmuseet (2012) and in Københavns Museum's exhibition At blive københavner (2010). The thesis is based on exhibition analysis and on interviews with relevant museum professionals ..., are used as devices to nuance migration and to distance the exhibition from the public debate on immigration. The thesis points out that the museum production of recent Danish history is marked by an absence of religion. This is because museum practices and traditions reflect a ... identities that we take for granted: nations, cities, women - as well as migration and religion. The thesis consequently argues that museums' production of (material) religion is a highly relevant, but only sparsely explored, field of study for the sociology of religion.

  12. What's driving migration?

    Science.gov (United States)

    Kane, H

    1995-01-01

    During the 1990s investment in prevention of international or internal migration declined, and crisis intervention increased. The budgets of the UN High Commissioner for Refugees and the UN Development Program remained about the same. The operating assumption is that war, persecution, famine, and environmental and social disintegration are inevitable. Future efforts should be directed to stabilizing populations through investment in sanitation, public health, preventive medicine, land tenure, environmental protection, and literacy. Forces pushing migration are likely to increase in the future. Forces include depletion of natural resources, income disparities, population pressure, and political disruption. The causes of migration are not constant. In the past, migration occurred during conquests, settlement, intermarriage, or religious conversion and was a collective movement. Current migration involves mass movement of individuals and the struggle to survive. There is new pressure to leave poor squatter settlements and the scarcities in land, water, and food. The slave trade between the 1500s and the 1800s linked continents, and only 2-3 million voluntarily crossed national borders. Involuntary migration began in the early 1800s when European feudal systems were in a decline, and people sought freedom. Official refugees, who satisfy the strict 1951 UN definition, increased from 15 million in 1980 to 23 million in 1990 but remained a small proportion of international migrants. Much of the mass movement occurs between developing countries. Migration to developed countries is accompanied by growing intolerance, which is misinformed. China practices a form of "population transfer" in Tibet in order to dilute Tibetan nationalism. Colonization of countries is a new less expensive form of control over territory. Eviction of minorities is another popular strategy in Iraq. Public works projects supported by foreign aid displace millions annually. War and civil conflicts

  13. Unix Application Migration Guide

    CERN Document Server

    Microsoft. Redmond

    2003-01-01

    Drawing on the experience of Microsoft consultants working in the field, as well as external organizations that have migrated from UNIX to Microsoft® Windows®, this guide offers practical, prescriptive guidance on the issues you are likely to face when porting existing UNIX applications to the Windows operating system environment. Senior IT decision makers, network managers, and operations managers will get real-world guidance and best practices on planning and implementation issues to understand the different methods through which migration or co-existence can be accomplished. Also detailing

  14. Migrating for a Profession

    DEFF Research Database (Denmark)

    Olwig, Karen Fog

    2015-01-01

    Youths from the Global South migrating for further education often face various forms of discrimination. This Caribbean case study discusses how conditions in the home country can provide a foundation for educational migration that helps the migrants overcome such obstacles and even develop a strong sense of agency and self-empowerment. In the post-WWII period, numerous Caribbean women trained in nursing at British hospitals that have been described as marred by race and gender related inequality and associated forms of exploitation. Yet, the nurses interviewed about this training emphasised ... in which it enabled these Caribbean women to stake out a new life for themselves.

  15. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  16. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  17. Hardware device to physical structure binding and authentication

    Science.gov (United States)

    Hamlet, Jason R.; Stein, David J.; Bauer, Todd M.

    2013-08-20

    Detection and deterrence of device tampering and subversion may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a binding of the hardware device and a physical structure. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates an internal PUF value. Binding logic is coupled to receive the internal PUF value, as well as an external PUF value associated with the physical structure, and generates a binding PUF value, which represents the binding of the hardware device and the physical structure. The cryptographic fingerprint unit also includes a cryptographic unit that uses the binding PUF value to allow a challenger to authenticate the binding.
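
    A highly simplified software analogue of the binding idea: combine the device's internal PUF value with the external PUF value associated with the physical structure into a binding value, then use that value as a key in a challenge-response exchange. The XOR-then-hash construction and the HMAC response below are assumptions for illustration only, not the patented circuit.

```python
import hashlib
import hmac

def binding_puf_value(internal_puf: bytes, external_puf: bytes) -> bytes:
    """Derive a binding value from the internal and external PUF responses."""
    mixed = bytes(a ^ b for a, b in zip(internal_puf, external_puf))
    return hashlib.sha256(mixed).digest()

def respond_to_challenge(binding_value: bytes, challenge: bytes) -> bytes:
    """A challenger that knows the expected binding value can verify this
    keyed response and thereby authenticate the device/structure pairing."""
    return hmac.new(binding_value, challenge, hashlib.sha256).digest()
```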

  18. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-01-01

    This thesis presents a novel work on hardware realization of symmetric image encryption utilizing chaos based continuous systems as pseudo random number generators. Digital implementation of chaotic systems results in serious degradations

  19. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    The computer graphics system performance is increasing faster than any other computing application. Algorithms for line clipping against convex polygons and lines have been studied for a long time and many research papers have been published so far. In spite of the latest graphical hardware development and the significant increase in performance, clipping is still a bottleneck of any graphical system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed, and a hardware implementation of the line clipping algorithm is presented, formulated, and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
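
    The positional code generator mentioned in this record suggests a Cohen-Sutherland-style scheme. As background, the sketch below computes the classic region codes and the trivial accept/reject decision in software; the authors' FPGA design may differ in detail.

```python
# Region codes for a Cohen-Sutherland-style clipper
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Positional code of a point relative to the clip window."""
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def classify_segment(p0, p1, window):
    """Decision made from the two positional codes of a line segment."""
    xmin, ymin, xmax, ymax = window
    c0 = outcode(*p0, xmin, ymin, xmax, ymax)
    c1 = outcode(*p1, xmin, ymin, xmax, ymax)
    if c0 == 0 and c1 == 0:
        return "accept"   # both endpoints inside the window
    if c0 & c1:
        return "reject"   # both endpoints outside on the same side
    return "clip"         # an intersection computation is still needed
```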

  20. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations on some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. It focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts : iSCSI and other technologies and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers using a gigabit ethernet network. It covers block access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using linux software RAID and IDE cards and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  1. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-01-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally

  2. Improvement of hardware basic testing : Identification and development of a scripted automation tool that will support hardware basic testing

    OpenAIRE

    Rask, Ulf; Mannestig, Pontus

    2002-01-01

    In the ever-increasing pace of development, circuits and hardware are no exception. Hardware designs grow and circuits get more complex at the same time as market pressure lowers the expected time-to-market. In this rush, verification methods often lag behind. Hardware manufacturers must be aware of the importance of total verification if they want to avoid quality flaws and broken deadlines which in the long run will lead to delayed time-to-market, bad publicity and a decreasing market sha...

  3. Basics of spectroscopic instruments. Hardware of NMR spectrometer

    International Nuclear Information System (INIS)

    Sato, Hajime

    2009-01-01

    NMR is a powerful tool for structure analysis of small molecules, natural products, biological macromolecules, synthesized polymers, samples from material science and so on. Magnetic Resonance Imaging (MRI) is applicable to plants and animals. Because most NMR experiments can be done in an automation mode, one can easily forget about the hardware of NMR spectrometers. It would be good to understand the features and performance of NMR spectrometers. Here I present the hardware of a modern NMR spectrometer which is fully equipped with digital technology. (author)

  4. Utilizing IXP1200 hardware and software for packet filtering

    OpenAIRE

    Lindholm, Jeffery L.

    2004-01-01

    As network processors have advanced in speed and efficiency they have become more and more complex in both hardware and software configurations. Intel's IXP1200 is one of these new network processors that has been given to different universities worldwide to conduct research on. The goal of this thesis is to take the first step in starting that research by providing a stable system that can provide a reliable platform for further research. This thesis introduces the fundamental hardware of In...

  5. Security challenges and opportunities in adaptive and reconfigurable hardware

    OpenAIRE

    Costan, Victor Marius; Devadas, Srinivas

    2011-01-01

    We present a novel approach to building hardware support for providing strong security guarantees for computations running in the cloud (shared hardware in massive data centers), while maintaining the high performance and low cost that make cloud computing attractive in the first place. We propose augmenting regular cloud servers with a Trusted Computation Base (TCB) that can securely perform high-performance computations. Our TCB achieves cost savings by spreading functionality across two pa...

  6. Review of Maxillofacial Hardware Complications and Indications for Salvage

    OpenAIRE

    Hernandez Rosa, Jonatan; Villanueva, Nathaniel L.; Sanati-Mehrizy, Paymon; Factor, Stephanie H.; Taub, Peter J.

    2015-01-01

    From 2002 to 2006, more than 117,000 facial fractures were recorded in the U.S. National Trauma Database. These fractures are commonly treated with open reduction and internal fixation. While in place, the hardware facilitates successful bony union. However, when postoperative complications occur, the plates may require removal before bony union. Indications for salvage versus removal of the maxillofacial hardware are not well defined. A literature review was performed to identify instances w...

  7. Testing Microgravity Flight Hardware Concepts on the NASA KC-135

    Science.gov (United States)

    Motil, Susan M.; Harrivel, Angela R.; Zimmerli, Gregory A.

    2001-01-01

    This paper provides an overview of utilizing the NASA KC-135 Reduced Gravity Aircraft for the Foam Optics and Mechanics (FOAM) microgravity flight project. The FOAM science requirements are summarized, and the KC-135 test-rig used to test hardware concepts designed to meet the requirements are described. Preliminary results regarding foam dispensing, foam/surface slip tests, and dynamic light scattering data are discussed in support of the flight hardware development for the FOAM experiment.

  8. Accelerator Technology: Injection and Extraction Related Hardware: Kickers and Septa

    CERN Document Server

    Barnes, M J; Mertens, V

    2013-01-01

    This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '8.7 Injection and Extraction Related Hardware: Kickers and Septa' of the Chapter '8 Accelerator Technology' with the content: 8.7 Injection and Extraction Related Hardware: Kickers and Septa 8.7.1 Fast Pulsed Systems (Kickers) 8.7.2 Electrostatic and Magnetic Septa

  9. Learning Machines Implemented on Non-Deterministic Hardware

    OpenAIRE

    Gupta, Suyog; Sindhwani, Vikas; Gopalakrishnan, Kailash

    2014-01-01

    This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay -- for the most part -- oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design methodology advocated here hinges on the deployment of compute-intensive machine learning kernels onto compute platforms that trade-off deter...

  10. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed. Included are the methods employed to implement this system while taking advantage of the Real-Time features of RSX-11D. Comparisons are made between this system and an earlier nonmodular system. The controlled hardware includes magnet power supplies, stepping motors, DVM's, and multiplexors, and is interfaced through CAMAC. 4 figures

  11. Shielded battery syndrome: a new hardware complication of deep brain stimulation.

    Science.gov (United States)

    Chelvarajah, Ramesh; Lumsden, Daniel; Kaminska, Margaret; Samuel, Michael; Hulse, Natasha; Selway, Richard P; Lin, Jean-Pierre; Ashkan, Keyoumars

    2012-01-01

    Deep brain stimulation hardware is constantly advancing. The last few years have seen the introduction of rechargeable cell technology into the implanted pulse generator design, allowing for longer battery life and fewer replacement operations. The Medtronic® system requires an additional pocket adaptor when revising a non-rechargeable battery such as their Kinetra® to their rechargeable Activa® RC. This additional hardware item can, if it migrates superficially, become an impediment to the recharging of the battery and negate the intended technological advance. To report the emergence of the 'shielded battery syndrome', which has not been previously described. We reviewed our deep brain stimulation database to identify cases of recharging difficulties reported by patients with Activa RC implanted pulse generators. Two cases of shielded battery syndrome were identified. The first required surgery to reposition the adaptor to the deep aspect of the subcutaneous pocket. In the second case, it was possible to perform external manual manipulation to restore the adaptor to its original position deep to the battery. We describe strategies to minimise the occurrence of the shielded battery syndrome and advise vigilance in all patients who experience difficulty with recharging after replacement surgery of this type for the implanted pulse generator. Copyright © 2012 S. Karger AG, Basel.

  12. MRI monitoring of focused ultrasound sonications near metallic hardware.

    Science.gov (United States)

    Weber, Hans; Ghanouni, Pejman; Pascal-Tenorio, Aurea; Pauly, Kim Butts; Hargreaves, Brian A

    2018-07-01

    To explore the temperature-induced signal change in two-dimensional multi-spectral imaging (2DMSI) for fast thermometry near metallic hardware to enable MR-guided focused ultrasound surgery (MRgFUS) in patients with implanted metallic hardware. 2DMSI was optimized for temperature sensitivity and applied to monitor focused ultrasound surgery (FUS) sonications near metallic hardware in phantoms and ex vivo porcine muscle tissue. Further, we evaluated its temperature sensitivity for in vivo muscle in patients without metallic hardware. In addition, we performed a comparison of temperature sensitivity between 2DMSI and conventional proton-resonance-frequency-shift (PRFS) thermometry at different distances from metal devices and different signal-to-noise ratios (SNR). 2DMSI thermometry enabled visualization of short ultrasound sonications near metallic hardware. Calibration using in vivo muscle yielded a constant temperature sensitivity for temperatures below 43 °C. For an off-resonance coverage of ± 6 kHz, we achieved a temperature sensitivity of 1.45%/K, resulting in a minimum detectable temperature change of ∼2.5 K for an SNR of 100 with a temporal resolution of 6 s per frame. The proposed 2DMSI thermometry has the potential to allow MR-guided FUS treatments of patients with metallic hardware and therefore expand its reach to a larger patient population. Magn Reson Med 80:259-271, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Tracking individuals is a prominent application in domains such as surveillance or smart environments. This paper provides a development of a multiple camera setup with jointed view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among different views. The expensive computational parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time spent traversing the TCP/IP stack, in both the software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces the network latency by up to 100 times compared to the software ORB.
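
    A minimal software analogue of the probing idea used for the latency comparison: time a request/response round trip over TCP and average over many samples. The host, port, payload size and the echo-style responder are placeholders, not details of the smart-camera platform or its ORB.

```python
import socket
import time

def probe_rtt(host, port, payload=b"x" * 64, n=100):
    """Average round-trip time of n small request/response exchanges.
    Assumes the peer simply echoes the payload back."""
    samples = []
    with socket.create_connection((host, port)) as s:
        for _ in range(n):
            t0 = time.perf_counter()
            s.sendall(payload)
            s.recv(len(payload))   # sketch: a robust probe would loop until all bytes arrive
            samples.append(time.perf_counter() - t0)
    return sum(samples) / len(samples)
```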

  14. Compiling quantum circuits to realistic hardware architectures using temporal planners

    Science.gov (United States)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allow great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem, and generated a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.

  15. Migration to Alma/Primo: A Case Study of Central Washington University

    Directory of Open Access Journals (Sweden)

    Ping Fu

    2015-12-01

    This paper describes how Central Washington University Libraries (CWUL) interacted and collaborated with the Orbis Cascade Alliance (OCA) Shared Integrated Library System's (SILS) Implementation Team and Ex Libris to process systems and data migration from Innovative Interfaces Inc.'s Millennium integrated library system to Alma/Primo, Ex Libris' next-generation library management solution and discovery and delivery solution. A chronological review method was used for this case study to provide an overall picture of key migration events, tasks, and implementation efforts, including pre-migration cleanup, migration forms, integration with external systems, testing, cutover, post-migration cleanup, and reporting and fixing outstanding issues. A three-phase migration model was studied, and a questionnaire was designed to collect data from functional leads to determine staff time spent on the migration tasks. Staff time spent on each phase was analyzed and quantitated, with some top essential elements for the success of the migration identified through the case review and data analysis. An analysis of the Ex Libris' Salesforce cases created during the migration and post-migration was conducted to be used for identifying roles of key librarians and staff functional leads during the migration.

  16. Migrating the Light

    DEFF Research Database (Denmark)

    Sørensen, Bent

    The migration of Blaga’s universalist, even centralist poems from Romanian of the first third of the 20th C. into American of the first fifth of the 21st C. illustrates the uses of Pierre Joris’s nomadic methods. My translations of Blaga read well for a teenage audience whose only exposure to lit...

  17. Describing migration spatial structure

    NARCIS (Netherlands)

    Rogers, A; Willekens, F; Little, J; Raymer, J

    The age structure of a population is a fundamental concept in demography and is generally depicted in the form of an age pyramid. The spatial structure of an interregional system of origin-destination-specific migration streams is, however, a notion lacking a widely accepted definition. We offer a

  18. Brain Migration Revisited

    Science.gov (United States)

    Vinokur, Annie

    2006-01-01

    The "brain drain/brain gain" debate has been going on for the past 40 years, with irresolvable theoretical disputes and unenforceable policy recommendations that economists commonly ascribe to the lack of reliable empirical data. The recent report of the World Bank, "International migration, remittances and the brain drain", documents the…

  19. Migration and Africa

    DEFF Research Database (Denmark)

    Zoppi, Marco

    2014-01-01

    European powers imposed the nation-state on Africa through colonialism. But even after African independencies, mainstream discourses and government policies have amplified the idea that sedentariness and the state are the only acceptable mode of modernity. Migration is portrayed as a menace...

  20. Migration as Adventure

    DEFF Research Database (Denmark)

    Olwig, Karen Fog

    2018-01-01

    Narratives of adventure constitute a well-established convention of describing travel experiences, yet the significance of this narrative genre in individuals’ accounts of their migration and life abroad has been little investigated. Drawing on Simmel and Bakhtin, among others, this article...

  1. Digitizing migration heritage

    DEFF Research Database (Denmark)

    Marselis, Randi

    2011-01-01

    Museums are increasingly digitizing their collections and making them available to the public on-line. Creating such digital resources may become a means of social inclusion. For museums that acknowledge migration history and cultures of ethnic minority groups as important subjects in multiethnic...

  2. The politicisation of migration

    NARCIS (Netherlands)

    van der Brug, W.; D' Amato, G.; Berkhout, J.; Ruedin, D.

    2015-01-01

    Why are migration policies sometimes heavily contested and high on the political agenda? And why do they, at other moments and in other countries, hardly lead to much public debate? The entrance and settlement of migrants in Western Europe has prompted various political reactions. In some countries

  3. Migration pathways in soils

    International Nuclear Information System (INIS)

    Gronow, J.R.

    1986-01-01

    This study looked at diffusive migration through three types of deformation; the projectile pathways, hydraulic fractures of the sediments and faults, and was divided into three experimental areas: autoradiography, the determination of diffusion coefficients and electron microscopy of model projectile pathways in clay. For the autoradiography, unstressed samples were exposed to two separate isotopes, Pm-147 (a possible model for Am behaviour) and the poorly sorbed iodide-125. The results indicated that there was no enhanced migration through deformed kaolin samples nor through fractured Great Meteor East (GME) sediment, although some was evident through the projectile pathways in GME and possibly through the GME sheared samples. The scanning electron microscopy of projectile pathways in clay showed that emplacement of a projectile appeared to have no effect on the orientation of particles at distances greater than two projectile radii from the centre of a projectile pathway. It showed that the particles were not simply aligned with the direction of motion of the projectile but that, the closer to the surface of a particular pathway, the closer the particles lay to their original orientation. This finding was of interest from two points of view: i) the ease of migration of a pollutant along the pathway, and ii) possible mechanisms of hole closure. It was concluded that, provided that there is no advective migration, the transport of radionuclides through sediments containing these defects would not be significantly more rapid than in undeformed sediments. (author)

  4. International Migration of Couples

    DEFF Research Database (Denmark)

    Junge, Martin; Munk, Martin D.; Nikolka, Till

    2018-01-01

    Migrant self-selection is important to labor markets and public finances in both origin and destination countries. We develop a theoretical model regarding the migration of dual-earner couples and test it using population-wide administrative data from Denmark. Our model predicts that the probabil...

  5. PARTICLE SWARM OPTIMIZATION OF TASK SCHEDULING IN CLOUD COMPUTING

    OpenAIRE

    Payal Jaglan*, Chander Diwakar

    2016-01-01

    Resource provisioning and pricing modeling in cloud computing make it an inevitable technology on both the developer and consumer end. Easy accessibility of software and freedom of hardware configuration increase its demand in the IT industry. Its ability to provide a user-friendly environment, software independence, quality, a pricing index and easy accessibility of infrastructure via the internet adds to that demand. Task scheduling plays an important role in cloud computing systems. Task scheduling in cloud computing mea...
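    The abstract above is truncated, but the technique it names is well known. As a hedged illustration only, the Python sketch below applies particle swarm optimization to map a handful of tasks onto virtual machines so that the makespan (finish time of the busiest VM) is minimized; the task lengths, VM speeds, swarm size and PSO constants are all invented for the example and are not taken from the cited work.

    ```python
    import random

    # Illustrative problem data (hypothetical): task lengths in instructions,
    # VM speeds in instructions per second.
    task_len = [400, 250, 900, 120, 640, 300]
    vm_speed = [100, 250, 400]

    def makespan(assign):
        """Finish time of the busiest VM for a task -> VM assignment."""
        load = [0.0] * len(vm_speed)
        for t, v in enumerate(assign):
            load[v] += task_len[t] / vm_speed[v]
        return max(load)

    def decode(particle):
        """Round a continuous particle position to a discrete VM index per task."""
        return [int(round(x)) for x in particle]

    n_particles, n_iter, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
    dim, n_vm = len(task_len), len(vm_speed)

    pos = [[random.uniform(0, n_vm - 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: makespan(decode(p)))[:]

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(n_vm - 1, max(0.0, pos[i][d] + vel[i][d]))
            if makespan(decode(pos[i])) < makespan(decode(pbest[i])):
                pbest[i] = pos[i][:]
                if makespan(decode(pbest[i])) < makespan(decode(gbest)):
                    gbest = pbest[i][:]

    print("best mapping:", decode(gbest), "makespan:", round(makespan(decode(gbest)), 2))
    ```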

  6. Globalization, Migration and Development

    Directory of Open Access Journals (Sweden)

    George, Susan

    2002-01-01

    Full Text Available English: Migration may become the most important branch of demography in the early decades of the new millennium in a rapidly globalizing world. This paper discusses the causes, costs and benefits of international migration to countries of the South and North, and key issues of common concern. International migration is as old as national boundaries, though its nature, volume, direction, causes and consequences have changed. The causes of migration are rooted in the rate of population growth and the proportion of youth in the population, their education and training, employment opportunities, income differentials in society, communication and transportation facilities, political freedom and human rights, and the level of urbanization. Migration benefits the South through remittances of migrants, improves the economic welfare of the population (particularly women) of South countries generally, increases investment, and leads to structural changes in the economy. However, emigration from the South has costs too, be they social or caused by factors such as brain drain. The North also benefits by migration through enhancement of economic growth, development of natural resources, improved employment prospects, social development and through exposure to immigrants' new cultures and lifestyles. Migration also has costs to the North, such as those of immigrant integration, a certain amount of destabilization of the economy, illegal immigration, and social problems of discrimination and exploitation. Issues common to both North and South include the impact on private investment, trade, international cooperation, and sustainable development. Both North and South face a dilemma in seeking an appropriate balance between importing the South's labour or its products and exporting capital and technology from the North. French (translated): Migration has arguably become the most important part of demography in the early decades of the new millennium in a rapidly changing world. ...

  7. Migration and Remittances : Recent Developments and Outlook - Transit Migration

    OpenAIRE

    World Bank Group

    2018-01-01

    This Migration and Development Brief reports global trends in migration and remittance flows, as well as developments related to the Global Compact on Migration (GCM), and the Sustainable Development Goal (SDG) indicators for volume of remittances as percentage of gross domestic product (GDP) (SDG indicator 17.3.2), reducing remittance costs (SDG indicator 10.c.1) and recruitment costs (SD...

  8. Task Decomposition in Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory

    2014-06-01

    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  9. THE ROMANIAN MIGRATIONAL EVOLUTION PHENOMENON

    Directory of Open Access Journals (Sweden)

    Cristian Raluca

    2009-05-01

    Full Text Available In our contemporary democratic society, the migration phenomenon takes on valences unknown in any previous society. Free will and the right to self-determination, much exploited by twentieth-century society, raised the possibility of interpretation of migration

  10. International migration and the gender

    OpenAIRE

    Koropecká, Markéta

    2010-01-01

    My bachelor thesis explores the connection between international migration and gender. Gender, defined as a social, not a biological term, has a huge impact on the migration process. Statistics and expert studies that have been gender sensitive since the 1970s demonstrate that women form half of all international migrants, depending on the world region, and represent a wide range of the kinds of international migration: family formation and reunification, labour migration, illegal ...

  11. The commercialization of migration.

    Science.gov (United States)

    Abrera-mangahas, M A

    1989-01-01

    International migration is not new to the Philippines. In the recent outflow of contract workers to the Middle East, there is a shift from individual and family initiated migrations to the more organized, highly commercial variety. While profit-taking intermediaries have played some role in the past, the increase in the number and influence of these intermediaries has altered the story of migration decision-making. In 1975, the signing of the bilateral labor agreement between the governments of Iran and the Philippines signalled the rising demand for Filipino contract workers. From 1970 to 1975, the number of Asian migrant workers in the Gulf countries rose from about 120,000 to 370,000. These figures rose dramatically to 3.3 million in 1985. The growing share of organized and commercialized migration has altered migration decision making. Primarily, intermediaries are able to broaden access to foreign job and high wage opportunities. Commercialization effectively raises the transaction costs for contract migration. Studies on recruitment costs and fees show that self-solicited foreign employment costs less than employment obtained through recruitment agents and intermediaries. The difference in the 2 prices is due, not only to overhead costs of intermediation, but more importantly to the rent exacted by agents from having job information and placement rights. In the Philippines in October 1987 the average placement fee was P8000, greatly exceeding the mandated maximum fee level of P5000. This average is understated because the computation includes the 17% who do not pay any fees. The widespread and popular view of recruitment intermediaries is negative, dominated by images of abuses and victims. Private intermediaries and the government bureaucracy need each other. Intermediaries need government; their consistent demand for incentives and protection is indicative. On the other hand, government expands its supervision of control of overseas employment via the

  12. Youth Labor Migration in Nepal

    OpenAIRE

    Bossavie, Laurent; Denisova, Anastasiya

    2018-01-01

    This descriptive study investigates internal and external labor migration by Nepalese youth. External labor migration is separated into the flow to India, which is unregulated, and the flow to other countries, which typically takes the form of temporary contract migration to countries with bilateral labor agreements with Nepal (referred to in Nepal as foreign employment). The study finds t...

  13. Flight Hardware Virtualization for On-Board Science Data Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  14. Musei del migration heritage / Migration heritage museums

    Directory of Open Access Journals (Sweden)

    Patrizia Dragoni

    2015-01-01

    Since the second half of the 1960s, a profound cultural innovation accompanied the radical change in the social, political and economic climate. The anthropological notion of culture as opposed to the idealistic vision, the unusual and strong interest in material culture, the enunciation of the concept of cultural property by the Franceschini Commission, and the success of Public History brought a change in the disciplinary statutes of the historical sciences, which began to attend to social history, focusing on spontaneous sources of information and initiating experiences of oral history. To all this was added a remarkable transformation of the themes and of the social function of museums. This paper illustrates, in relation to this more general context, the foundation and the dissemination of museums dedicated to the history of migration in Italy and in the world, enunciates their possible social utility for the integration of present migrants in Italy and illustrates, by way of example, the museum recently opened in Recanati.

  15. GOSH! A roadmap for open-source science hardware

    CERN Multimedia

    Stefania Pandolfi

    2016-01-01

    The goal of the Gathering for Open Science Hardware (GOSH! 2016), held from 2 to 5 March 2016 at IdeaSquare, was to lay the foundations of the open-source hardware for science movement.   The participants in the GOSH! 2016 meeting gathered in IdeaSquare. (Image: GOSH Community) “Despite advances in technology, many scientific innovations are held back because of a lack of affordable and customisable hardware,” says François Grey, a professor at the University of Geneva and coordinator of Citizen Cyberlab – a partnership between CERN, the UN Institute for Training and Research and the University of Geneva – which co-organised the GOSH! 2016 workshop. “This scarcity of accessible science hardware is particularly obstructive for citizen science groups and humanitarian organisations that don’t have the same economic means as a well-funded institution.” Instead, open sourcing science hardware co...

  16. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2x10^6 voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine

  17. Religion, migration og integration

    DEFF Research Database (Denmark)

    Borup, Jørn

    2010-01-01

    The connection between religion and integration has been the subject of debate in recent years. The article addresses concepts and relations related to the field (migration, diaspora, assimilation, ethnicity, culture) and looks at religion's possible role as a negative or positive resource in the integration process...

  18. Grain boundary migration

    International Nuclear Information System (INIS)

    Dimitrov, O.

    1975-01-01

    Well-established aspects of grain-boundary migration are first briefly reviewed (influences of driving force, temperature, orientation and foreign atoms). Recent developments in the experimental methods and results are then examined, by considering the various driving or resistive forces acting on grain boundaries. Finally, the evolution in the theoretical models of grain-boundary motion is described, on the one hand for ideally pure metals and, on the other hand, in the presence of solute impurity atoms [fr

  19. Ventriculoperitoneal Shunt Migration

    Directory of Open Access Journals (Sweden)

    Justin P Puller

    2017-01-01

    Full Text Available History of present illness: A 40-year-old female presented to our ED with left upper abdominal pain and flank pain. The pain had begun suddenly 2 hours prior when she was reaching into a freezer to get a bag of frozen vegetables. She described the pain as sharp, constant, severe, and worse with movements and breathing. The pain radiated to the left shoulder. On review of systems, the patient had mild dyspnea and nausea. She denied fever, chills, headache, vision changes, vomiting, or urinary symptoms. Her medical history was notable for obstructive sleep apnea, gastroesophageal reflux disease, arthritis, fibromyalgia, depression, obesity, and idiopathic intracranial hypertension. For the latter, she had a VP (ventriculoperitoneal) shunt placed 14 years prior to this visit. She had a history of 2 shunt revisions, the most recent 30 days before this ED visit. Significant findings: An immediate post-op abdominal x-ray performed after the patient’s VP shunt revision 30 days prior to this ED visit reveals the VP shunt tip in the mid abdomen. A CT of the abdomen performed on the day of the ED visit reveals the VP shunt tip interposed between the spleen and the diaphragm. Discussion: VP shunts have been reported to migrate to varied locations in the thorax and abdomen. Incidence of abdominal complications of VP shunt placement ranges from 10% to 30%, and can include pseudocyst formation, migration, peritonitis, CSF ascites, infection, and viscus perforation. Incidence of distal shunt migration is reported as 10%, and most previously reported cases occurred in pediatric patients [1]. A recent retrospective review cited BMI greater than thirty and previous shunt procedure as risk factors for distal shunt migration [2]. The patient in the case presented had a BMI of 59 and 3 previous shunt procedures.

  20. Surface migration in sorption processes

    International Nuclear Information System (INIS)

    Rasmuson, A.; Neretnieks, J.

    1983-03-01

    Diffusion rates of sorbing chemical species in granites and clays are, in several experiments within the KBS study, higher than can be explained by pore diffusion only. One possible additional transport mechanism is transport of sorbed molecules/ions along the intrapore surfaces. As a first step, a literature investigation on surface migration on solid surfaces has been conducted. A lot of experimental evidence of the mobility of the sorbed molecules has been gathered through the years, particularly for metal surfaces and chemical engineering systems. For clays, however, there are only a few articles, and for granites none. Two types of surface migration models have been proposed in the literature: i) Surface flow as a result of a gradient in spreading pressure. ii) Surface diffusion as a result of a gradient in concentration. The surface flow model has only been applied to gaseous systems. However, it should be equally applicable to liquid systems. The models i) and ii) are conceptually very different. However, the resulting expressions for surface flux are complicated and it will not be an easy task to distinguish between them. There seem to be three ways of discriminating between the transport mechanisms: a) Temperature dependence. b) Concentration dependence. c) Order of magnitude. (author)

  1. Conservation physiology of animal migration

    Science.gov (United States)

    Lennox, Robert J.; Chapman, Jacqueline M.; Souliere, Christopher M.; Tudorache, Christian; Wikelski, Martin; Metcalfe, Julian D.; Cooke, Steven J.

    2016-01-01

    Migration is a widespread phenomenon among many taxa. This complex behaviour enables animals to exploit many temporally productive and spatially discrete habitats to accrue various fitness benefits (e.g. growth, reproduction, predator avoidance). Human activities and global environmental change represent potential threats to migrating animals (from individuals to species), and research is underway to understand mechanisms that control migration and how migration responds to modern challenges. Focusing on behavioural and physiological aspects of migration can help to provide better understanding, management and conservation of migratory populations. Here, we highlight different physiological, behavioural and biomechanical aspects of animal migration that will help us to understand how migratory animals interact with current and future anthropogenic threats. We are in the early stages of a changing planet, and our understanding of how physiology is linked to the persistence of migratory animals is still developing; therefore, we regard the following questions as being central to the conservation physiology of animal migrations. Will climate change influence the energetic costs of migration? Will shifting temperatures change the annual clocks of migrating animals? Will anthropogenic influences have an effect on orientation during migration? Will increased anthropogenic alteration of migration stopover sites/migration corridors affect the stress physiology of migrating animals? Can physiological knowledge be used to identify strategies for facilitating the movement of animals? Our synthesis reveals that given the inherent challenges of migration, additional stressors derived from altered environments (e.g. climate change, physical habitat alteration, light pollution) or interaction with human infrastructure (e.g. wind or hydrokinetic turbines, dams) or activities (e.g. fisheries) could lead to long-term changes to migratory phenotypes. However, uncertainty remains

  2. Many Faces of Migrations

    Directory of Open Access Journals (Sweden)

    Milica Antić Gaber

    2013-12-01

    We believe that in the present thematic issue we have succeeded in capturing an important part of the modern European research dynamic in the field of migration. In addition to well-known scholars in this field, several young authors at the beginning of their research careers have been shortlisted for the publication. We are glad of their success, as it bodes well for the vibrancy of this research area in the future. At the same time, we were pleased to receive responses to the invitation from representatives of so many disciplines, and that the number of papers received significantly exceeded the maximum volume of the journal. Recognising and understanding the many faces of migration are important steps towards the comprehensive knowledge needed to successfully meet the challenges of migration issues today and even more so in the future. It is therefore of utmost importance that researchers find ways of transferring their academic knowledge into practice – to all levels of education, the media, the wider public and, of course, the decision makers in local, national and international institutions. The call also applies to all authors in this issue of the journal.

  3. Migration Process Evolution in Europe

    Directory of Open Access Journals (Sweden)

    Carmen Tudorache

    2006-08-01

    Full Text Available The migration phenomenon has always existed, fluctuating with the historical context, the economic, political, social and demographic disparities between the Central and East European countries and the EU Member States, the interdependencies between the origin and receiving countries, and the evolution of the European integration process. In the European Union, an integrated and inclusive approach to the migration issue is necessary. But a common policy on migration remains an ambitious objective. A common approach to the management of economic migration and the harmonization of the migration policies of the Member States represented a challenge for the European Union and will become urgent in the future, especially due to demographic ageing.

  4. Migrations in Slovenian geography textbooks

    Directory of Open Access Journals (Sweden)

    Jurij Senegačnik

    2013-12-01

    Full Text Available In Slovenia, migration is treated in almost all geography textbooks for different levels of education. In the textbooks for the elementary school from the sixth to the ninth grade, students acquire knowledge of migration through an inductive approach. The difficulty level of treatment and the quantity of information increase with the age level. In the grammar school program, the path to knowledge on migration is deductive. Most attention is dedicated to migration in general geography textbooks. The textbooks for vocational and technical school programs deal with migration to a lesser extent and with different approaches.

  5. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-05-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally implementing chaotic systems. Subsequently, some techniques to eliminate such defects, including the ultimately adopted scheme, are listed and explained in detail. Moreover, the thesis describes original work on the design of an encryption system to encrypt MPEG-2 video streams. Information about the MPEG-2 standard that fits this design context is presented. Then, the security of the proposed system is exhaustively analyzed and the performance is compared with other reported systems, showing superiority in performance and security. The thesis focuses more on the hardware and the circuit aspect of the system’s design. The system is realized on a Xilinx Virtex-4 FPGA with hardware parameters and throughput performance surpassing conventional encryption systems.
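    The thesis itself implements continuous chaotic systems on an FPGA; purely as a hedged, software-level illustration of the underlying idea, the sketch below derives a keystream from a discrete chaotic map (the logistic map) and XORs it with the payload, so that applying the same operation with the same key parameters recovers the data. The initial condition, map parameter and payload bytes are arbitrary examples.

    ```python
    def chaos_keystream(x0, r, n):
        """Generate n keystream bytes by iterating the logistic map and
        quantizing each state to 8 bits (a stand-in for the continuous
        chaotic generators realized in hardware in the thesis)."""
        x, out = x0, bytearray()
        for _ in range(n):
            x = r * x * (1.0 - x)            # chaotic iteration
            out.append(int(x * 256) & 0xFF)  # crude quantization of the state
        return bytes(out)

    def xor_stream(data, key):
        """Symmetric stream cipher: ciphertext = plaintext XOR keystream."""
        return bytes(d ^ k for d, k in zip(data, key))

    frame = b"example MPEG-2 payload bytes"          # placeholder, not a real stream
    ks = chaos_keystream(x0=0.3141, r=3.99, n=len(frame))
    cipher = xor_stream(frame, ks)
    assert xor_stream(cipher, ks) == frame           # same parameters decrypt the frame
    ```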

  6. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  7. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2017-01-01

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of an improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed, yielding closed form expressions for both PGS and IGS based transmission schemes. HWD systems that employ IGS are shown to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Eventually, the degradation due to non-ideal hardware transceivers and the compensation acquired by the IGS scheme are quantified through suitable numerical results.
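    The closed-form outage expressions in the paper are specific to its hardware-distortion and improper-Gaussian-signaling model; as a generic, hedged illustration of how such expressions are typically validated, the sketch below estimates the outage probability of selection combining over Rayleigh fading by Monte-Carlo simulation. The SNR figures are arbitrary and the HWD/IGS aspects are deliberately omitted.

    ```python
    import random

    def outage_probability(n_branches, snr_avg_db, snr_th_db, trials=200_000):
        """Monte-Carlo outage estimate for selection combining over Rayleigh
        fading: per-branch instantaneous SNR is exponentially distributed."""
        snr_avg = 10 ** (snr_avg_db / 10)
        snr_th = 10 ** (snr_th_db / 10)
        outages = 0
        for _ in range(trials):
            best = max(random.expovariate(1 / snr_avg) for _ in range(n_branches))
            outages += best < snr_th
        return outages / trials

    for branches in (1, 2, 4):
        print(branches, "branch(es):", outage_probability(branches, snr_avg_db=10, snr_th_db=5))
    ```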

  8. Hardware controls for the STAR experiment at RHIC

    International Nuclear Information System (INIS)

    Reichhold, D.; Bieser, F.; Bordua, M.; Cherney, M.; Chrin, J.; Dunlop, J.C.; Ferguson, M.I.; Ghazikhanian, V.; Gross, J.; Harper, G.; Howe, M.; Jacobson, S.; Klein, S.R.; Kravtsov, P.; Lewis, S.; Lin, J.; Lionberger, C.; LoCurto, G.; McParland, C.; McShane, T.; Meier, J.; Sakrejda, I.; Sandler, Z.; Schambach, J.; Shi, Y.; Willson, R.; Yamamoto, E.; Zhang, W.

    2003-01-01

    The STAR detector sits in a high radiation area when operating normally; therefore it was necessary to develop a robust system to remotely control all hardware. The STAR hardware controls system monitors and controls approximately 14,000 parameters in the STAR detector. Voltages, currents, temperatures, and other parameters are monitored. Effort has been minimized by the adoption of experiment-wide standards and the use of pre-packaged software tools. The system is based on the Experimental Physics and Industrial Control System (EPICS) . VME processors communicate with subsystem-based sensors over a variety of field busses, with High-level Data Link Control (HDLC) being the most prevalent. Other features of the system include interfaces to accelerator and magnet control systems, a web-based archiver, and C++-based communication between STAR online, run control and hardware controls and their associated databases. The system has been designed for easy expansion as new detector elements are installed in STAR
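    As a hedged sketch of how a client might read and monitor such parameters over EPICS Channel Access, the snippet below uses the pyepics library; the process-variable names are hypothetical placeholders, not STAR's actual PV names, and a reachable IOC is assumed.

    ```python
    from epics import caget, camonitor   # pyepics Channel Access client

    # Hypothetical PV names used only for illustration.
    PVS = ["STAR:TPC:HV:Voltage", "STAR:TPC:HV:Current", "STAR:FTPC:Temperature"]

    for pv in PVS:
        print(pv, "=", caget(pv))        # one-shot synchronous read

    def on_change(pvname=None, value=None, **kw):
        """Callback invoked by pyepics whenever a monitored PV updates."""
        print(f"{pvname} changed to {value}")

    for pv in PVS:
        camonitor(pv, callback=on_change)  # subscribe to asynchronous updates
    ```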

  9. Plutonium Protection System (PPS). Volume 2. Hardware description. Final report

    International Nuclear Information System (INIS)

    Miyoshi, D.S.

    1979-05-01

    The Plutonium Protection System (PPS) is an integrated safeguards system developed by Sandia Laboratories for the Department of Energy, Office of Safeguards and Security. The system is designed to demonstrate and test concepts for the improved safeguarding of plutonium. Volume 2 of the PPS final report describes the hardware elements of the system. The major areas containing hardware elements are the vault, where plutonium is stored, the packaging room, where plutonium is packaged into Container Modules, the Security Operations Center, which controls movement of personnel, the Material Accountability Center, which maintains the system data base, and the Material Operations Center, which monitors the operating procedures in the system. References are made to documents in which details of the hardware items can be found

  10. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah

    2017-02-22

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of an improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed, yielding closed form expressions for both PGS and IGS based transmission schemes. HWD systems that employ IGS are shown to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Eventually, the degradation due to non-ideal hardware transceivers and the compensation acquired by the IGS scheme are quantified through suitable numerical results.

  11. Optimized hardware design for the divertor remote handling control system

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, Hannu [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland)], E-mail: hannu.saarinen@tut.fi; Tiitinen, Juha; Aha, Liisa; Muhammad, Ali; Mattila, Jouni; Siuko, Mikko; Vilenius, Matti [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland); Jaervenpaeae, Jorma [VTT Systems Engineering, Tekniikankatu 1, 33720 Tampere (Finland); Irving, Mike; Damiani, Carlo; Semeraro, Luigi [Fusion for Energy, Josep Pla 2, Torres Diagonal Litoral B3, 08019 Barcelona (Spain)

    2009-06-15

    A key ITER maintenance activity is the exchange of the divertor cassettes. One of the major focuses of the EU Remote Handling (RH) programme has been the study and development of the remote handling equipment necessary for divertor exchange. The current major step in this programme involves the construction of a full scale physical test facility, namely DTP2 (Divertor Test Platform 2), in which to demonstrate and refine the RH equipment designs for ITER using prototypes. The major objective of the DTP2 project is the proof-of-concept studies of various RH devices, but it is also important to define principles for standardizing control hardware and methods around the ITER maintenance equipment. This paper focuses on describing the control system hardware design optimization that is taking place at DTP2. Here there will be two RH movers, namely the Cassette Multifunctional Mover (CMM) and the Cassette Toroidal Mover (CTM), with assisting water hydraulic force feedback manipulators (WHMAN) located aboard each Mover. The idea here is to use common Real Time Operating Systems (RTOS), measurement and control IO-cards etc. for all maintenance devices and to standardize sensors and control components as much as possible. In this paper, the new optimized DTP2 control system hardware design and some initial experimentation with the new DTP2 RH control system platform are presented. The proposed new approach is able to fulfil the functional requirements for both Mover and Manipulator control systems. Since the new control system hardware design has a reduced architecture, there are a number of benefits compared to the old approach. The simplified hardware solution enables the use of a single software development environment and a single communication protocol. This will result in easier maintainability of the software and hardware, less dependence on trained personnel, easier training of operators and hence reduced development costs of ITER RH.

  12. Electrical, electronics, and digital hardware essentials for scientists and engineers

    CERN Document Server

    Lipiansky, Ed

    2012-01-01

    A practical guide for solving real-world circuit board problems Electrical, Electronics, and Digital Hardware Essentials for Scientists and Engineers arms engineers with the tools they need to test, evaluate, and solve circuit board problems. It explores a wide range of circuit analysis topics, supplementing the material with detailed circuit examples and extensive illustrations. The pros and cons of various methods of analysis, fundamental applications of electronic hardware, and issues in logic design are also thoroughly examined. The author draws on more than tw

  13. Automating an EXAFS facility: hardware and software considerations

    International Nuclear Information System (INIS)

    Georgopoulos, P.; Sayers, D.E.; Bunker, B.; Elam, T.; Grote, W.A.

    1981-01-01

    The basic design considerations for computer hardware and software, applicable not only to laboratory EXAFS facilities, but also to synchrotron installations, are reviewed. Uniformity and standardization of both hardware configurations and program packages for data collection and analysis are heavily emphasized. Specific recommendations are made with respect to choice of computers, peripherals, and interfaces, and guidelines for the development of software packages are set forth. A description of two working computer-interfaced EXAFS facilities is presented which can serve as prototypes for future developments. 3 figures

  14. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  15. Hardware Evaluation of the Horizontal Exercise Fixture with Weight Stack

    Science.gov (United States)

    Newby, Nate; Leach, Mark; Fincke, Renita; Sharp, Carwyn

    2009-01-01

    HEF with weight stack seems to be a very sturdy and reliable exercise device that should function well in a bed rest training setting. A few improvements should be made to both the hardware and software to improve usage efficiency, but largely, this evaluation has demonstrated HEF's robustness. The hardware offers loading to muscles, bones, and joints, potentially sufficient to mitigate the loss of muscle mass and bone mineral density during long-duration bed rest campaigns. With some minor modifications, the HEF with weight stack equipment provides the best currently available means of performing squat, heel raise, prone row, bench press, and hip flexion/extension exercise in a supine orientation.

  16. Computer organization and design the hardware/software interface

    CERN Document Server

    Hennessy, John L

    1994-01-01

    Computer Organization and Design: The Hardware/Software Interface presents the interaction between hardware and software at a variety of levels, which offers a framework for understanding the fundamentals of computing. This book focuses on the concepts that are the basis for computers.Organized into nine chapters, this book begins with an overview of the computer revolution. This text then explains the concepts and algorithms used in modern computer arithmetic. Other chapters consider the abstractions and concepts in memory hierarchies by starting with the simplest possible cache. This book di

  17. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-01-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  18. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-05-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  19. Integrated circuit authentication hardware Trojans and counterfeit detection

    CERN Document Server

    Tehranipoor, Mohammad; Zhang, Xuehui

    2013-01-01

    This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions. 

  20. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
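    The paper's algorithm itself is not reproduced here; as a much simpler, hedged illustration of software clock-offset estimation from message exchanges, the snippet below applies a Cristian-style half-round-trip estimate with made-up timestamps. The accuracy of such an estimate depends directly on the variation of the message transit delay, which is exactly the sensitivity the proposed scheme is designed to remove.

    ```python
    def estimate_offset(local_send, remote_time, local_recv):
        """Cristian-style estimate of (remote clock - local clock), assuming the
        remote timestamp was taken halfway through the round trip."""
        round_trip = local_recv - local_send
        return remote_time + round_trip / 2 - local_recv

    # Illustrative timestamps in seconds; a real system would capture them in an
    # interrupt handler or, as the paper suggests, with minimal hardware assist.
    print(estimate_offset(local_send=10.000, remote_time=10.004, local_recv=10.006))  # ~0.001 s
    ```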

  1. SiMA: A simplified migration assay for analyzing neutrophil migration.

    Science.gov (United States)

    Weckmann, Markus; Becker, Tim; Nissen, Gyde; Pech, Martin; Kopp, Matthias V

    2017-07-01

    In lung inflammation, neutrophils are the first leukocytes migrating to an inflammatory site, eliminating pathogens by multiple mechanisms. The term "migration" describes several stages of neutrophil movement to reach the site of inflammation, of which the passage of the interstitium and basal membrane of the airway are necessary to reach the site of bronchial inflammation. Currently, several methods exist (e.g., Boyden Chamber, under-agarose assay, or microfluidic systems) to assess neutrophil mobility. However, these methods do not allow for parameterization on the single cell level, that is, individual neutrophil pathway analysis is still considered challenging. This study sought to develop a simplified yet flexible method to monitor and quantify neutrophil chemotaxis by utilizing commercially available tissue culture hardware, simple video microscopic equipment and highly standardized tracking. A chemotaxis 3D µ-slide (IBIDI) was used with different chemoattractants [interleukin-8 (IL-8), fMLP, and Leukotriene B4 (LTB4)] to attract neutrophils in different matrices like Fibronectin (FN) or human placental matrix. Migration was recorded for 60 min using phase contrast microscopy with an EVOS® FL Cell Imaging System. The images were normalized and texture-based image segmentation was used to generate neutrophil trajectories. Based on this spatio-temporal information, a comprehensive parameter set is extracted from each time series describing the neutrophils' motility, including velocity, directness and neutrophil chemotaxis. To characterize the latter, a sector analysis was employed, enabling the quantification of the neutrophils' response to the chemoattractant. Using this hardware and software framework, we were able to identify typical migration profiles of the chemoattractants IL-8, fMLP, and LTB4, the effect of the matrices FN versus HEM as well as the response to different medications (Prednisolone). Additionally, a comparison of four asthmatic and
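    Of the per-cell parameters mentioned above, velocity and directness are the simplest to derive from a trajectory. The hedged sketch below computes both from a list of tracked positions; the coordinates and sampling interval are invented, and the actual SiMA pipeline additionally performs the texture-based segmentation and sector analysis described in the abstract.

    ```python
    import math

    def track_metrics(track, dt_minutes):
        """Velocity and directness for one cell trajectory.
        `track` is a list of (x, y) positions sampled every `dt_minutes`."""
        path = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
        displacement = math.dist(track[0], track[-1])
        duration = dt_minutes * (len(track) - 1)
        velocity = path / duration                         # mean speed along the path
        directness = displacement / path if path else 0.0  # 1.0 = perfectly straight run
        return velocity, directness

    track = [(0, 0), (3, 1), (5, 4), (9, 5), (12, 9)]      # made-up positions in micrometres
    print(track_metrics(track, dt_minutes=1.0))
    ```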

  2. Real-time scheduling of software tasks

    International Nuclear Information System (INIS)

    Hoff, L.T.

    1995-01-01

    When designing real-time systems, it is often desirable to schedule execution of software tasks based on the occurrence of events. The events may be clock ticks, interrupts from a hardware device, or software signals from other software tasks. If the nature of the events is well understood, this scheduling is normally a static part of the system design. If the nature of the events is not completely understood, or is expected to change over time, it may be necessary to provide a mechanism for adjusting the scheduling of the software tasks. RHIC front-end computers (FECs) provide such a mechanism. The goals in designing this mechanism were to be as independent as possible of the underlying operating system, to allow for future expansion of the mechanism to handle new types of events, and to allow easy configuration. Some considerations which steered the design were the programming paradigm (object oriented vs. procedural), the programming language, and whether events are merely interesting moments in time, or whether they intrinsically have data associated with them. The design also needed to address performance and robustness tradeoffs involving shared task contexts, task priorities, and use of interrupt service routine (ISR) contexts vs. task contexts. This paper will explore these considerations and tradeoffs
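    As a hedged, operating-system-independent illustration of the kind of adjustable event-to-task binding described above, the sketch below keeps a run-time table mapping event names to task callables and dispatches queued events to whatever tasks are currently bound; the event names and tasks are invented, and a real FEC would dispatch from ISR or task context rather than a Python thread.

    ```python
    import queue, threading, time

    bindings = {}                      # event name -> list of task callables

    def bind(event, task):
        """Adjust scheduling at run time by (re)binding a task to an event."""
        bindings.setdefault(event, []).append(task)

    events = queue.Queue()

    def dispatcher():
        while True:
            name, payload = events.get()            # block until an event arrives
            for task in bindings.get(name, []):
                task(payload)                       # run every task bound to it

    bind("clock_tick", lambda p: print("housekeeping at", p))
    bind("hw_interrupt", lambda p: print("readout triggered by", p))

    threading.Thread(target=dispatcher, daemon=True).start()
    events.put(("clock_tick", time.time()))
    events.put(("hw_interrupt", "ADC channel 3"))
    time.sleep(0.1)                                 # give the dispatcher time to drain the queue
    ```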

  3. Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.

    Science.gov (United States)

    Fukurai, H

    1991-01-01

    This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.

  4. Labour migration from Turkey.

    Science.gov (United States)

    Uner, S

    1988-01-01

    This study is concerned with Turkish labor migration to Western Europe. Earlier and recent patterns of labor migration, characteristics of migrants by occupation, area of destination, and geographical origins are discussed. Economic and demographic consequences of labor migration are also analyzed. It is estimated that Turkey's population will reach 73 million by the year 2000 at the present growth rate of 2.48% annually. Considering the efforts made to slow down the present high fertility rates and assuming that the decrease in labor force participation during 1970-1980 continues, the author concludes that the labor supply will increase at a growth rate of 2% annually for the next 13-15 years. Thus, the labor supply will reach 26.6 million people in the year 2000 from the 1980 level of 17.8 million. Assuming also that the income/employment elasticity of .25 which was observed throughout the period of 1960-1980 will not change until 2000, the annual growth rate of employment may be estimated as 1.5%. Thus, the number of people employed will reach 20 million in the year 1990 and 23.2 million in the year 2000. 8.8 million people will join the labor market as new entrants between 1980 and 2000. Only 6 million people out of 8.8 million will be employed. Thus, in the year 2000, it is estimated that 2.8 million new unemployed people will be added to the already existing open unemployment. The 1980 census data give the number of unemployed as 0.6 million people. Adding the 2.8 million new unemployed to this figure gives a total of 3.4 million unemployed in 2000. The State Planning Organization's estimate of labor surplus for 1980 was 2.5 million people. When 2.8 million unemployed people are added to this figure, the labor surplus for the year 2000 reaches 5.3 million people.
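    The projections quoted above are straightforward compound-growth arithmetic; the short check below, using only figures stated in the abstract, reproduces them to within rounding.

    ```python
    def project(start_millions, annual_rate, years):
        """Compound growth: size after `years` at a constant annual rate."""
        return start_millions * (1 + annual_rate) ** years

    # Labour supply growing 2% per year from 17.8 million in 1980:
    print(round(project(17.8, 0.02, 20), 1))   # -> 26.4, close to the 26.6 million quoted for 2000
    # Employment growing 1.5% per year from 20 million in 1990:
    print(round(project(20.0, 0.015, 10), 1))  # -> 23.2 million in 2000, as stated above
    ```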

  5. ILO - International Migration Programme.

    Science.gov (United States)

    Boudraa, Miriam

    2011-01-01

    In a wide international context characterised not only by economic development but also by social, cultural, political and individual development, we witness more and more an exchange between the developed and the developing countries, which translates especially into the migration of the work force. In theory, all countries are either countries of origin or countries of transit or destination, and they are all responsible for the rights of migrant workers: by promoting those rights, by monitoring, and by preventing abusive conditions. The process of migration of the workforce can be divided into three stages: the first coincides with the period prior to departure, the second is represented by the aftermath of the departure and the period of stay in the country of destination, and the third stage corresponds to the return to the country of origin. The workers must be protected throughout this process by the international organizations that perform the catalytic role of communication and exchange between countries, for the sole purpose of protecting the rights of immigrant workers. The responsibility for the protection of workers is divided among the various players in the International Labour Organisation. Every country has to apply measures according to the international standards regarding workers' rights, standards that guide the various countries in the formulation and implementation of their policies and legislation. These standards are suggested by international conventions, the ILO Conventions and other international instruments such as the human rights instruments. A big step forward was taken once the ILO Fundamental Conventions and the Conventions on Migrant Workers were implemented, an implementation reflected in the use of the Guidelines of the "ILO Multilateral Framework on Labour Migration".

  6. Urbanization, Migration, Information

    Directory of Open Access Journals (Sweden)

    Konstantin Lidin

    2016-05-01

    Full Text Available In the contemporary world, urbanization is becoming a large-scale process. Huge flows of people migrate from poorer districts to cities with a higher level of consumption. It takes migrants about 15-25 years to give up their traditional ascetic way of life. In this period the ‘new citizens’ try to arrange compact settlements with an archaic way of life, unsanitary conditions, high criminogenicity and an authoritative local self-government. The processes of formation and decay of the ascetic enclave are viewed through the example of the ‘Shanghai’ trading neighborhood in Irkutsk.

  7. Neuronal Migration and Neuronal Migration Disorder in Cerebral Cortex

    OpenAIRE

    SUN, Xue-Zhi; TAKAHASHI, Sentaro; GUI, Chun; ZHANG, Rui; KOGA, Kazuo; NOUYE, Minoru; MURATA, Yoshiharu

    2002-01-01

    Neuronal cell migration is one of the most significant features during cortical development. After the final mitosis, neurons migrate from the ventricular zone into the cortical plate, and then establish the neuronal lamina and settle onto the outermost layer, forming an "inside-out" gradient of maturation. Neuronal migration is guided by radial glial fibers and also needs proper receptors, ligands, and other unknown extracellular factors, and requires local signaling (e.g. some emitted by the Cajal-Retz

  8. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    Full Text Available This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
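    As a hedged sketch of the descriptor idea, the snippet below walks a tiny binary vocabulary tree: at each internal node the Hamming distance between the binary-quantized feature region and that node's basis dictionary image selects the branch, and the sequence of branch choices is saved as the descriptor. The tree depth, 8-bit patterns and threshold rule are invented for illustration and simplify the branch decision actually used by TreeBASIS.

    ```python
    def hamming(a, b):
        """Hamming distance between two equal-length bit patterns."""
        return bin(a ^ b).count("1")

    class Node:
        def __init__(self, basis_image, left=None, right=None):
            self.basis_image = basis_image       # binary-quantized basis dictionary image
            self.left, self.right = left, right

    def describe(region_bits, node, threshold=3):
        """Descend the vocabulary tree, recording each branch decision."""
        path = []
        while node.left and node.right:
            go_left = hamming(region_bits, node.basis_image) <= threshold
            path.append(0 if go_left else 1)
            node = node.left if go_left else node.right
        return path

    # Tiny 8-bit example tree with made-up basis patterns.
    tree = Node(0b11110000,
                left=Node(0b11000000, left=Node(0b10000000), right=Node(0b01000000)),
                right=Node(0b00111100, left=Node(0b00100000), right=Node(0b00011000)))
    print(describe(0b11100001, tree))            # [0, 0] -> a two-bit descriptor
    ```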

  9. Hardware Algorithms For Tile-Based Real-Time Rendering

    NARCIS (Netherlands)

    Crisu, D.

    2012-01-01

    In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance

  10. Hardware and software techniques for boiler operation and management

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Hiroshi (Hirakawa Iron Works, Ltd., Osaka (Japan))

    1989-04-01

    A study was conducted on the requirements for an easily operable boiler from the viewpoints of hardware and software technologies. The relations among efficiency, energy saving and economics, as well as the control of total emissions with regard to low-NOx operation, were explained, with suggestions on the direction for developing the necessary hardware and software for their realization. 8 figs.

  11. PACE: A dynamic programming algorithm for hardware/software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper presents the PACE partitioning algorithm which is used in the LYCOS co-synthesis system for partitioning control/dataflow graphs into hardware and software parts. The algorithm is a dynamic programming algorithm which solves both the problem of minimizing system execution time...
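    PACE itself handles control/dataflow-graph structure and communication costs; as a hedged, much-reduced illustration of the dynamic-programming flavour of hardware/software partitioning, the sketch below chooses, for each block, a software or hardware implementation so that total execution time is minimized under a hardware area budget. All block times and areas are invented.

    ```python
    def partition(blocks, area_budget):
        """blocks: list of (sw_time, hw_time, hw_area) triples.
        Returns the minimum total execution time achievable without exceeding
        the hardware area budget (knapsack-style dynamic program)."""
        INF = float("inf")
        best = [INF] * (area_budget + 1)       # best[a] = minimum time using a units of area
        best[0] = 0.0
        for sw_t, hw_t, area in blocks:
            new = [INF] * (area_budget + 1)
            for a in range(area_budget + 1):
                if best[a] == INF:
                    continue
                new[a] = min(new[a], best[a] + sw_t)            # keep the block in software
                if a + area <= area_budget:                     # or move it into hardware
                    new[a + area] = min(new[a + area], best[a] + hw_t)
            best = new
        return min(best)

    blocks = [(9.0, 2.0, 30), (4.0, 1.0, 25), (7.0, 3.0, 40), (2.0, 1.5, 10)]
    print(partition(blocks, area_budget=60))   # -> 12.0: the two most profitable blocks go to hardware
    ```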

  12. A selective logging mechanism for hardware transactional memory systems

    OpenAIRE

    Lupon Navazo, Marc; Magklis, Grigorios; González Colás, Antonio María

    2011-01-01

    Log-based Hardware Transactional Memory (HTM) systems offer an elegant solution to handle speculative data that overflow transactional L1 caches. By keeping the pre-transactional values on a software-resident log, speculative values can be safely moved across the memory hierarchy, without requiring expensive searches on L1 misses or commits.
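    As a hedged, purely conceptual illustration of the logging idea (real log-based HTM does this in hardware at cache-line granularity), the sketch below records each pre-transactional value in a per-transaction log before a speculative write, so that an abort can restore memory and a commit can simply discard the log.

    ```python
    memory = {"A": 1, "B": 2}                       # toy stand-in for main memory

    class Transaction:
        def __init__(self):
            self.log = []                           # (address, pre-transactional value) pairs

        def write(self, addr, value):
            self.log.append((addr, memory[addr]))   # save the old value before overwriting
            memory[addr] = value                    # speculative update in place

        def abort(self):
            for addr, old in reversed(self.log):    # roll back in reverse order
                memory[addr] = old
            self.log.clear()

        def commit(self):
            self.log.clear()                        # speculative values become permanent

    tx = Transaction()
    tx.write("A", 10)
    tx.write("B", 20)
    tx.abort()
    print(memory)                                   # {'A': 1, 'B': 2}: the log undid both writes
    ```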

  13. Hardware, Languages, and Architectures for Defense Against Hostile Operating Systems

    Science.gov (United States)

    2015-05-14

    complex instruction sets. The scale of this problem is multiplied by the diversity of hardware platforms in deployment today. We developed a novel approach... (see www.seclab.cs.sunysb.edu/seclab/lbc/). Professor King has been invited to and has given lectures at the NSA, Sandia, DARPA, Intel, Microsoft, and Samsung

  14. Hardware prototype with component specification and usage description

    NARCIS (Netherlands)

    Azam, Tre; Aswat, Soyeb; Klemke, Roland; Sharma, Puneet; Wild, Fridolin

    2017-01-01

    Following on from D3.1 and the final selection of sensors, in this D3.2 report we present the first version of the experience capturing hardware prototype design and API architecture taking into account the current limitations of the Hololens not being available until early next month in time for

  15. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    Full Text Available This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining a high classification correct rate and high-speed computation.
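    As a hedged software sketch of the feature-extraction half of this design, the snippet below runs the generalized Hebbian algorithm (Sanger's rule) over randomly generated spike snippets to learn a few principal components, whose projections would then be handed to FCM clustering; the data, dimensions and learning rate are illustrative and the hardware pipelining of the paper is not modelled.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    spikes = rng.normal(size=(1000, 32))                  # 1000 aligned spike snippets, 32 samples each
    spikes -= spikes.mean(axis=0)

    n_components, eta = 3, 1e-3
    W = rng.normal(scale=0.1, size=(n_components, 32))    # rows converge towards principal components

    for x in spikes:
        y = W @ x
        # Sanger's rule: Hebbian term minus lower-triangular decorrelation term.
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

    features = spikes @ W.T                               # per-spike feature vectors for FCM clustering
    print(features.shape)                                 # (1000, 3)
    ```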

  16. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by a field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation. PMID:24189331

  17. Another way of doing RSA cryptography in hardware

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Honary, B.

    2001-01-01

    In this paper we describe an efficient and secure hardware implementation of the RSA cryptosystem. Modular exponentiation is based on Montgomery’s method without any modular reduction achieving the optimal bound. The presented systolic array architecture is scalable in several parameters, which makes

  18. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

  19. Hardware Descriptive Languages: An Efficient Approach to Device ...

    African Journals Online (AJOL)

    Contemporarily, owing to astronomical advancements in the very large scale integration (VLSI) market segments, hardware engineers are now focusing on how to develop their new digital system designs in programmable languages like very high speed integrated circuit hardware description language (VHDL) and Verilog ...

  20. Detecting System of Nested Hardware Virtual Machine Monitor

    Directory of Open Access Journals (Sweden)

    Artem Vladimirovich Iuzbashev

    2015-03-01

    Full Text Available A method for detecting a nested hardware virtual machine monitor is proposed in this work. The method is based on an HVM timing attack: when an HVM is present in the system, the number of distinct execution-time values observed for the same instruction sequences increases. We used this property as the indicator in our detection.
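
    The indicator described above can be sketched in a few lines: time one operation repeatedly and look at how many distinct duration values occur. The Python fragment below is only an illustration of that idea; the timed workload, repetition count and the comparison against a clean baseline are placeholders, not the instruction sequences of the cited work.

```python
import time

def timing_spread(op, repetitions=10_000):
    """Time one operation many times and count how many distinct
    duration values (in nanoseconds) occur - a crude spread indicator."""
    samples = []
    for _ in range(repetitions):
        t0 = time.perf_counter_ns()
        op()
        samples.append(time.perf_counter_ns() - t0)
    return len(set(samples)), samples

if __name__ == "__main__":
    # Placeholder workload standing in for the timed instruction sequence.
    distinct, _ = timing_spread(lambda: sum(range(100)))
    print(f"distinct timing values observed: {distinct}")
    # In the paper's setting, a markedly larger spread than a known-clean
    # baseline would be taken as a hint that a nested HVM is present.
```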

  1. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    Full text: The timing factor is very important for medical imaging systems, which can nowadays be synchronized by vital human signals, like heartbeats or breathing. The use of hardware-implemented devices in such a system has advantages, considering the high speed of information processing combined with arbitrarily low cost on the market. This article refers to a hardware system based on electronic programmable logic (an FPGA, model Cyclone II from ALTERA Corporation). The hardware was implemented on the UP3 ALTERA kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points each, of the Shepp and Logan phantom created in MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0-65535. Also, the normalization factor must be observed in order not to saturate the image during the reconstruction and filtering process. The test demonstrates that it is possible in principle to build CT image reconstruction systems for any reasonable amount of input data by arranging hardware units to work in parallel as tested here. However, further studies are necessary for a better understanding of the error propagation from the tomographic projections to the reconstructed image within the implemented method. (author)

  2. Lab at Home: Hardware Kits for a Digital Design Lab

    Science.gov (United States)

    Oliver, J. P.; Haim, F.

    2009-01-01

    An innovative laboratory methodology for an introductory digital design course is presented. Instead of having traditional lab experiences, where students have to come to school classrooms, a "lab at home" concept is proposed. Students perform real experiments in their own homes, using hardware kits specially developed for this purpose. They…

  3. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  4. Enabling Self-Organization in Embedded Systems with Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Christophe Bobda

    2009-01-01

    Full Text Available We present a methodology based on self-organization to manage resources in networked embedded systems based on reconfigurable hardware. Two points are detailed in this paper, the monitoring system used to analyse the system and the Local Marketplaces Global Symbiosis (LMGS) concept defined for self-organization of dynamically reconfigurable nodes.

  5. Generalized Distance Transforms and Skeletons in Graphics Hardware

    NARCIS (Netherlands)

    Strzodka, R.; Telea, A.

    2004-01-01

    We present a framework for computing generalized distance transforms and skeletons of two-dimensional objects using graphics hardware. Our method is based on the concept of footprint splatting. Combining different splats produces weighted distance transforms for different metrics, as well as the

  6. 3D IBFV : hardware-accelerated 3D flow visualization

    NARCIS (Netherlands)

    Telea, A.C.; Wijk, van J.J.

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique presented by van Wijk (2001) for 2D flow visualization in two main directions. First, we decompose the 3D

  7. Smart Home Hardware-in-the-Loop Testing

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Annabelle

    2017-07-12

    This presentation provides a high-level overview of NREL's smart home hardware-in-the-loop testing. It was presented at the Fourth International Workshop on Grid Simulator Testing of Energy Systems and Wind Turbine Powertrains, held April 25-26, 2017, hosted by NREL and Clemson University at the Energy Systems Integration Facility in Golden, Colorado.

  8. Motion compensation in digital subtraction angiography using graphics hardware.

    Science.gov (United States)

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique. Automated, fast and accurate motion compensation is therefore required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to have mapped an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that the computation with integer precision could already be sufficient.
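
    For reference, the block-matching step can be sketched as an exhaustive search minimising a sum of absolute differences (SAD); the cited work instead evaluates a histogram-based measure on graphics hardware, and neither that measure nor the optimised search strategy is reproduced in this minimal NumPy version with arbitrary block size and search range.

```python
import numpy as np

def match_block(ref, cur, top, left, block=16, search=8):
    """Find the displacement of one block by exhaustive SAD search.

    ref, cur : 2-D grayscale frames (reference and current).
    top,left : upper-left corner of the block in the current frame.
    Returns (dy, dx) minimizing the sum of absolute differences.
    """
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(cand - target).sum()
            if best is None or sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    shifted = np.roll(frame, shift=(2, -3), axis=(0, 1))   # simulate motion
    print(match_block(frame, shifted, top=24, left=24))    # expect (-2, 3)
```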

  9. Hardware availability calculations and results of the IFMIF accelerator facility

    International Nuclear Information System (INIS)

    Bargalló, Enric; Arroyo, Jose Manuel; Abal, Javier; Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne; Weber, Moisés; Podadera, Ivan; Grespan, Francesco; Fagotti, Enrico; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design

  10. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.

  11. Combining hardware and simulation for datacenter scaling studies

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Pilimon, Artur; Thrane, Jakob

    2017-01-01

    and simulation to illustrate the scalability and performance of datacenter networks. We simulate a Datacenter network and interconnect it with real world traffic generation hardware. Analysis of the introduced packet conversion and virtual queueing delays shows that the conversion efficiency is at the order...

  12. Hiding State in CλaSH Hardware Descriptions

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus; Baaij, C.P.R.; Kuper, Jan; Kooijman, Matthijs

    Synchronous hardware can be modelled as a mapping from input and state to output and a new state; such mappings are referred to as transition functions. It is natural to use a functional language to implement transition functions. The CλaSH compiler is capable of translating transition functions to

  13. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing state. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  14. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially-designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around a 600-fold speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.
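
    The "image warping with summing" formulation can be made concrete with a plain CPU backprojection loop. The NumPy sketch below accumulates unfiltered projections over the image plane; it omits the ramp filtering and, of course, the texture-mapping acceleration that the cited work is about, and the single-ray sinogram is a made-up example.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Accumulate projections back over the image plane (no ramp filter)."""
    recon = np.zeros((size, size))
    # Pixel-centre coordinates, image centred at the origin.
    coords = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(coords, coords)
    offset = (sinogram.shape[1] - 1) / 2.0
    for proj, angle in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this view ("image warp").
        t = xx * np.cos(angle) + yy * np.sin(angle) + offset
        idx = np.clip(np.round(t).astype(int), 0, sinogram.shape[1] - 1)
        recon += proj[idx]                      # the summing step
    return recon / len(angles_deg)

if __name__ == "__main__":
    # Tiny made-up sinogram: 60 views of a single bright detector bin.
    views, detectors = 60, 65
    sino = np.zeros((views, detectors))
    sino[:, detectors // 2] = 1.0
    img = backproject(sino, np.linspace(0, 180, views, endpoint=False), size=65)
    print(img.shape, float(img.max()))
```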

  15. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for the hardware realization of space vector modulation (SVM) of state-function switching in a matrix converter (MC), oriented towards implementation in a single field programmable gate array (FPGA). In an MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. The traditional computation algorithms usually involve digital signal processors (DSPs), while the converter itself requires a large number of power transistors (18 transistors with 18 independent PWM outputs) and "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular, since computed operations may be executed much faster and more efficiently owing to the nature of digital devices (especially their concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, the arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described in the Verilog hardware description language. Preliminary results of a logic implementation targeting a Xilinx FPGA (in particular, a low-cost device from the Artix-7 family) are also presented.
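
    Since the abstract singles out CORDIC for the trigonometric operations, a textbook rotation-mode CORDIC is sketched below in Python: sine and cosine are obtained from shift-and-add style iterations. The iteration count and the floating-point arithmetic used here are illustrative; the fixed-point and pipelining details of the cited FPGA design are not known from the abstract.

```python
import math

def cordic_sin_cos(theta, iterations=24):
    """Rotation-mode CORDIC: rotate the vector (1, 0) by theta using only
    additions and halvings (the software analogue of hardware shifts)."""
    # Pre-computed elementary angles atan(2^-i) and the constant gain K.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate towards zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * k, x * k                      # (sin, cos)

if __name__ == "__main__":
    s, c = cordic_sin_cos(math.radians(30))
    print(round(s, 6), round(c, 6))          # close to 0.5 and 0.866025
```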

  16. Towards automated construction of dependable software/hardware systems

    Energy Technology Data Exchange (ETDEWEB)

    Yakhnis, A.; Yakhnis, V. [Pioneer Technologies & Rockwell Science Center, Albuquerque, NM (United States)

    1997-11-01

    This report contains viewgraphs on the automated construction of dependable computer architecture systems. The outline of this report is: examples of software/hardware systems; dependable systems; partial delivery of dependability; proposed approach; removing obstacles; advantages of the approach; criteria for success; current progress of the approach; and references.

  17. Improving Reliability, Security, and Efficiency of Reconfigurable Hardware Systems (Habilitation)

    NARCIS (Netherlands)

    Ziener, Daniel

    2017-01-01

    In this treatise,  my research on methods to improve efficiency, reliability, and security of reconfigurable hardware systems, i.e., FPGAs, through partial dynamic reconfiguration is outlined. The efficiency of reconfigurable systems can be improved by loading optimized data paths on-the-fly on an

  18. Evaluation of In-House versus Contract Computer Hardware Maintenance

    International Nuclear Information System (INIS)

    Wright, H.P.

    1981-09-01

    The issue of In-House versus Contract Computer Hardware Maintenance is one which every organization who uses computers must resolve. This report discusses the advantages and disadvantages of both approaches to computer maintenance, the costs involved (based on the current AGNS computer inventory), and the AGNS maintenance experience to date. A recommendation on an appropriate approach for AGNS is made

  19. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
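
    The edge-directed idea can be illustrated with a small Python sketch that evaluates a fixed-window SAD disparity search only at pixels flagged as edges. The edge detector, window size and disparity range below are placeholder choices, and the adaptive-support-weight variant and all FPGA-level considerations of the record are outside the scope of the sketch.

```python
import numpy as np

def edge_mask(img, thresh=30):
    """Very crude edge detector: horizontal intensity gradient magnitude."""
    grad = np.abs(np.diff(img.astype(np.int32), axis=1))
    mask = np.zeros(img.shape, dtype=bool)
    mask[:, 1:] = grad > thresh
    return mask

def sad_disparity_at_edges(left, right, max_disp=16, win=3):
    """Fixed-window SAD matching, evaluated only where the left image has edges."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    edges = edge_mask(left)
    half = win // 2
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            if not edges[y, x]:
                continue                      # skip non-edge pixels entirely
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1].astype(np.int32)).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    right_img = rng.integers(0, 256, size=(32, 64), dtype=np.uint8)
    left_img = np.roll(right_img, 4, axis=1)   # synthetic horizontal shift of 4
    d = sad_disparity_at_edges(left_img, right_img)
    print(int(np.median(d[d > 0])))            # should be close to 4
```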

  20. Detection of hardware backdoor through microcontroller read time ...

    African Journals Online (AJOL)

    The objective of this work, christened “HABA” (Hardware Backdoor Aware) is to collect data samples of series of read time of microcontroller embedded on military grade equipments and correlate it with previously stored expected behavior read time samples so as to detect abnormality or otherwise. I was motivated by the ...

  1. Hardware Transactional Memory Optimization Guidelines, Applied to Ordered Maps

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal; Probst, Christian W.; Karlsson, Sven

    2015-01-01

    efficiently requires reasoning about those differences. In this paper we present 5 guidelines for applying hardware transactional memory efficiently, and apply the guidelines to BT-trees, a concurrent ordered map. Evaluating BT-trees on standard benchmarks shows that they are up to 5.3 times faster than...

  2. A hardware architecture for real-time shadow removal in high-contrast video

    Science.gov (United States)

    Verdugo, Pablo; Pezoa, Jorge E.; Figueroa, Miguel

    2017-09-01

    Broadcasting an outdoor sports event at daytime is a challenging task due to the high contrast that exists between areas in the shadow and light conditions within the same scene. Commercial cameras typically do not handle the high dynamic range of such scenes in a proper manner, resulting in broadcast streams with very little shadow detail. We propose a hardware architecture for real-time shadow removal in high-resolution video, which reduces the shadow effect and simultaneously improves shadow details. The algorithm operates only on the shadow portions of each video frame, thus improving the results and producing more realistic images than algorithms that operate on the entire frame, such as simplified Retinex and histogram shifting. The architecture receives an input in the RGB color space, transforms it into the YIQ space, and uses color information from both spaces to produce a mask of the shadow areas present in the image. The mask is then filtered using a connected components algorithm to eliminate false positives and negatives. The hardware uses pixel information at the edges of the mask to estimate the illumination ratio between light and shadow in the image, which is then used to correct the shadow area. Our prototype implementation simultaneously processes up to 7 video streams of 1920×1080 pixels at 60 frames per second on a Xilinx Kintex-7 XC7K325T FPGA.
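
    As a rough illustration of the colour-space step, the Python sketch below converts RGB to YIQ with the standard NTSC matrix and thresholds the luminance channel into a crude shadow mask. The threshold is arbitrary, and the connected-components filtering and illumination-ratio correction of the actual architecture are not reproduced.

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix (rows: Y, I, Q).
RGB_TO_YIQ = np.array([[0.299, 0.587, 0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523, 0.312]])

def rgb_to_yiq(rgb):
    """rgb: H x W x 3 array with values in [0, 1]."""
    return rgb @ RGB_TO_YIQ.T

def shadow_mask(rgb, luma_thresh=0.3):
    """Mark pixels whose luminance falls below a fixed threshold as shadow."""
    yiq = rgb_to_yiq(rgb)
    return yiq[..., 0] < luma_thresh

if __name__ == "__main__":
    # Synthetic frame: bright left half, dark ("shadowed") right half.
    frame = np.ones((4, 8, 3)) * 0.8
    frame[:, 4:, :] *= 0.2
    print(shadow_mask(frame).astype(int))
```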

  3. Autonomous target tracking of UAVs based on low-power neural network hardware

    Science.gov (United States)

    Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe

    2014-05-01

    Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements, while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem™ neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs, and to show their superior performance and power advantages for real-time, autonomous target tracking.

  4. Pre-Flight Tests with Astronauts, Flight and Ground Hardware, to Assure On-Orbit Success

    Science.gov (United States)

    Haddad, Michael E.

    2010-01-01

    On-Orbit Constraints Tests (OOCTs) refer to mating flight hardware together on the ground before it is mated on-orbit or on the Lunar surface. The concept seems simple, but it can be difficult to perform operations like this on the ground when the flight hardware is designed to be mated on-orbit in the zero-g/vacuum environment of space or the low-g/vacuum environment of the Lunar/Mars surface. Also, some of the items are manufactured years apart, which raises the question of how mating tasks are performed on these components if one piece is on-orbit or on the Lunar/Mars surface before its mating piece has even been built. Both the Internal Vehicular Activity (IVA) and Extra-Vehicular Activity (EVA) OOCTs performed at Kennedy Space Center will be presented in this paper. Details include how OOCTs should mimic on-orbit/Lunar/Mars surface operational scenarios; a series of photographs taken during OOCTs performed on International Space Station (ISS) flight elements will be shown; lessons learned as a result of the OOCTs will be presented; and the paper will conclude with possible applications to Moon and Mars surface operations planned for the Constellation Program.

  5. A preferential design approach for energy-efficient and robust implantable neural signal processing hardware.

    Science.gov (United States)

    Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup

    2009-01-01

    For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
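
    The signal-processing core named in the abstract is the discrete wavelet transform. As a minimal reference, the Python sketch below computes one level of a Haar DWT on a toy spike-like waveform; the abstract does not say which wavelet or how many levels the design uses, so Haar is chosen here purely for brevity.

```python
import numpy as np

def haar_dwt_1d(signal):
    """One level of the Haar DWT: scaled sum (approximation) and difference
    (detail) of consecutive sample pairs, preserving signal energy."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:
        x = np.append(x, x[-1])          # pad to an even length
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

if __name__ == "__main__":
    spike = [0.0, 0.1, 0.9, 1.0, 0.4, -0.2, -0.1, 0.0]
    a, d = haar_dwt_1d(spike)
    print(np.round(a, 3), np.round(d, 3))
```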

  6. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to the hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then to wrap it with conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach to meet EMC requirements would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage, or modify its chassis design, so that it would have a better chance of meeting the new project's radiated emissions requirements.

  7. Network migration for printers

    CERN Multimedia

    2016-01-01

    Further to the recent General Purpose (office) Network reorganisation (as announced in the Bulletin - see here), please note that the majority of print devices will be automatically migrated to the new network IP address range on Tuesday 27 September.   This change should be transparent for these devices and therefore end-users, provided you have installed the printers from the Print Service website. A small number of devices will require manual intervention from the Printer Support team in order to migrate correctly. These devices will not change their IP address until the manual intervention, which will be carried out before Monday 3rd October. However, if you have mistakenly connected directly to the printer’s IP address, then your printing will be affected – please uninstall the printer (for help, see: KB3785), and re-install it from the Print Service website (or follow instructions for visitor machines). Please do this as soon as possible in order to avoid printing issues, t...

  8. Brain inspired hardware architectures - Can they be used for particle physics ?

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    After their inception in the 1940s and several decades of moderate success, artificial neural networks have recently demonstrated impressive achievements in analysing big data volumes. Wide and deep network architectures can now be trained using high performance computing systems, graphics card clusters in particular. Despite their successes these state-of-the-art approaches suffer from very long training times and huge energy consumption, in particular during the training phase. The biological brain can perform similar and superior classification tasks in the space and time domains, but at the same time exhibits very low power consumption, rapid unsupervised learning capabilities and fault tolerance. In the talk the differences between classical neural networks and neural circuits in the brain will be presented. Recent hardware implementations of neuromorphic computing systems and their applications will be shown. Finally, some initial ideas to use accelerated neural architectures as trigger processors i...

  9. Development of Soft-Hardware Platform for Training System Design of Electrotechnical Complexes and Electric Drives

    Directory of Open Access Journals (Sweden)

    Koltunova Ekaterina A.

    2017-01-01

    Full Text Available The article presents the results of developing a software and hardware platform intended as training equipment that lets children and young people acquire robotics skills and later apply this knowledge in practice, for example by implementing home automation systems. We consider the shortcomings of existing solutions. The main difference of the proposed platform is that its boards and extension modules integrate into a single set through simple connectors, with the ability to attach third-party components from different manufacturers without restricting users. In addition, a simplified, visual object-oriented programming method allows users to start working immediately. Prepared lessons and game-style tasks simplify the material and help users understand how one or another technical solution can be applied.

  10. Cognitive task analysis

    NARCIS (Netherlands)

    Schraagen, J.M.C.

    2000-01-01

    Cognitive task analysis is defined as the extension of traditional task analysis techniques to yield information about the knowledge, thought processes and goal structures that underlie observable task performance. Cognitive task analyses are conducted for a wide variety of purposes, including the

  11. MIGRATION AND ITS ENVIRONMENTAL EFFECTS

    OpenAIRE

    Rashid SAEED; Rana NADIR IDREES; Humna IJAZ; Marriam FURQANI; Raziya NADEEM

    2012-01-01

    Migration can be described as the ongoing movement of a person from one location to another. The reasons for moving range from perceived deprivation and shocks to difficulties, hopes and aspirations. The case study sought to establish the extent to which migration depends on specific factors, especially the natural environment. This paper aims to examine the linkages between the environment and migration using secondary data. Lots of investigation may be completed wit...

  12. Nordic Migration and Integration Research

    OpenAIRE

    Pyrhönen, Niko; Martikainen, Tuomas; Leinonen, Johanna

    2017-01-01

    EXECUTIVE SUMMARY Migration and integration are currently highly contentious topics in political, public and scientific arenas, and will remain so in the near future. However, many common migration-related prejudices and inefficiencies in the integration of the migrant population are due to the lack of sound, tested and accessible scientific research. Therefore, the study of migration – by developing basic research and by properly resourcing novel methodological approaches and interventions ...

  13. The challenges of managing migration

    Energy Technology Data Exchange (ETDEWEB)

    Tacoli, Cecilia

    2005-10-15

    Migration and urbanisation are driven by economic growth and social change, but also by deepening inequalities. Managing migration should not be equated with curbing it, as this inevitably reduces migrants' rights. But managing population movement whilst respecting the rights of migrants and nonmigrants, supporting the contribution of migration to poverty reduction and economic growth in sending and receiving areas and reducing the human and material costs of movement means that fundamental challenges need to be addressed.

  14. Countering inbreeding with migration 1. Migration from unrelated ...

    African Journals Online (AJOL)

    Received 6 October 1991; accepted 18 March 1995. The effect of migration on inbreeding is modelled for small populations with immigrants from a large unrelated population. Different migration rates and numbers for the two sexes are assumed, and a general recursion equation for inbreeding progress derived, which can ...

  15. Migration and regional inequality

    DEFF Research Database (Denmark)

    Peng, Lianqing; Swider, Sarah

    2017-01-01

    Scholars studying economic inequality in China have maintained that regional inequality and economic divergence across provinces have steadily increased over the past 30 years. New studies have shown that this trend is a statistical aberration; calculations show that instead of quickly and sharply rising, regional inequality has actually decreased, and most recently, remained stable. Our study suggests that China’s unique migratory regime is crucial to understanding these findings. We conduct a counterfactual simulation to demonstrate how migration and remittances have mitigated income inequality across provinces in order to show that without these processes, we would have seen more of a rise in interprovincial income inequality. We conclude by arguing that inequality in China is still increasing, but it is changing and becoming less place-based. As regional inequality decreases, there are signs

  16. Tracking migrating birds

    DEFF Research Database (Denmark)

    Willemoes, Mikkel

    habitats with those in rural habitats. Some species have decreased the frequency of migrants and migration distance in urban environments, and others have not. The other manuscript describes the small scale movements of three different Palaearctic migrants during winter in Africa in a farmland habitat....... In another species, environmental conditions are not a good predictor of movements, and possibly effects of timing constraints or food type play a role. Two manuscripts focus on the effects of human-induced habitat alterations on migratory behaviour. One compares the movements of partial migrants in urban...... and a forest reserve. In the degraded habitat all species used more space, although the consequence on bird density is less clear. Two manuscripts relate the migratory movements of a long-distance migrant with models of navigation. One compares model predictions obtained by simulation with actual movements...

  17. Making Migration Meaningful

    DEFF Research Database (Denmark)

    Benwell, Ann Fenger

    2013-01-01

    a way to escape family patriarchy and conformity, and can contribute to loss, hardship, and uncertainty for family members left behind. Further, mobility provides opportunities and a means to escape the stigma of ‘laziness’ culturally associated with poverty and immobility. Postsocialist separation has...... of absence by migrant family members, as both men and women are culturally permitted to be separate from their families. Migration is understood to contribute to prosperity, and separations contribute to generate growth and hishig (good fortune) for the good of the family. However, such mobility is also......Mongolia has experienced two decades since the demise of the Soviet Union and has implemented strategies to strengthen its economy and its democratic practices. Transitions from being a nomadic society to a Soviet satellite state and onwards to liberal democracy have greatly impacted family life...

  18. Nightly Test system migration

    CERN Document Server

    Win-Lime, Kevin

    2013-01-01

    The summer student program allows students to take part in the CERN adventure. They can follow several interesting lectures about particle science and participate in the experiments' work. As a summer student, I worked for the LHCb experiment. LHCb uses a lot of software to analyze its data. All this software is organized in packages and projects. They are built and tested during the night using an automated system and the results are displayed on a web interface. LHCb is currently changing this system and looking for a replacement candidate, so I was asked to unify some internal interfaces to permit a swift migration. In this document, I will briefly describe the system used by LHCb and then explain in detail what I have done.

  19. Computerized management of plant intervention tasks

    International Nuclear Information System (INIS)

    Quoidbach, G.

    2004-01-01

    The main objective of the 'Computerized Management of Plant Intervention Tasks' is to help the staff of a nuclear or a conventional power plant, or of any other complex industrial facility (chemical industries, refineries, and so on), in planning, organizing, and carrying out any (preventive or corrective) maintenance task. This 'Computerized Management of Plant Intervention Tasks' is organized around a database of all plant components in the facility that might be subjected to maintenance or tagout. It makes it possible to manage, by means of an intelligent and configurable 'mail service', the course of the intervention requests as well as the various treatments of those requests, in a safe and efficient way adapted to each particular organization. The concept of 'Computerized Management' of plant intervention tasks was developed by BELGATOM in 1983 for the Belgian nuclear power plants of ELECTRABEL. A first implementation of this concept was made at that time for the Doel NPP under the name POPIT (Programming Of Plant Intervention Tasks). In 1988, it was decided to carry out a functional upgrade of the application, using advanced software and hardware techniques and products, and to realize a second implementation in the Tihange NPP under the name ACM (Application Consignation Maintenance). (author)

  20. Migration: the trends converge.

    Science.gov (United States)

    1985-09-01

    Formerly, Australia, New Zealand, Canada, and the US have served as permanent destinations for immigrants, while Europe's migrants have moved to more northerly countries to work for a time and then returned home. From 1973-1975 Europe's recruitment of foreign workers virtually ended, although family reunion for those immigrants allowed in was encouraged. Problems resulting from this new settlement migration include low paying jobs for immigrant women, high unemployment, and inadequate education for immigrant children. Illegal migrants from Latin America and the Caribbean enter the US and Canada each year while illegal North African immigrants enter Italy, Spain, and Greece. North America, Australia, and Europe have all received political refugees from Asia and Latin America. Increasingly, these foreigners compete in the labor market rather than simply fill jobs the native workers do not want. All the receiving countries have similar policy priorities: 1) more effective ways for controlling and monitoring inflows and checking illegal immigration; 2) encouraging normal living patterns and accepting refugees; and 3) integrating permanent migrants into the host country. Europe's public immigration encouragement prior to the first oil shock, has left some countries with a labor force that is reluctant to return home. It is unlikely that Europe will welcome foreign labor again in this decade, since unemployment among young people and women is high and family reunion programs may still bring in many immigrants. Less immigration pattern change will probably occur in North America, Australia, and New Zealand since these countries' populations are still growing and wages are more flexible. Immigration, regulated by policy, and emigration, determined by market forces, now are working in the same direction and will likely reduce future migration flows.

  1. The migrating crane

    CERN Document Server

    CERN's new crane is constantly on the move back and forth from the Meyrin to Prévessin sites. The crane arrived on 16 June and has already performed many tasks on these sites. This telescopic mobile crane replaces the two existing cranes which are leaving for a well-earned retirement. The compact new crane handles routine tasks which usually involve lifting loads of between one and ten tonnes anywhere at CERN. That explains why it is never in one place for long. With its 30-metre telescopic arm, it can lift up to 30 tonnes at three metres. With its little on-board computer, it can assess masses and distances and the safety margins with respect to its nominal capacity. Here, the new 30-tonne crane and the older 160-tonne crane, acquired two years ago, are unloading a helium tank from a trailer. They are turning the tank before installing it beside Building 180, ATLAS's assembly hall.

  2. Fast and Reliable Mouse Picking Using Graphics Hardware

    Directory of Open Access Journals (Sweden)

    Hanli Zhao

    2009-01-01

    Full Text Available Mouse picking is the most commonly used intuitive operation to interact with 3D scenes in a variety of 3D graphics applications. High performance for such an operation is necessary in order to provide users with fast responses. This paper proposes a fast and reliable mouse picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space based ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying the hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates the picking efficiency. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
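
    The per-triangle test mentioned above can be written down as the standard Moller-Trumbore intersection. The Python version below is a CPU-side reference for what each geometry-shader invocation would compute; the multi-layer rendering and occlusion-query machinery of the cited algorithm are not shown, and the example ray and triangle are made up.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test. Returns the distance t along the ray, or None."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))

    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv_det
    return t if t > eps else None

if __name__ == "__main__":
    # A "mouse pick" ray shot down the -z axis at a triangle in the z = 0 plane.
    hit = ray_triangle_intersect((0.2, 0.2, 5.0), (0.0, 0.0, -1.0),
                                 (0, 0, 0), (1, 0, 0), (0, 1, 0))
    print(hit)                          # about 5.0
```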

  3. Hardware emulation of Memristor based Ternary Content Addressable Memory

    KAUST Repository

    Bahloul, Mohamed A.

    2017-12-13

    MTCAM (Memristor Ternary Content Addressable Memory) is a special purpose storage medium in which data could be retrieved based on the stored content. Using Memristors as the main storage element provides the potential of achieving higher density and more efficient solutions than conventional methods. A key missing item in the validation of such approaches is the wide spread availability of hardware emulation platforms that can provide reliable and repeatable performance statistics. In this paper, we present a hardware MTCAM emulation based on 2-Transistors-2Memristors (2T2M) bit-cell. It builds on a bipolar memristor model with storing and fetching capabilities based on the actual current-voltage behaviour. The proposed design offers a flexible verification environment with quick design revisions, high execution speeds and powerful debugging techniques. The proposed design is modeled using VHDL and prototyped on Xilinx Virtex® FPGA.
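
    The behaviour being emulated can be illustrated independently of the bit-cell: a ternary CAM stores words over {0, 1, X} and a search returns the addresses of all entries matching a binary key. The Python sketch below models only that lookup semantics; it says nothing about the 2T2M cell or the FPGA emulation itself, and the stored table is a made-up example.

```python
def tcam_search(entries, key):
    """entries: list of ternary words ('0', '1', 'X' = don't care).
    key: binary word. Returns the addresses whose entry matches the key."""
    matches = []
    for address, word in enumerate(entries):
        if len(word) != len(key):
            continue
        if all(w == 'X' or w == k for w, k in zip(word, key)):
            matches.append(address)
    return matches

if __name__ == "__main__":
    # Hypothetical 4-bit stored contents.
    table = ["10XX", "1010", "0XX1", "XXXX"]
    print(tcam_search(table, "1011"))   # entries 0 and 3 match
```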

  4. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Science.gov (United States)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  5. The LISA Pathfinder interferometry-hardware and system testing

    Energy Technology Data Exchange (ETDEWEB)

    Audley, H; Danzmann, K; MarIn, A Garcia; Heinzel, G; Monsky, A; Nofrarias, M; Steier, F; Bogenstahl, J [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik und Universitaet Hannover, 30167 Hannover (Germany); Gerardi, D; Gerndt, R; Hechenblaikner, G; Johann, U; Luetzow-Wentzky, P; Wand, V [EADS Astrium GmbH, Friedrichshafen (Germany); Antonucci, F [Dipartimento di Fisica, Universita di Trento and INFN, Gruppo Collegato di Trento, 38050 Povo, Trento (Italy); Armano, M [European Space Astronomy Centre, European Space Agency, Villanueva de la Canada, 28692 Madrid (Spain); Auger, G; Binetruy, P [APC UMR7164, Universite Paris Diderot, Paris (France); Benedetti, M [Dipartimento di Ingegneria dei Materiali e Tecnologie Industriali, Universita di Trento and INFN, Gruppo Collegato di Trento, Mesiano, Trento (Italy); Boatella, C, E-mail: antonio.garcia@aei.mpg.de [CNES, DCT/AQ/EC, 18 Avenue Edouard Belin, 31401 Toulouse, Cedex 9 (France)

    2011-05-07

    Preparations for the LISA Pathfinder mission have reached an exciting stage. Tests of the engineering model (EM) of the optical metrology system have recently been completed at the Albert Einstein Institute, Hannover, and flight model tests are now underway. Significantly, they represent the first complete integration and testing of the space-qualified hardware and are the first tests on an optical system level. The results and test procedures of these campaigns will be utilized directly in the ground-based flight hardware tests, and subsequently during in-flight operations. In addition, they allow valuable testing of the data analysis methods using the MATLAB-based LTP data analysis toolbox. This paper presents an overview of the results from the EM test campaign that was successfully completed in December 2009.

  6. Verification of OpenSSL version via hardware performance counters

    Science.gov (United States)

    Bruska, James; Blasingame, Zander; Liu, Chen

    2017-05-01

    Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of any unknown downgrade attacks in real time. Our experimental results indicate that this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, dropping to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
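
    As a purely illustrative companion to this approach, the sketch below trains a nearest-centroid classifier on made-up hardware event count vectors and predicts which of two hypothetical library versions produced a new sample. The event set, the numbers and the classifier choice are placeholders and bear no relation to the actual OpenSSL experiments.

```python
import numpy as np

def train_centroids(samples, labels):
    """Average the event-count vectors belonging to each label."""
    return {label: np.mean([s for s, l in zip(samples, labels) if l == label], axis=0)
            for label in set(labels)}

def classify(centroids, sample):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: np.linalg.norm(sample - centroids[label]))

if __name__ == "__main__":
    # Made-up feature vectors: [instructions, cache_misses, branch_misses] per run.
    x = np.array([[9.0e6, 1.2e4, 3.0e3], [9.1e6, 1.3e4, 3.1e3],   # "version A"
                  [7.5e6, 2.5e4, 5.0e3], [7.6e6, 2.4e4, 5.2e3]])  # "version B"
    y = ["A", "A", "B", "B"]
    model = train_centroids(x, y)
    print(classify(model, np.array([7.55e6, 2.45e4, 5.1e3])))     # expect "B"
```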

  7. Parallel random number generator for inexpensive configurable hardware cells

    Science.gov (United States)

    Ackermann, J.; Tangen, U.; Bödekker, B.; Breyer, J.; Stoll, E.; McCaskill, J. S.

    2001-11-01

    A new random number generator (RNG) adapted to parallel processors has been created. This RNG can be implemented with inexpensive hardware cells. The correlation between neighboring cells is suppressed with smart connections. With such connection structures, sequences of pseudo-random numbers are produced. Numerical tests including a self-avoiding random walk test and the simulation of the order parameter and energy of the 2D Ising model give no evidence for correlation in the pseudo-random sequences. Because the new random number generator has suppressed the correlation between neighboring cells which is usually observed in cellular automaton implementations, it is applicable for extended time simulations. It gives an immense speed-up factor if implemented directly in configurable hardware, and has recently been used for long time simulations of spatially resolved molecular evolution.
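
    A software analogue of a cell-array generator is easy to sketch: the Python fragment below runs a one-dimensional rule-30 cellular automaton and reads one bit per step from several well-separated cells. The cell count, tap positions and rule are illustrative only and do not reflect the 'smart connections' of the hardware design described above.

```python
def rule30_step(state):
    """One synchronous update of a circular rule-30 cellular automaton."""
    n = len(state)
    return [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n]) for i in range(n)]

def ca_random_bits(n_cells=64, taps=(0, 16, 32, 48), steps=32, seed_cell=31):
    """Yield one bit per tap per step from a rule-30 CA seeded with a single 1."""
    state = [0] * n_cells
    state[seed_cell] = 1
    for _ in range(steps):
        state = rule30_step(state)
        for t in taps:
            yield state[t]

if __name__ == "__main__":
    bits = list(ca_random_bits())
    # Pack the bit stream into one integer just to show a usable output format.
    as_int = int("".join(map(str, bits)), 2)
    print(f"{len(bits)} bits, sample value: {as_int:#x}")
```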

  8. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2013-01-01

    The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Optimization techniques featured throughout the text. It covers parallelism in depth with...

  9. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
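
    Since sub-pixel motion estimation hinges on image interpolation, a plain bilinear sampling routine is given below as a reference for the operation that the cited work offloads to graphics hardware; the texture-unit implementation itself is not shown and the toy image is arbitrary.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample a grayscale image at fractional coordinates (y, x)."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    fy, fx = y - y0, x - x0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

if __name__ == "__main__":
    img = np.array([[0.0, 10.0],
                    [20.0, 30.0]])
    # Sampling half-way between all four pixels gives their average.
    print(bilinear_sample(img, 0.5, 0.5))   # 15.0
```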

  10. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  11. Development of Hardware and Software for Automated Ultrasonic Testing

    International Nuclear Information System (INIS)

    Choi, Sung Nam; Lee, Hee Jong; Yang, Seung Ok

    2012-01-01

    Nondestructive testing (NDT) for the construction and operation of NPPs plays an important role in confirming the integrity of the NPPs. In particular, automated ultrasonic testing (AUT) is one of the primary nondestructive examination methods for in-service inspection of the welded parts of major components in NPPs. AUT is a reliable nondestructive testing method because the AUT data are saved and can be reviewed with other examiners. Korea Hydro and Nuclear Power-Central Research Institute (KHNP-CRI) has developed an AUT system based on a high speed pulser-receiver. In combination with the designed software and hardware architecture, this new system permits user configurations for a wide range of user-specific applications through fully automated inspections using compact portable systems with up to eight channels. This paper gives an overview of the hardware (H/W) and software (S/W) for the AUT system to inspect welds in NPPs

  12. Hardware emulation of Memristor based Ternary Content Addressable Memory

    KAUST Repository

    Bahloul, Mohamed A.; Naous, Rawan; Masmoudi, M.

    2017-01-01

    MTCAM (Memristor Ternary Content Addressable Memory) is a special purpose storage medium in which data can be retrieved based on the stored content. Using memristors as the main storage element provides the potential of achieving higher density and more efficient solutions than conventional methods. A key missing item in the validation of such approaches is the widespread availability of hardware emulation platforms that can provide reliable and repeatable performance statistics. In this paper, we present a hardware MTCAM emulation based on a 2-Transistor-2-Memristor (2T2M) bit-cell. It builds on a bipolar memristor model with storing and fetching capabilities based on the actual current-voltage behaviour. The proposed design offers a flexible verification environment with quick design revisions, high execution speeds and powerful debugging techniques. The proposed design is modeled using VHDL and prototyped on a Xilinx Virtex® FPGA.
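
    For readers unfamiliar with ternary CAM, the functional behaviour being emulated (not the 2T2M bit-cell circuit itself) amounts to the short Python sketch below: each stored word may contain don't-care bits, and a search returns the addresses of all entries that match the key.

      # Functional model of a ternary CAM lookup (illustrative only).
      def tcam_search(table, key):
          """table: list of strings over {'0', '1', 'X'}; key: string of '0'/'1'.
          Returns the addresses of all entries matching key, 'X' matching either bit."""
          def matches(entry):
              return all(e in ('X', k) for e, k in zip(entry, key))
          return [addr for addr, entry in enumerate(table) if matches(entry)]

      table = ["10X1", "0XX0", "1001"]
      print(tcam_search(table, "1001"))   # -> [0, 2]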

  13. Hardware support for CSP on a Java chip multiprocessor

    DEFF Research Database (Denmark)

    Gruian, Flavius; Schoeberl, Martin

    2013-01-01

    Due to memory bandwidth limitations, chip multiprocessors (CMPs) adopting the convenient shared memory model for their main memory architecture scale poorly. On-chip core-to-core communication is a solution to this problem that can lead to further performance increases for a number of multithreaded applications. Programmatically, the Communicating Sequential Processes (CSP) paradigm provides a sound computational model for such an architecture with message-based communication. In this paper we explore hardware support for CSP in the context of an embedded Java CMP. The hardware support for CSP consists of on-chip communication channels, implemented by a ring-based network-on-chip (NoC), to reduce the memory bandwidth pressure on the shared memory. The presented solution is scalable and also specific to our limited resources and real-time predictability requirements. CMP architectures of three to eight processors were...

  14. Advances in neuromorphic hardware exploiting emerging nanoscale devices

    CERN Document Server

    2017-01-01

    This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-Nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as: spike-time-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM) and early adoptions of restricted Boltzmann machines (RBM) to name a few. The contributions discuss system-level power/energy/parasitic trade-offs, and complex real-world applications. The book is suited for both advanced researchers and students interested in the field.

  15. A Hardware Framework for on-Chip FPGA Acceleration

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2016-01-01

    In this work, we present a new framework to dynamically load hardware accelerators on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. Results show that significant speed-up can be obtained by the proposed acceleration framework on systems-on-chip where reconfigurable fabric is placed next to the CPUs. The speed-up is due to both the intrinsic acceleration in the application-specific processors and to the increased parallelism.

  16. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    Directory of Open Access Journals (Sweden)

    Andreas Stöckel

    2017-08-01

    Full Text Available Large-scale neuromorphic hardware platforms, specialized computer systems for energy efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows testing the quality of the neuron model implementation and explaining significant deviations from the expected reference output.
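
    The reference computation behind the benchmark, a binary (Willshaw-type) associative memory, is small enough to state directly. The NumPy sketch below is that abstract reference, not the spiking network description executed on the hardware platforms.

      import numpy as np

      def train(pairs, n_in, n_out):
          """Binary associative memory: OR of outer products of 0/1 pattern pairs."""
          W = np.zeros((n_out, n_in), dtype=np.uint8)
          for x, y in pairs:
              W |= np.outer(y, x).astype(np.uint8)
          return W

      def recall(W, x, threshold):
          """An output unit fires if its input sum reaches the threshold."""
          return (W @ x >= threshold).astype(np.uint8)

      x = np.array([1, 0, 1, 0]); y = np.array([0, 1, 1])
      W = train([(x, y)], n_in=4, n_out=3)
      print(recall(W, x, threshold=x.sum()))   # -> [0 1 1]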

  17. Hardware accuracy counters for application precision and quality feedback

    Science.gov (United States)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani; Huang, Wei; Arora, Manish; Greathouse, Joseph L.

    2018-06-05

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
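
    The patent language is abstract, but the underlying idea, checking how narrow a datatype would have sufficed for an operand and bumping a counter accordingly, can be illustrated with a loose software analogue. The Python sketch below uses assumed datatype widths and is not the hardware counter circuit itself.

      # Software analogue of an accuracy counter: for each observed integer operand,
      # record the narrowest standard width that could have represented it.
      from collections import Counter

      def min_width(value, widths=(8, 16, 32, 64)):
          """Smallest signed two's-complement width (in bits) that can hold value."""
          for w in widths:
              if -(1 << (w - 1)) <= value < (1 << (w - 1)):
                  return w
          raise OverflowError(value)

      counters = Counter()
      for operand in [3, -120, 70000, 42, 2**40]:   # values observed during execution
          counters[min_width(operand)] += 1
      print(counters)   # feedback a runtime could use to narrow instruction precision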

  18. Design Tools for Reconfigurable Hardware in Orbit (RHinO)

    Science.gov (United States)

    French, Mathew; Graham, Paul; Wirthlin, Michael; Larchev, Gregory; Bellows, Peter; Schott, Brian

    2004-01-01

    The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable-gate-array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions into the tool suite will also allow evolvable algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of Reconfigurable Hardware in Orbit, via an integrated design tool-suite aiming to reduce risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.

  19. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  20. BCI meeting 2005--workshop on technology: hardware and software.

    Science.gov (United States)

    Cincotti, Febo; Bianchi, Luigi; Birch, Gary; Guger, Christoph; Mellinger, Jürgen; Scherer, Reinhold; Schmidt, Robert N; Yáñez Suárez, Oscar; Schalk, Gerwin

    2006-06-01

    This paper describes the outcome of discussions held during the Third International BCI Meeting at a workshop to review and evaluate the current state of BCI-related hardware and software. Technical requirements and current technologies, standardization procedures and future trends are covered. The main conclusion was recognition of the need to focus technical requirements on the users' needs and the need for consistent standards in BCI research.

  1. Optimizing main-memory join on modern hardware

    OpenAIRE

    Boncz, Peter; Manegold, Stefan; Kersten, Martin

    2002-01-01

    In the past decade, the exponential growth in commodity CPU speed has far outpaced advances in memory latency. A second trend is that CPU performance advances are not only brought by increased clock rate, but also by increasing parallelism inside the CPU. Current database systems have not yet adapted to these trends, and show poor utilization of both CPU and memory resources on current hardware. In this article, we show how these resources can be optimized for large joins and tra...
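
    The abstract is truncated here, but cache-conscious join optimization of this kind typically rests on partitioning the inputs so that each per-partition hash table fits in cache. Purely as an illustration of that idea, and not of the authors' specific algorithm, a radix-partitioned hash join can be sketched in Python as follows.

      # Illustrative radix-partitioned hash join: partition both inputs on the low
      # bits of the key so that each per-partition hash table stays cache-resident.
      def radix_hash_join(R, S, bits=4):
          """R, S: lists of (key, payload) tuples. Returns joined (key, r_pay, s_pay)."""
          n = 1 << bits
          r_parts = [[] for _ in range(n)]
          s_parts = [[] for _ in range(n)]
          for k, v in R:
              r_parts[k & (n - 1)].append((k, v))
          for k, v in S:
              s_parts[k & (n - 1)].append((k, v))
          out = []
          for rp, sp in zip(r_parts, s_parts):      # join each small partition
              ht = {}
              for k, v in rp:
                  ht.setdefault(k, []).append(v)
              for k, v in sp:
                  for rv in ht.get(k, []):
                      out.append((k, rv, v))
          return out

      print(radix_hash_join([(1, 'a'), (2, 'b')], [(2, 'x'), (3, 'y')]))   # [(2, 'b', 'x')]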

  2. Parallel-Architecture Simulator Development Using Hardware Transactional Memory

    OpenAIRE

    Armejach Sanosa, Adrià

    2009-01-01

    To address the need for a simpler parallel programming model, Transactional Memory (TM) has been developed and promises good parallel performance with easy-to-write parallel code. Unlike lock-based approaches, with TM, programmers do not need to explicitly specify and manage the synchronization among threads. Instead, programmers simply mark code segments as transactions, and the TM system manages the concurrency control for them. TM can be implemented either in software (STM) or hardware (HT...

  3. S-1 project. Volume II. Hardware. 1979 annual report

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    This volume includes highlights of the design of the Mark IIA uniprocessor (SMI-2), and the SCALD II user's manual. SCALD (structured computer-aided logic design system) cuts the cost and time required to design logic by letting the logic designer express ideas as naturally as possible, and by eliminating as many errors as possible - through consistency checking, simulation, and timing verification - before the hardware is built. (GHT)

  4. Generation of embedded Hardware/Software from SystemC

    OpenAIRE

    Houzet , Dominique; Ouadjaout , Salim

    2006-01-01

    Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propo...

  5. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.; Radwan, Ahmed G.; Salama, Khaled N.

    2011-01-01

    Unlike stream ciphers, block ciphers are essential for parallel processing applications. In this paper, the first hardware realization of a chaos-based block cipher is proposed for image encryption applications. The proposed system is tested for known cryptanalysis attacks and for different block sizes. When implemented on Virtex-IV, system performance showed high throughput and utilized a small area. Having passed all tests successfully, our system proved to be secure with all block sizes. © 2011 IEEE.

  6. Automatic Optimization of Hardware Accelerators for Image Processing

    OpenAIRE

    Reiche, Oliver; Häublein, Konrad; Reichenbach, Marc; Hannig, Frank; Teich, Jürgen; Fey, Dietmar

    2015-01-01

    In the domain of image processing, real-time constraints often must be met. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintaining the real-time capability of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such arc...

  7. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    Energy Technology Data Exchange (ETDEWEB)

    Church, Jennifer A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kashgarian, Michaele [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wooddy, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Haslett, Bob [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Torretto, Phil [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-15

    Hardware expansion and detector calibrations were the focus of FY 16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) Installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems; including two Low Energy Photon Spectrometers (LEPS). 2) Re-Calibration and validation of 3 previously installed gamma-ray detectors, 3) Integration of the new systems into the NCF IT infrastructure, and 4) QA/QC and maintenance of current detector systems.

  8. Introduction to hardware for nuclear medicine data systems

    International Nuclear Information System (INIS)

    Erickson, J.J.

    1976-01-01

    Hardware included in a computer-based data system for nuclear medicine imaging studies is discussed. The report is written for the newcomer to computer-based data collection and analysis. Emphasis is placed on the effect of the various portions of the system on the final application in the nuclear medicine clinic. While an attempt is made to familiarize the user with some of the terms he will encounter, no attempt is made to make him a computer expert. 1 figure, 2 tables

  9. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    International Nuclear Information System (INIS)

    Church, Jennifer A.; Kashgarian, Michaele; Wooddy, Todd; Haslett, Bob; Torretto, Phil

    2016-01-01

    Hardware expansion and detector calibrations were the focus of FY 16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) Installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems; including two Low Energy Photon Spectrometers (LEPS). 2) Re-Calibration and validation of 3 previously installed gamma-ray detectors, 3) Integration of the new systems into the NCF IT infrastructure, and 4) QA/QC and maintenance of current detector systems.

  10. Treatment alternatives for non-fuel-bearing hardware

    International Nuclear Information System (INIS)

    Ross, W.A.; Clark, L.L.; Oma, K.H.

    1987-01-01

    This evaluation compared four alternatives for the treatment or processing of non-fuel bearing hardware (NFBH) to reduce its volume and prepare it for disposal. These treatment alternatives are: shredding; shredding and low pressure compaction; shredding and supercompaction; and melting. These alternatives are compared on the basis of system costs, waste form characteristics, and process considerations. The study recommends that melting and supercompaction alternatives be further considered and that additional testing be conducted for these two alternatives

  11. Peculiarities of hardware implementation of generalized cellular tetra automaton

    OpenAIRE

    Аноприенко, Александр Яковлевич; Федоров, Евгений Евгениевич; Иваница, Сергей Васильевич; Альрабаба, Хамза

    2015-01-01

    Cellular automata are widely used in many fields of knowledge for the study of a variety of complex real processes: computer engineering and computer science, cryptography, mathematics, physics, chemistry, ecology, biology, medicine, epidemiology, geology, architecture, sociology, and the theory of neural networks. Thus, cellular automata (CA) and tetra automata are gaining relevance in light of available hardware and software solutions. Also noted is a trend towards an increase in the number of p...

  12. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2011-12-01

    Unlike stream ciphers, block ciphers are essential for parallel processing applications. In this paper, the first hardware realization of a chaos-based block cipher is proposed for image encryption applications. The proposed system is tested for known cryptanalysis attacks and for different block sizes. When implemented on Virtex-IV, system performance showed high throughput and utilized a small area. Having passed all tests successfully, our system proved to be secure with all block sizes. © 2011 IEEE.

  13. Challenged by migration: Europe's options

    NARCIS (Netherlands)

    Constant, Amelie F.; Zimmermann, Klaus F.

    2017-01-01

    This paper examines the migration and labour mobility in the European Union and elaborates on their importance for the existence of the EU. Against all measures of success, the current public debate seems to suggest that the political consensus that migration is beneficial is broken. This comes with

  14. Radionuclide migration studies in soil

    International Nuclear Information System (INIS)

    Marumo, J.T.

    1989-01-01

    In this work, a brief description of the retention and migration parameters of radionuclides in soil is given, including the main methods to determine the distribution coefficient (K). Several factors that can influence migration are also mentioned. (author) [pt

  15. South-South Migration and Remittances

    OpenAIRE

    Ratha, Dilip; Shaw, William

    2007-01-01

    South-South Migration and Remittances reports on preliminary results from an ongoing effort to improve data on bilateral migration stocks. It sets out some working hypotheses on the determinants and socioeconomic implications of South-South migration. Contrary to popular perception that migration is mostly a South-North phenomenon, South-South migration is large. Available data from nation...

  16. Automation Hardware & Software for the STELLA Robotic Telescope

    Science.gov (United States)

    Weber, M.; Granzer, Th.; Strassmeier, K. G.

    The STELLA telescope (a joint project of the AIP, Hamburger Sternwarte and the IAC) is to operate in fully robotic mode, with no human interaction necessary for regular operation. Thus, the hardware must be kept as simple as possible to avoid unnecessary failures, and the environmental conditions must be monitored accurately to protect the telescope in case of bad weather. All computers are standard PCs running Linux, and communication with specialized hardware is done via a RS232/RS485 bus system. The high level (java based) control software consists of independent modules to ease bug-tracking and to allow the system to be extended without changing existing modules. Any command cycle consists of three messages, the actual command sent from the central node to the operating device, an immediate acknowledge, and a final done message, both sent back from the receiving device to the central node. This reply-splitting allows a direct distinction between communication problems (no acknowledge message) and hardware problems (no or a delayed done message). To avoid bug-prone packing of all the sensor-analyzing software into a single package, each sensor-reading and interaction with other sensors is done within a self-contained thread. Weather-decision making is therefore totally decoupled from the core control software to avoid dead-locks in the core module.
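
    The three-message command cycle described above is easy to sketch. The Python fragment below (hypothetical device API and timeout values) shows how splitting the reply lets the central node distinguish a communication problem from a hardware problem.

      # Sketch of the command / acknowledge / done cycle (hypothetical device API).
      import queue

      class CommError(Exception): pass        # no acknowledge -> communication problem
      class HardwareError(Exception): pass    # no or delayed done -> hardware problem

      def send_command(device, command, ack_timeout=1.0, done_timeout=30.0):
          device.send(command)
          try:
              device.replies.get(timeout=ack_timeout)          # expect "acknowledge"
          except queue.Empty:
              raise CommError("no acknowledge for " + repr(command))
          try:
              return device.replies.get(timeout=done_timeout)  # expect "done"
          except queue.Empty:
              raise HardwareError("no done message for " + repr(command))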

  17. Optimized design of embedded DSP system hardware supporting complex algorithms

    Science.gov (United States)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

    The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition, real-time image processing, etc. It consists of a floating-point DSP, 512 Kbytes data RAM, 1 Mbytes FLASH program memory, a CPLD for achieving flexible logic control of the input channel and a RS-485 transceiver for local network communication. Because of employing a high performance-price ratio DSP TMS320C6712 and a large FLASH in the design, this system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially in the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and a host computer. In the paper, some key technologies are also introduced which make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a well-suited platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The result reveals that this hardware is easy to interface with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.

  18. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
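
    The core computation being mapped onto the FPGAs is the alternating Gibbs sampling of an RBM. As a software point of reference only (not the hardware datapath), one contrastive-divergence (CD-1) weight update can be written as follows.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def cd1_update(W, bv, bh, v0, lr=0.1, rng=np.random.default_rng(0)):
          """One CD-1 step for a binary RBM; v0 is a 0/1 visible vector."""
          ph0 = sigmoid(v0 @ W + bh)                 # hidden probabilities
          h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sampled hidden states
          pv1 = sigmoid(h0 @ W.T + bv)               # reconstructed visible probabilities
          ph1 = sigmoid(pv1 @ W + bh)                # hidden given the reconstruction
          W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
          bv += lr * (v0 - pv1)
          bh += lr * (ph0 - ph1)
          return W, bv, bh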

  19. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

    This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO™ and SpaceWire protocols that was funded by Sandia National Laboratories' (SNL) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. This effort has demonstrated the transport of application layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprised of commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully demonstrated the transfer and routing of application data packets between multiple nodes and also was able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.

  20. Reconfigurable Signal Processing and Hardware Architecture for Broadband Wireless Communications

    Directory of Open Access Journals (Sweden)

    Liang Ying-Chang

    2005-01-01

    Full Text Available This paper proposes a broadband wireless transceiver which can be reconfigured to any type of cyclic-prefix (CP)-based communication system, including orthogonal frequency-division multiplexing (OFDM), the single-carrier cyclic-prefix (SCCP) system, multicarrier (MC) code-division multiple access (MC-CDMA), MC direct-sequence CDMA (MC-DS-CDMA), CP-based CDMA (CP-CDMA), and CP-based direct-sequence CDMA (CP-DS-CDMA). A hardware platform is proposed and the reusable common blocks in such a transceiver are identified. The emphasis is on the equalizer design for mobile receivers. It is found that after block despreading operation, MC-DS-CDMA and CP-DS-CDMA have the same equalization blocks as OFDM and SCCP systems, respectively, therefore hardware and software sharing is possible for these systems. An attempt has also been made to map the functional reconfigurable transceiver onto the proposed hardware platform. The different functional entities which will be required to perform the reconfiguration and realize the transceiver are explained.

  1. Using Innovative Technologies for Manufacturing and Evaluating Rocket Engine Hardware

    Science.gov (United States)

    Betts, Erin M.; Hardin, Andy

    2011-01-01

    Many of the manufacturing and evaluation techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing and evaluating hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) and white light scanning are being adopted and evaluated for their use on J-2X, with hopes of employing both technologies on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powdered metal manufacturing process in order to produce complex part geometries. The white light technique is a non-invasive method that can be used to inspect for geometric feature alignment. Both the DMLS manufacturing method and the white light scanning technique have proven to be viable options for manufacturing and evaluating rocket engine hardware, and further development and use of these techniques is recommended.

  2. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second based on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.

  3. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The presented design was shown to be capable of hardware averaging and the resulting noise improvement for neural recording with cuff electrodes, and it can accommodate the high source impedances that are associated with miniaturized contacts and the high channel count in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
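
    The quoted 1/√N behaviour is straightforward to check numerically. The toy simulation below (with assumed signal and noise levels) averages N copies of the same signal recorded through independent noise sources and reports the residual noise relative to a single channel.

      import numpy as np

      # Toy check of hardware averaging: the same signal seen through N amplifiers
      # with independent noise; averaging reduces the noise roughly as 1/sqrt(N).
      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 10000)
      signal = 1e-6 * np.sin(2 * np.pi * 100 * t)    # 1 uV test signal (assumed)
      noise_rms = 5e-6                               # per-amplifier noise (assumed)

      for N in (1, 2, 4, 8):
          recordings = signal + rng.normal(0, noise_rms, size=(N, t.size))
          residual = recordings.mean(axis=0) - signal
          print(N, residual.std() / noise_rms)       # approximately 1/sqrt(N)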

  4. 2D neural hardware versus 3D biological ones

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper will present important limitations of hardware neural nets as opposed to biological neural nets (i.e. the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural nets. Going further, the focus will be on hardware constraints. The author will present recent results for three different alternatives of implementing neural networks: digital, threshold gate, and analog, while the area and the delay will be related to neurons' fan-in and weights' precision. Based on all of these, it will be shown why hardware implementations cannot cope with their biological inspiration with respect to their power of computation: the mapping onto silicon lacking the third dimension of biological nets. This translates into reduced fan-in, and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon, by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow one to use the third dimension, e.g. using optical interconnections.

  5. Rupture hardware minimization in pressurized water reactor piping

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Ski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.F.; Quinones, D.F.; Server, W.L.

    1989-01-01

    For much of the high-energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but also improves the overall safety and integrity of the plant since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in. (152-mm) nominal pipe size that have passed a screening criterion designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in. (76-mm) diameter (outside containment) can qualify for pipe rupture hardware elimination

  6. Pipe rupture hardware minimization in pressurized water reactor system

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Szyslowski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.; Quinones, D.; Server, W.

    1987-01-01

    For much of the high energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but the overall safety and integrity of the plant are improved since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in (152 mm) nominal pipe size that have passed a screening criteria designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in (76 mm) diameter (outside containment) can qualify for pipe rupture hardware elimination

  7. Secure Hardware Performance Analysis in Virtualized Cloud Environment

    Directory of Open Access Journals (Sweden)

    Chee-Heng Tan

    2013-01-01

    Full Text Available The main obstacle to mass adoption of cloud computing for database operations is the data security issue. In this paper, it is shown that IT services, particularly hardware performance evaluation in a virtual machine, can be accomplished effectively without IT personnel gaining access to real data for diagnostic and remediation purposes. The proposed mechanisms utilize the TPC-H benchmark to achieve 2 objectives. First, the underlying hardware performance and consistency is supervised via a control system, which is constructed using a combination of TPC-H queries, linear regression, and machine learning techniques. Second, linear programming techniques are employed to provide input to the algorithms that construct stress-testing scenarios in the virtual machine, using the combination of TPC-H queries. These stress-testing scenarios serve 2 purposes. They provide the boundary resource threshold verification to the first control system, so that periodic training of the synthetic data sets for performance evaluation is not constrained by hardware inadequacy, particularly when the resources in the virtual machine are scaled up or down, which results in a change of the utilization threshold. Secondly, they provide a platform for response time verification on critical transactions, so that the expected Quality of Service (QoS) from these transactions is assured.

  8. Hardware implementation of on-chip learning using reconfigurable FPGAs

    International Nuclear Information System (INIS)

    Kelash, H.M.; Sorour, H.S; Mahmoud, I.I.; Zaki, M; Haggag, S.S.

    2009-01-01

    The multilayer perceptron (MLP) is a neural network model that is being widely applied to solve diverse problems. Supervised training is necessary before the use of the neural network. A highly popular learning algorithm called back-propagation is used to train this neural network model. Once trained, the MLP can be used to solve classification problems. An interesting method to increase the performance of the model is to use hardware implementations. The hardware can do the arithmetical operations much faster than software. In this paper, a design and implementation of the sequential mode (stochastic mode) of the back-propagation algorithm with on-chip learning using field programmable gate arrays (FPGA) is presented, and a pipelined adaptation of the on-line back-propagation (BP) algorithm is shown. The hardware implementation of the forward stage, backward stage and weight update of the back-propagation algorithm is also presented. This implementation is based on a SIMD parallel architecture of the forward propagation. The diagnosis of accidents at Egypt's multi-purpose research reactor is used to test the proposed system
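
    For context, the sequential (stochastic, pattern-by-pattern) back-propagation mode implemented on the FPGA corresponds, in software terms, to the minimal one-hidden-layer sketch below. It is illustrative only, not the pipelined hardware design; the layer sizes and learning rate are arbitrary.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def train_stochastic(X, T, n_hidden=8, lr=0.5, epochs=1000, seed=1):
          """On-line (per-pattern) back-propagation for a one-hidden-layer MLP."""
          rng = np.random.default_rng(seed)
          Xb = np.hstack([X, np.ones((X.shape[0], 1))])     # append bias input
          W1 = rng.normal(0, 0.5, (Xb.shape[1], n_hidden))
          W2 = rng.normal(0, 0.5, (n_hidden + 1, T.shape[1]))
          for _ in range(epochs):
              for x, t in zip(Xb, T):                       # update after every pattern
                  h = np.append(sigmoid(x @ W1), 1.0)       # forward stage (+ bias unit)
                  y = sigmoid(h @ W2)
                  d_out = (y - t) * y * (1 - y)             # backward stage
                  d_hid = (d_out @ W2[:-1].T) * h[:-1] * (1 - h[:-1])
                  W2 -= lr * np.outer(h, d_out)             # weight-update stage
                  W1 -= lr * np.outer(x, d_hid)
          return W1, W2

      # Example call (hypothetical data): W1, W2 = train_stochastic(X_train, T_train)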

  9. Interface Testing for RTOS System Tasks based on the Run-Time Monitoring

    International Nuclear Information System (INIS)

    Sung, Ahyoung; Choi, Byoungju

    2006-01-01

    Safety critical embedded system requires high dependability of not only hardware but also software. It is intricate to modify embedded software once embedded. Therefore, it is necessary to have rigorous regulations to assure the quality of safety critical embedded software. IEEE V and V (Verification and Validation) process is recommended for software dependability, but a more quantitative evaluation method like software testing is necessary. In case of safety critical embedded software, it is essential to have a test that reflects unique features of the target hardware and its operating system. The safety grade PLC (Programmable Logic Controller) is a safety critical embedded system where hardware and software are tightly coupled. The PLC has HdS (Hardware dependent Software) and it is tightly coupled with RTOS (Real Time Operating System). Especially, system tasks that are tightly coupled with target hardware and RTOS kernel have large influence on the dependability of the entire PLC. Therefore, interface testing for system tasks that reflects the features of target hardware and RTOS kernel becomes the core of the PLC integration test. Here, we define interfaces as overlapped parts between two different layers on the system architecture. In this paper, we identify interfaces for system tasks and apply the identified interfaces to the safety grade PLC. Finally, we show the test results through the empirical study

  10. Interface Testing for RTOS System Tasks based on the Run-Time Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Ahyoung; Choi, Byoungju [Ewha University, Seoul (Korea, Republic of)

    2006-07-01

    Safety critical embedded system requires high dependability of not only hardware but also software. It is intricate to modify embedded software once embedded. Therefore, it is necessary to have rigorous regulations to assure the quality of safety critical embedded software. IEEE V and V (Verification and Validation) process is recommended for software dependability, but a more quantitative evaluation method like software testing is necessary. In case of safety critical embedded software, it is essential to have a test that reflects unique features of the target hardware and its operating system. The safety grade PLC (Programmable Logic Controller) is a safety critical embedded system where hardware and software are tightly coupled. The PLC has HdS (Hardware dependent Software) and it is tightly coupled with RTOS (Real Time Operating System). Especially, system tasks that are tightly coupled with target hardware and RTOS kernel have large influence on the dependability of the entire PLC. Therefore, interface testing for system tasks that reflects the features of target hardware and RTOS kernel becomes the core of the PLC integration test. Here, we define interfaces as overlapped parts between two different layers on the system architecture. In this paper, we identify interfaces for system tasks and apply the identified interfaces to the safety grade PLC. Finally, we show the test results through the empirical study.

  11. Task demand, task management, and teamwork

    Energy Technology Data Exchange (ETDEWEB)

    Braarud, Per Oeivind; Brendryen, Haavar

    2001-03-15

    The current approach to mental workload assessment in process control was evaluated in 3 previous HAMMLAB studies, by analysing the relationship between workload-related measures and performance. The results showed that subjective task complexity rating was related to a team's control room performance, that mental effort (NASA-TLX) was weakly related to performance, and that overall activity level was unrelated to performance. The results support the argument that general cognitive measures, i.e., mental workload, are weakly related to performance in the process control domain. This implies that workload concepts other than general mental workload are needed for valid assessment of human reliability and for valid assessment of control room configurations. An assessment of task load in process control suggested that how effort is used to handle task demand is more important than the level of effort invested to solve the task. The report suggests two main workload-related concepts with a potential as performance predictors in process control: task requirements, and the work style describing how effort is invested to solve the task. The task requirements are seen as composed of individual task demand and team demand. In a similar way, work style is seen as composed of individual task management and teamwork style. A framework for the development of the concepts is suggested based on a literature review and experiences from HAMMLAB research. It is suggested that operational definitions of workload concepts should be based on observable control room behaviour, to assure a potential for developing performance-shaping factors. Finally, an explorative analysis of teamwork measures and performance in one study indicated that teamwork concepts are related to performance. This lends support to the suggested development of team demand and teamwork style as elements of a framework for the analysis of workload in process control. (Author)

  12. Measuring International Migration in Azerbaijan

    Directory of Open Access Journals (Sweden)

    Serhat Yüksel

    2018-01-01

    Full Text Available International migration significantly affects the economic, social, cultural, and political conditions of a country. The reasons for international migration should therefore be analyzed in order to control this problem. The purpose of this study is to determine the factors influencing international migration in Azerbaijan. In this scope, annual data on 11 explanatory variables for the period 1995–2015 were analyzed via the Multivariate Adaptive Regression Splines (MARS) method. According to the results of this analysis, it was identified that people prefer to move to other countries in the case of high unemployment rates. In addition, the results of the study show that population growth and a high mortality rate increase the migration level. Considering these results, it was recommended that Azerbaijan focus on these aspects to control its international migration problem.

  13. Wages, Welfare Benefits and Migration.

    Science.gov (United States)

    Kennan, John; Walker, James R

    2010-05-01

    Differences in economic opportunities give rise to strong migration incentives, across regions within countries, and across countries. In this paper we focus on responses to differences in welfare benefits across States. We apply the model developed in Kennan and Walker (2008), which emphasizes that migration decisions are often reversed, and that many alternative locations must be considered. We model individual decisions to migrate as a job search problem. A worker starts the life-cycle in some home location and must determine the optimal sequence of moves before settling down. The model is sparsely parameterized. We estimate the model using data from the National Longitudinal Survey of Youth (1979). Our main finding is that income differences do help explain the migration decisions of young welfare-eligible women, but large differences in benefit levels provide surprisingly weak migration incentives.

  14. Migrating and herniating hydatid cysts

    International Nuclear Information System (INIS)

    Koc, Zafer; Ezer, Ali

    2008-01-01

    Objective: To present the prevalence and imaging findings of patients with hydatid disease (HD) showing features of migration or herniation of hydatid cysts (HCs) and to underline the clinical significance of this condition. Materials and methods: Between May 2003 and June 2006, 212 patients with HD were diagnosed by abdomen and/or thorax CT and searched for migrating or herniating HCs. Imaging findings of 7 patients (5 women, 2 men with an age range of 19-63 years; mean ± S.D., 44 ± 19 years) with HD showing transdiaphragmatic migration (6 subjects) or femoral herniation (1 subject) were evaluated. Diagnoses of all the patients were established by pathologic examination, and migration or herniation was confirmed by surgery in all patients. Results: Liver HD was identified in 169 (79.7%) of the 212 patients with HD. Transdiaphragmatic migration of HCs was identified in 6 (3.5%) of the 169 patients with liver HD. In one patient, femoral herniation of a retroperitoneal HC into the proximal anterior thigh was identified. All of these seven patients exhibiting migration or herniation of HCs had active HCs including 'daughter cysts'. Two patients had previous surgery because of liver HD, and no supradiaphragmatic lesion was noted before the operation. Findings of migration or herniation were confirmed by surgery. Conclusion: Active HCs may show migration or herniation due to the pressure difference between the anatomic cavities and, in some patients, with the contribution of gravity. Previous surgery may be a contributing factor for migration, as seen in two of our patients. The possibility of migration or herniation in patients with HD should be considered before surgery

  15. Current Migration Movements in Europe

    Directory of Open Access Journals (Sweden)

    Jelena Zlatković Winter

    2004-09-01

    Full Text Available After a brief historical review of migrations in Europe, the paper focuses on current migration trends and their consequences. At the end of the 1950s, Western Europe began to recruit labour from several Mediterranean countries – Italy, Spain, Portugal and former Yugoslavia, and later from Morocco, Algeria, Tunisia and Turkey. Some countries, such as France, Great Britain and the Netherlands, recruited also workers from their former colonies. In 1970 Germany had the highest absolute number of foreigners, followed by France, and then Switzerland and Belgium. The total number of immigrants in Western Europe was twelve million. During the 1970s mass recruitment of foreign workers was abandoned, and only the arrival of their family members was permitted, which led to family reunification in the countries of employment. Europe closed its borders, with the result that clandestine migration increased. The year 1989 was a turning point in the history of international migrations. The political changes in Central and Eastern Europe brought about mass migration to the West, which culminated in the so-called “mass movement of 1989–1990”. The arrival of ethnic Germans in Germany, migration inside and outside of the territory of the former Soviet Union, an increase in the number of asylum seekers and displaced persons, due to armed conflicts, are – according to the author – the main traits of current migration. The main part of the paper discusses the causes and effects of this mass wave, as well as trends in labour migration, which is still present. The second part of the paper, after presenting a typology of migrations, deals with the complex processes that brought about the formation of new communities and led to the phenomenon of new ethnic minorities and to corresponding migration policies in Western European countries that had to address these issues.

  16. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    OpenAIRE

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable constraint, and hence they have to be assigned to the most appropriate VMs at the initial placement itself. In practice, the arriving jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM’s mul...
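
    The record does not spell out the improved algorithm, but the baseline it builds on, weighted round robin assignment of tasks to VMs, is simple to sketch. The Python fragment below shows only that baseline, with hypothetical VM names and weights.

      from itertools import cycle

      def weighted_round_robin(tasks, vm_weights):
          """Assign tasks to VMs in proportion to their integer weights (e.g. capacities)."""
          schedule = [vm for vm, w in vm_weights.items() for _ in range(w)]
          assignment = {vm: [] for vm in vm_weights}
          for task, vm in zip(tasks, cycle(schedule)):
              assignment[vm].append(task)
          return assignment

      print(weighted_round_robin(range(8), {"vm1": 3, "vm2": 1}))
      # vm1 receives three tasks for every one given to vm2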

  17. Asynchronous Task-Based Polar Decomposition on Manycore Architectures

    KAUST Repository

    Sukkari, Dalal

    2016-10-25

    This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
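
    For orientation, the fixed-weight Halley iteration that QDWH generalises (QDWH replaces the constants 3, 1, 3 with dynamically computed weights and evaluates each step through a QR factorization) can be written in a few lines of NumPy. This is a simplified reference, not the task-based implementation described above.

      import numpy as np

      def polar_halley(A, tol=1e-12, max_iter=50):
          """Polar decomposition A = U @ H via the (unweighted) Halley iteration."""
          X = A / np.linalg.norm(A, 2)              # scale so singular values are <= 1
          I = np.eye(A.shape[1])
          for _ in range(max_iter):
              G = X.conj().T @ X
              X_new = X @ (3 * I + G) @ np.linalg.inv(I + 3 * G)
              done = np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(X_new, 'fro')
              X = X_new
              if done:
                  break
          H = X.conj().T @ A
          return X, (H + H.conj().T) / 2            # U and the symmetrised Hermitian factor

      A = np.random.default_rng(0).standard_normal((5, 3))
      U, H = polar_halley(A)
      print(np.allclose(U @ H, A), np.allclose(U.conj().T @ U, np.eye(3)))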

  18. Project Tasks in Robotics

    DEFF Research Database (Denmark)

    Sørensen, Torben; Hansen, Poul Erik

    1998-01-01

    Description of the compulsory project tasks to be carried out as a part of DTU course 72238 Robotics.

  19. Task assignment and coaching

    OpenAIRE

    Dominguez-Martinez, S.

    2009-01-01

    An important task of a manager is to motivate her subordinates. One way in which a manager can give incentives to junior employees is through the assignment of tasks. How a manager allocates tasks in an organization provides information to the junior employees about his ability. Without coaching from a manager, the junior employee only has information about his past performance. Based on his past performance, a talented junior who has performed a difficult task sometimes decides to leave the...

  20. Functional Task Test (FTT)

    Science.gov (United States)

    Bloomberg, Jacob J.; Mulavara, Ajitkumar; Peters, Brian T.; Rescheke, Millard F.; Wood, Scott; Lawrence, Emily; Koffman, Igor; Ploutz-Snyder, Lori; Spiering, Barry A.; Feeback, Daniel L.; hide

    2009-01-01

    This slide presentation reviews the Functional Task Test (FTT), an interdisciplinary testing regimen that has been developed to evaluate astronaut postflight functional performance and related physiological changes. The objectives of the project are: (1) to develop a set of functional tasks that represent critical mission tasks for the Constellation Program, (2) to determine the ability to perform these tasks after space flight, (3) to identify the key physiological factors that contribute to functional decrements, and (4) to use this information to develop targeted countermeasures.

  1. Task assignment and coaching

    NARCIS (Netherlands)

    Dominguez-Martinez, S.

    2009-01-01

    An important task of a manager is to motivate her subordinates. One way in which a manager can give incentives to junior employees is through the assignment of tasks. How a manager allocates tasks in an organization provides information to the junior employees about his ability. Without coaching

  2. Gas migration through cement slurries analysis: A comparative laboratory study

    Directory of Open Access Journals (Sweden)

    Arian Velayati

    2015-12-01

    Full Text Available Cementing is an essential part of every drilling operation. Protection of the wellbore from formation fluid invasion is one of the primary tasks of a cement job. Failure in this task results in catastrophic events, such as blowouts. Hence, in order to save the well and avoid risky and operationally difficult remedial cementing, the slurry must be optimized to resist the gas migration phenomenon. In this paper, the performance of conventional slurries facing gas invasion was reviewed and compared with that of a modified slurry containing a special gas migration additive, using a fluid migration analyzer device. The results of this study reveal the importance of proper additive utilization in slurry formulations. The rate of gas flow through the slurry in neat cement is very high; by using different types of additives, we observe clear changes in the performance of the cement system. The rate of gas flow in neat class H cement was reported as 36,000 ml/hr, while the optimized cement formulation with anti-gas-migration and thixotropic agents showed a gas flow rate of 13.8 ml/hr.

  3. Automated Tracking of Cell Migration with Rapid Data Analysis.

    Science.gov (United States)

    DuChez, Brian J

    2017-09-01

    Cell migration is essential for many biological processes including development, wound healing, and metastasis. However, studying cell migration often requires the time-consuming and labor-intensive task of manually tracking cells. To accelerate the task of obtaining coordinate positions of migrating cells, we have developed a graphical user interface (GUI) capable of automating the tracking of fluorescently labeled nuclei. This GUI provides an intuitive user interface that makes automated tracking accessible to researchers with no image-processing experience or familiarity with particle-tracking approaches. Using this GUI, users can interactively determine a minimum of four parameters to identify fluorescently labeled cells and automate acquisition of cell trajectories. Additional features allow for batch processing of numerous time-lapse images, curation of unwanted tracks, and subsequent statistical analysis of tracked cells. Statistical outputs allow users to evaluate migratory phenotypes, including cell speed, distance, displacement, and persistence, as well as measures of directional movement, such as forward migration index (FMI) and angular displacement. © 2017 by John Wiley & Sons, Inc.
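
    The migratory statistics listed above (speed, distance, displacement, persistence, FMI and angular displacement) follow directly from the tracked coordinates. The short Python sketch below illustrates these standard definitions for a single trajectory; it is not the GUI's actual code, and it assumes equally spaced frames and that the x axis is the stimulus direction for the FMI.

```python
import numpy as np

def migration_stats(xy, dt=1.0):
    """Compute standard migration metrics from an (N, 2) array of cell positions; dt is the frame interval."""
    xy = np.asarray(xy, dtype=float)
    steps = np.diff(xy, axis=0)                      # per-frame displacement vectors
    step_lengths = np.linalg.norm(steps, axis=1)
    total_distance = step_lengths.sum()              # accumulated path length
    net_vec = xy[-1] - xy[0]
    net_displacement = np.linalg.norm(net_vec)       # start-to-end distance
    duration = dt * (len(xy) - 1)
    speed = total_distance / duration
    persistence = net_displacement / total_distance if total_distance > 0 else 0.0
    # Forward migration index along the x axis (assumed stimulus direction)
    fmi_x = net_vec[0] / total_distance if total_distance > 0 else 0.0
    # Angular displacement of the net movement relative to the x axis, in degrees
    angle = np.degrees(np.arctan2(net_vec[1], net_vec[0]))
    return {"speed": speed, "distance": total_distance,
            "displacement": net_displacement, "persistence": persistence,
            "FMI_x": fmi_x, "angle_deg": angle}

# Example: a short synthetic track sampled every 5 minutes
print(migration_stats([[0, 0], [1, 0.5], [2, 0.4], [3, 1.0]], dt=5.0))
```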

  4. Virtual reality hardware and graphic display options for brain-machine interfaces.

    Science.gov (United States)

    Marathe, Amar R; Carey, Holle L; Taylor, Dawn M

    2008-01-15

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target-matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing.

  5. RTOS acceleration in an MPSoC with reconfigurable hardware

    NARCIS (Netherlands)

    Zaykov, P.G.; Kuzmanov, G.; Molnos, A.M.; Goossens, K.G.W.

    2016-01-01

    In this paper, we address the problem of improving the performance of real-time embedded Multiprocessor System-on-Chip (MPSoC). Such MPSoCs often execute applications composed of multiple tasks. The tasks on each processor are scheduled by a Real-Time Operating System (RTOS) instance. To improve

  6. Geochemistry and radionuclide migration

    International Nuclear Information System (INIS)

    Isherwood, D.

    1978-01-01

    Theoretically, the geochemical barrier can provide a major line of defense in protecting the biosphere from the hazards of nuclear waste. The most likely processes involved are easily identified. Preliminary investigations using computer modeling techniques suggest that retardation is an effective control on radionuclide concentrations. Ion exchange reactions slow radionuclide migration and allow more time for radioactive decay and dispersion. For some radionuclides, solubility alone may limit concentrations to less than the maximum permissible now considered acceptable by the Federal Government. The effectiveness of the geochemical barrier is ultimately related to the repository site characteristics. Theory alone tells us that geochemical controls will be most efficient in an environment that provides for maximum ion exchange and the precipitation of insoluble compounds. In site selection, consideration should be given to rock barriers with high ion exchange capacity that might also act as semi-permeable membranes. Also important in evaluating the site's potential for effective geochemical controls are the oxidation potentials, pH and salinity of the groundwater

  7. Radionuclide migration in soils

    Energy Technology Data Exchange (ETDEWEB)

    Demir, M [Ingenieurgesellschaft Bonnenberg und Drescher, Juelich (Germany, F.R.)

    1979-01-01

    Unplanned releases from a nuclear installation - e.g., leakage from a storage tank or other incident - can result in the escape of contaminants such as U, Pu, Cs, Sr, T etc. Nuclide transport through the ground is governed by characteristics of the subsurface hydrology and the specific nuclides under consideration. Unsaturated soil layers result in a transport rate so low as to be negligible. Radionuclides reaching the ground water are assumed to endanger human life because of potential uncontrolled ingestion. The most dangerous nuclides are long-lived and not absorbed, or very poorly absorbed, in the soil. During migration of nuclides through saturated soil layers, the concentration can be reduced by dilution. Preliminary results indicate that tritium is spread with ground water velocity. Its concentration can be reduced only by diffusion, dispersion and radioactive decay. Alpha-emitters are strongly retained; transport velocities of alpha-emitters are approximately one thousandth (10⁻³) that of T. Transport velocities of Cs and Sr are approximately one hundredth (10⁻²) and one tenth (10⁻¹) that of T, respectively.

  8. Radionuclide migration in soils

    International Nuclear Information System (INIS)

    Demir, M.

    1979-01-01

    Unplanned releases from a nuclear installation - e.g., leakage from a storage tank or other incident - can result in the escape of contaminants such as U, Pu, Cs, Sr, T etc. Nuclide transport through the ground is governed by characteristics of the subsurface hydrology and the specific nuclides under consideration. Unsaturated soil layers result in a transport rate so low as to be negligible. Radionuclides reaching the ground water are assumed to endanger human life because of potential uncontrolled ingestion. The most dangerous nuclides are long-lived and not absorbed, or very poorly absorbed, in the soil. During migration of nuclides through saturated soil layers, the concentration can be reduced by dilution. Preliminary results indicate that tritium is spread with ground water velocity. Its concentration can be reduced only by diffusion, dispersion and radioactive decay. Alpha-emitters are strongly retained; transport velocities of alpha-emitters are approximately one thousandth (10⁻³) that of T. Transport velocities of Cs and Sr are approximately one hundredth (10⁻²) and one tenth (10⁻¹) that of T, respectively. (orig./HP) [de

  9. Psychosocial Aspects of Migration

    Directory of Open Access Journals (Sweden)

    Ayla Tuzcu

    2014-02-01

    Full Text Available The phenomenon of migration, which occurs as a result of the mobility of individuals between various regions and is considered a social change process, brings along various factors. Among these factors, the most important one is the culture of the new society where the immigrant begins to live and the process of adaptation to this culture. Individuals from different cultures are required to live together, cope with differences and overcome the difficulties. The process of adaptation to the new lifestyle might cause the individual to have feelings such as loneliness, social isolation, alienation, regret and self-depreciation, and consequently to experience greater stress. Being unable to cope with stress efficiently creates risks for individuals in terms of health problems such as anxiety and depression. Healthcare professionals are required to evaluate the life styles, difficulties and coping levels of immigrants in order to protect and develop their mental health. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2014; 6(1): 56-66]

  10. MIGRATION IMPACT ON ECONOMICAL SITUATION

    Directory of Open Access Journals (Sweden)

    Virginia COJOCARU

    2016-01-01

    Full Text Available This paper presents recent trends and flows of labor migration and its impact on economic and social life. The main aim of this research is to establish the influence of migration on the European economy and its competitiveness. The research methods used are comparison, analysis, deduction, statistics and modeling. The economic impact of migration has been intensively studied but is still often driven by ill-informed perceptions, which, in turn, can lead to public antagonism towards migration. These negative views risk jeopardising efforts to adapt migration policies to the new economic and demographic challenges facing many countries. Migration Policy looks at the evidence for how immigrants affect the economy in three main areas: the labour market, the public purse and economic growth. In Europe, the scope of labour mobility greatly increased within the EU/EFTA zones following the EU enlargements of 2004, 2007 and 2014-2015. This added to labour markets' adjustment capacity. Recent estimates suggest that as much as a quarter of the asymmetric labour market shock - that is, one occurring at different times and with different intensities across countries - may have been absorbed by migration within a year.

  11. Vulnerable to HIV / AIDS. Migration.

    Science.gov (United States)

    Fernandez, I

    1998-01-01

    This special report discusses the impact of globalization, patterns of migration in Southeast Asia, gender issues in migration, the links between migration and HIV/AIDS, and spatial mobility and social networks. Migrants are particularly marginalized in countries that blame migrants for transmission of infectious and communicable diseases and other social ills. Effective control of HIV/AIDS among migrant and native populations requires a multisectoral approach. Programs should critically review the privatization of health care services and challenge economic models that polarize the rich and the poor, men and women, North and South, and migrant and native. Programs should recognize the equality between locals and migrants in receipt of health services. Countermeasures should have input from migrants in order to reduce the conditions that increase vulnerability to HIV/AIDS. Gender-oriented research is needed to understand women's role in migration. Rapid assessment has obscured the human dimension of migrants' vulnerability to HIV. Condom promotion is not enough. Migration is a major consequence of globalization, which holds the promise, real or imagined, of prosperity for all. Mass migration can be fueled by explosive regional developments. In Southeast Asia, migration has been part of the process of economic development. The potential to emigrate increases with greater per capita income. "Tiger" economies have been labor importers. Safe sex is not practiced in many Asian countries because risk is not taken seriously. Migrants tend to be used as economic tools, without consideration of social adjustment and sex behavior among singles.

  12. Income Inequality and Migration in Vietnam

    OpenAIRE

    NGUYEN, Tien Dung

    2012-01-01

    In this paper, we have analyzed the recent trends in income inequality, internal and international migrations and investigated the impact of migration on income distribution in Vietnam. Our analysis shows that the effects of migration on income inequality vary with different types of migration, depending on who migrate and where they migrate. Foreign remittances tend to flow toward more affluent households, and they increase income inequality. By contrast, domestic remittances accrue more to ...

  13. [The productive structure and migration].

    Science.gov (United States)

    Fernandez, M

    1980-01-01

    The author discusses the possibility of determining the proper approach to the study of migration, with a focus on the importance of global, structural, and historical analysis of the phenomenon. A general theoretical outline is presented that tends to show migration as an integral part of the process of social change. The sociological focus on modernization as a theoretical guide influencing the study of migration in Latin America is evaluated. The concept of overpopulation is explained in relation to the migratory process, with reference to capitalist and non-capitalist forms of production.

  14. Migration of accreting giant planets

    Science.gov (United States)

    Robert, C.; Crida, A.; Lega, E.; Méheut, H.

    2017-09-01

    Giant planets forming in protoplanetary disks migrate relative to their host star. By repelling the gas in their vicinity, they form gaps in the disk's structure. If they are effectively locked in their gap, it follows that their migration rate is governed by the accretion of the disk itself onto the star, in a so-called type II fashion. Recent results showed however that a locking mechanism was still lacking, and was required to understand how giant planets may survive their disk. We propose that planetary accretion may play this part, and help reach this slow migration regime.

  15. MIGRATION – EFFECTS AND SOLUTIONS

    Directory of Open Access Journals (Sweden)

    Raluca Cruceru

    2012-12-01

    Full Text Available There are three main flows that influence workforce performance: worker migration, the dissemination of knowledge, and overseas development assistance. The present paper deals with the analysis of these three, but mainly with migration. We consider it one of the most important phenomena currently at work in the market and the one with the highest negative impact on the economic and social situation. We present a case study regarding the situation of migration in Romania and the main candidates for Romanian intelligence imports, the main issues and possible solutions to the problems encountered.

  16. Migration of ATLAS PanDA to CERN

    International Nuclear Information System (INIS)

    Stewart, Graeme Andrew; Klimentov, Alexei; Maeno, Tadashi; Nevski, Pavel; Nowak, Marcin; De Castro Faria Salgado, Pedro Emanuel; Wenaus, Torre; Koblitz, Birger; Lamanna, Massimo

    2010-01-01

    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system, which manages their execution on the grid. PanDA also plays a key role in production task definition and the data set replication request system. PanDA has recently been migrated from Brookhaven National Laboratory (BNL) to the European Organization for Nuclear Research (CERN), a process we describe here. We discuss how the new infrastructure for PanDA, which relies heavily on services provided by CERN IT, was introduced in order to make the service as reliable as possible and to allow it to be scaled to ATLAS's increasing need for distributed computing. The migration involved changing the backend database for PanDA from MySQL to Oracle, which impacted upon the database schemas. The process by which the client code was optimised for the new database backend is discussed. We describe the procedure by which the new database infrastructure was tested and commissioned for production use. Operations during the migration had to be planned carefully to minimise disruption to ongoing ATLAS offline computing. All parts of the migration were fully tested before commissioning the new infrastructure and the gradual migration of computing resources to the new system allowed any problems of scaling to be addressed.

  17. Intersection points for the driving of applier processes of the hardware control of the ZEUS forward detector

    International Nuclear Information System (INIS)

    Siemon, T.

    1992-08-01

    The ZEUS forward detector is built of drift- and transition-radiation chambers which are supported by many peripheral devices. The resulting complex system has to be monitored and controlled continuously to preserve safety and to achieve optimal performance. For this task a Hardware-Control-System (HWC) has been developed. Ten VME and OS9-based microprocessors which are connected by Ethernet and VME-bus are provided to run the control- and monitoring tasks. Special attention has been paid to the development of efficient user-interfaces: RDT, an object-oriented database-toolkit, serves as an interface to the data of the HWC. The concept and the usage of this interface are outlined. Finally special features that may be useful for other applications are discussed. (orig.) [de

  18. Radioisotope thermoelectric generator licensed hardware package and certification tests

    International Nuclear Information System (INIS)

    Goldmann, L.H.; Averette, H.S.

    1994-01-01

    This paper presents the Licensed Hardware package and the Certification Test portions of the Radioisotope Thermoelectric Generator Transportation System. This package has been designed to meet those portions of the Code of Federal Regulations (10 CFR 71) relating to "Type B" shipments of radioactive materials. The detailed information for the anticipated license is presented in the safety analysis report for packaging, which is now in process and undergoing necessary reviews. As part of the licensing process, a full-size Certification Test Article unit, which has modifications slightly different than the Licensed Hardware or production shipping units, is used for testing. Dimensional checks of the Certification Test Article were made at the manufacturing facility. Leak testing and drop testing were done at the 300 Area of the US Department of Energy's Hanford Site near Richland, Washington. The hardware includes independent double containments to prevent the environmental spread of ²³⁸Pu, impact limiting devices to protect portions of the package from impacts, and thermal insulation to protect the seal areas from excess heat during accident conditions. The package also features electronic feed-throughs to monitor the Radioisotope Thermoelectric Generator's temperature inside the containment during the shipment cycle. This package is designed to safely dissipate the typical 4500 thermal watts produced in the largest Radioisotope Thermoelectric Generators. The package also contains provisions to ensure leak tightness when radioactive materials, such as a Radioisotope Thermoelectric Generator for the Cassini Mission, planned for 1997 by the National Aeronautics and Space Administration, are being prepared for shipment. These provisions include test ports used in conjunction with helium mass spectrometers to determine seal leakage rates of each containment during the assembly process.

  19. Multi-User Hardware Solutions to Combustion Science ISS Research

    Science.gov (United States)

    Otero, Angel M.

    2001-01-01

    In response to the budget environment and to expand on the International Space Station (ISS) Fluids and Combustion Facility (FCF) Combustion Integrated Rack (CIR), common hardware approach, the NASA Combustion Science Program shifted focus in 1999 from single investigator PI (Principal Investigator)-specific hardware to multi-user 'Minifacilities'. These mini-facilities would take the CIR common hardware philosophy to the next level. The approach that was developed re-arranged all the investigations in the program into sub-fields of research. Then common requirements within these subfields were used to develop a common system that would then be complemented by a few PI-specific components. The sub-fields of research selected were droplet combustion, solids and fire safety, and gaseous fuels. From these research areas three mini-facilities have sprung: the Multi-user Droplet Combustion Apparatus (MDCA) for droplet research, Flow Enclosure for Novel Investigations in Combustion of Solids (FEANICS) for solids and fire safety, and the Multi-user Gaseous Fuels Apparatus (MGFA) for gaseous fuels. These mini-facilities will develop common Chamber Insert Assemblies (CIA) and diagnostics for the respective investigators complementing the capability provided by CIR. Presently there are four investigators for MDCA, six for FEANICS, and four for MGFA. The goal of these multi-user facilities is to drive the cost per PI down after the initial development investment is made. Each of these mini-facilities will become a fixture of future Combustion Science NASA Research Announcements (NRAs), enabling investigators to propose against an existing capability. Additionally, an investigation is provided the opportunity to enhance the existing capability to bridge the gap between the capability and their specific science requirements. This multi-user development approach will enable the Combustion Science Program to drive cost per investigation down while drastically reducing the time

  20. Using Innovative Technologies for Manufacturing Rocket Engine Hardware

    Science.gov (United States)

    Betts, E. M.; Eddleman, D. E.; Reynolds, D. C.; Hardin, N. A.

    2011-01-01

    Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As the United States enters into the next space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, rapid manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on NASA's Space Launch System (SLS) upper stage engine, J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator (GG) discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using a workhorse gas generator (WHGG) test fixture at MSFC's East Test Area, the duct was subjected to extreme J-2X hot gas environments during 7 tests for a total of 537 seconds of hot-fire time. The duct underwent extensive post-test evaluation and showed no signs of degradation. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  1. Reconfigurable ATCA hardware for plasma control and data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, B.B., E-mail: bernardo@ipfn.ist.utl.p [Associacao EURATOM/IST Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal); Batista, A.J.N.; Correia, M.; Neto, A.; Fernandes, H.; Goncalves, B.; Sousa, J. [Associacao EURATOM/IST Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)

    2010-07-15

    The IST/EURATOM Association is developing a new generation of control and data acquisition hardware for fusion experiments based on the ATCA architecture. This emerging open standard offers a significantly higher data throughput over a reliable High Availability (HA) mechanical and electrical platform. One of these ATCA boards has 32 galvanically isolated ADC channels (18 bit) each mounted on a swappable plug-in card, 8 DAC channels (16 bit), 8 digital I/O channels and embeds a high performance XILINX Virtex 4 family field programmable gate array (FPGA). The specific modular and configurable hardware design enables adaptable utilization of the board in dissimilar applications. The first configuration, specially developed for tokamak plasma Vertical Stabilization, consists of a Multiple-Input-Multiple-Output (MIMO) controller that is capable of feedback loops faster than 1 ms using a multitude of input signals fed from different boards communicating through the Aurora™ point-to-point protocol. Massively parallel algorithms can be implemented on the FPGA either with programmed digital logic, using a HDL hardware description language, or within its internal silicon PowerPC™ running a full fledged real-time operating system. The second board configuration is dedicated for transient recording of the entire 32 channels at 2 MSamples/s to the on-board 512 MB DDR2 memory. Signal data retrieval is accelerated by a DMA-driven PCI Express™ x1 Interface to the ATCA system controller, providing an overall throughput in excess of 100 MB/s. This paper illustrates these developments and discusses possible configurations for foreseen applications.

  2. Using Innovative Techniques for Manufacturing Rocket Engine Hardware

    Science.gov (United States)

    Betts, Erin M.; Reynolds, David C.; Eddleman, David E.; Hardin, Andy

    2011-01-01

    Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using the Workhorse Gas Generator (WHGG) test setup at MSFC's East Test Area test stand 116, the duct was subjected to extreme J-2X gas generator environments and endured a total of 538 seconds of hot-fire time. The duct survived the testing and was inspected after the test. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  3. Proof-Carrying Hardware: Concept and Prototype Tool Flow for Online Verification

    OpenAIRE

    Drzevitzky, Stephanie; Kastens, Uwe; Platzner, Marco

    2010-01-01

    Dynamically reconfigurable hardware combines hardware performance with software-like flexibility and finds increasing use in networked systems. The capability to load hardware modules at runtime provides these systems with an unparalleled degree of adaptivity but at the same time poses new challenges for security and safety. In this paper, we elaborate on the presentation of proof carrying hardware (PCH) as a novel approach to reconfigurable system security. PCH takes ...

  4. Combining high productivity with high performance on commodity hardware

    DEFF Research Database (Denmark)

    Skovhede, Kenneth

    -like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU capable code. I then introduce the numeric library for the Common Intermediate Language, NumCIL. I can then utilize the vector programming model from NumCIL and map this to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups.

  5. Hardware support for software controlled fast reconfiguration of performance counters

    Science.gov (United States)

    Salapura, Valentina; Wisniewski, Robert W.

    2013-06-18

    Hardware support for software controlled reconfiguration of performance counters may include a plurality of performance counters collecting one or more counts of one or more selected activities. A storage element stores data value representing a time interval, and a timer element reads the data value and detects expiration of the time interval based on the data value and generates a signal. A plurality of configuration registers stores a set of performance counter configurations. A state machine receives the signal and selects a configuration register from the plurality of configuration registers for reconfiguring the one or more performance counters.
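
    As a rough illustration of the mechanism described in this record, the sketch below models, in Python, a state machine that cycles through a set of stored counter configurations whenever a programmed time interval expires. The class names, event names and interval value are invented for the example and do not correspond to any real hardware interface.

```python
import time

class CounterReconfigurator:
    """Toy model of timer-driven performance-counter reconfiguration (names are illustrative only)."""

    def __init__(self, configurations, interval_s):
        self.configurations = configurations      # the stored set of counter configurations
        self.interval_s = interval_s              # data value watched by the timer element
        self.current = 0
        self.deadline = time.monotonic() + interval_s

    def on_tick(self, counters):
        """Called periodically; reconfigures the counters when the interval has expired."""
        if time.monotonic() >= self.deadline:     # timer element detects expiration
            self.current = (self.current + 1) % len(self.configurations)
            counters.configure(self.configurations[self.current])
            self.deadline = time.monotonic() + self.interval_s

class FakeCounters:
    """Stand-in for the actual performance counters."""
    def configure(self, events):
        print("now counting:", events)

recfg = CounterReconfigurator([["cycles", "instructions"], ["cache-misses", "branch-misses"]], 0.5)
counters = FakeCounters()
for _ in range(4):
    time.sleep(0.3)
    recfg.on_tick(counters)
```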

  6. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Directory of Open Access Journals (Sweden)

    Carvalho Paulo F.

    2018-01-01

    Full Text Available Controlled nuclear fusion aims to obtain energy from the collision of particles confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements inside of plasma that is kept at high temperatures (millions of degrees Celsius). Due to high temperatures and magnetic confinement, plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout fusion experiments processes. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the Peripheral Component Interconnect Industrial Manufacturers Group (PICMG®), to meet the demands of telecommunications that require large amounts of data (TB) transportation at high transfer rates (Gb/s), to ensure high availability including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store for later analysis, make critical decisions in real time and provide status reports either from the experience itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, notify the system operator of occurred events, decisions taken to acknowledge and implemented changes. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency

  7. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Science.gov (United States)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, AntÓnio P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, AntÓnio J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from the collision of particles confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements inside of plasma that is kept at high temperatures (millions of degrees Celsius). Due to high temperatures and magnetic confinement, plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout fusion experiments processes. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the Peripheral Component Interconnect Industrial Manufacturers Group (PICMG®), to meet the demands of telecommunications that require large amounts of data (TB) transportation at high transfer rates (Gb/s), to ensure high availability including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store for later analysis, make critical decisions in real time and provide status reports either from the experience itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, notify the system operator of occurred events, decisions taken to acknowledge and implemented changes. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency scenarios
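
    A hardware-management layer of the kind described here ultimately reduces to a periodic loop that reads sensors, compares readings against limits, raises alarms and logs events. The Python sketch below is a generic caricature of such a loop with made-up sensor names and thresholds; it is not code from, and makes no claims about, the AdvancedTCA system discussed in the record.

```python
import random
import time

SENSOR_LIMITS = {"board_temp_c": (10.0, 70.0), "supply_3v3": (3.1, 3.5), "fan_rpm": (2000, 10000)}

def read_sensors():
    # Stand-in for real sensor reads from the shelf hardware
    return {"board_temp_c": random.uniform(30, 80),
            "supply_3v3": random.uniform(3.2, 3.4),
            "fan_rpm": random.uniform(1500, 6000)}

def check(readings, event_log):
    """Compare each reading against its limits; record an event and return any alarms."""
    alarms = []
    for name, value in readings.items():
        lo, hi = SENSOR_LIMITS[name]
        if not lo <= value <= hi:
            alarms.append((name, value))
            event_log.append(f"ALARM {name}={value:.1f} outside [{lo}, {hi}]")
    return alarms

event_log = []
for _ in range(3):                       # a few monitoring cycles
    alarms = check(read_sensors(), event_log)
    if alarms:
        print("operator notification:", alarms)
    time.sleep(0.1)
print("\n".join(event_log))
```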

  8. Graph based communication analysis for hardware/software codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1999-01-01

    In this paper we present a coarse grain CDFG (Control/Data Flow Graph) model suitable for hardware/software partitioning of single processes and demonstrate how it is necessary to perform various transformations on the graph structure before partitioning in order to achieve a structure that allows...... for accurate estimation of communication overhead between nodes mapped to different processors. In particular, we demonstrate how various transformations of control structures can lead to a more accurate communication analysis and more efficient implementations. The purpose of the transformations is to obtain...
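
    To make the idea of estimating communication overhead on a partitioned graph concrete, the short sketch below sums the data volumes of edges whose endpoint nodes are mapped to different processors. It is a simplified illustration with invented node names and volumes, not the CDFG model or cost function from the paper.

```python
# Edges of a small control/data flow graph: (source node, destination node, bytes transferred)
edges = [("read", "filter", 4096), ("filter", "fft", 4096),
         ("fft", "decide", 512), ("decide", "write", 64)]

# A candidate hardware/software partitioning: node -> processor
mapping = {"read": "cpu", "filter": "fpga", "fft": "fpga", "decide": "cpu", "write": "cpu"}

def communication_cost(edges, mapping, cost_per_byte=1.0):
    """Sum the volume of data crossing the partition boundary."""
    return sum(volume * cost_per_byte
               for src, dst, volume in edges
               if mapping[src] != mapping[dst])

print("inter-processor traffic:", communication_cost(edges, mapping), "byte-cost units")
```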

  9. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  10. Continuous integration for open hardware

    OpenAIRE

    Peral Chico, David del

    2012-01-01

    In recent years computing, and more specifically hardware, has been evolving towards embedded systems. The appearance of new markets such as microcomputers, smart televisions, etc., and the massification of existing ones such as smartphones and tablets, amplifies this phenomenon. This is due to the advantages of such systems in terms of cost at scale, optimization and performance, energy consumption or size, among others. Embedded systems are growing in so...

  11. Hardware and software constructs for a vibration analysis network

    International Nuclear Information System (INIS)

    Cook, S.A.; Crowe, R.D.; Toffer, H.

    1985-01-01

    Vibration level monitoring and analysis has been initiated at N Reactor, the dual purpose reactor operated at Hanford, Washington by UNC Nuclear Industries (UNC) for the Department of Energy (DOE). The machinery to be monitored was located in several buildings scattered over the plant site, necessitating an approach using satellite stations to collect, monitor and temporarily store data. The satellite stations are, in turn, linked to a centralized processing computer for further analysis. The advantages of a networked data analysis system are discussed in this paper along with the hardware and software required to implement such a system

  12. Creating a device for deaf people (Arduino hardware platform)

    OpenAIRE

    Codina Barberà, Marc

    2013-01-01

    The work presented in this report aims at creating an alert prototype for deaf people. The system will facilitate the interaction between a person with hearing problems and the audible signals that may occur in a house. The prototype has been developed on the Arduino hardware platform, together with a smartphone running the Android operating system and the Bluetooth and ZigBee wireless communication technologies.

  13. Benchmarking and Hardware-In-The-Loop Operation of a ...

    Science.gov (United States)

    Engine performance evaluation in support of LD MTE. EPA used elements of its ALPHA model to apply hardware-in-the-loop (HIL) controls to the SKYACTIV engine test setup to better understand how the engine would operate in a chassis test when combined with future leading-edge technologies, an advanced high-efficiency transmission, reduced mass, and reduced roadload. Predicting future vehicle performance with an Atkinson engine: as part of its technology assessment for the upcoming midterm evaluation of the 2017-2025 LD vehicle GHG emissions regulation, EPA has been benchmarking engines and transmissions to generate inputs for use in its ALPHA model

  14. Technology Corner: Dating of Electronic Hardware for Prior Art Investigations

    Directory of Open Access Journals (Sweden)

    Sellam Ismail

    2012-03-01

    Full Text Available In many legal matters, specifically patent litigation, determining and authenticating the date of computer hardware or other electronic products or components is often key to establishing the item as legitimate evidence of prior art. Such evidence can be used to buttress claims of technologies available or of events transpiring by or at a particular date. In 1945, the Electronics Industry Association published a standard, EIA 476-A, standardized in the reference Source and Date Code Marking (Electronic Industries Association, 1988).

  15. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computers and systems. Books on software engineering typically portray software as if it exists in a vacuum with no relationship to the wider system. This is wrong because a system is more than software. It is composed of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  16. Surface moisture measurement system hardware acceptance test procedure

    International Nuclear Information System (INIS)

    Ritter, G.A.

    1996-01-01

    The purpose of this acceptance test procedure is to verify that the mechanical and electrical features of the Surface Moisture Measurement System are operating as designed and that the unit is ready for field service. This procedure will be used in conjunction with a software acceptance test procedure, which addresses testing of software and electrical features not addressed in this document. Hardware testing will be performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. These systems were developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks

  17. Deployment Testing of the De-Orbit Sail Flight Hardware

    OpenAIRE

    Hillebrandt, Martin; Meyer, Sebastian; Zander, Martin; Hühne, Christian

    2015-01-01

    The paper describes the results of the deployment testing of the De-Orbit Sail flight hardware, a drag sail for de-orbiting applications, performed by DLR. It addresses in particular the deployment tests of the full-scale sail subsystem and deployment force tests performed on the boom deployment module. For the full-scale sail testing a gravity compensation device is used which is described in detail. It allows observations of the in-plane interaction of the booms with the sail membrane and the...

  18. Hardware Prototyping of Neural Network based Fetal Electrocardiogram Extraction

    Science.gov (United States)

    Hasan, M. A.; Reaz, M. B. I.

    2012-01-01

    The aim of this paper is to model the algorithm for Fetal ECG (FECG) extraction from composite abdominal ECG (AECG) using VHDL (Very High Speed Integrated Circuit Hardware Description Language) for FPGA (Field Programmable Gate Array) implementation. An artificial neural network that provides an efficient and effective way of separating the FECG signal from the composite AECG signal has been designed. The proposed method gives an accuracy of 93.7% for R-peak detection in FHR monitoring. The designed VHDL model is synthesized and fitted into Altera's Stratix II EP2S15F484C3 using the Quartus II version 8.0 Web Edition for FPGA implementation.
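
    One classical way to separate a fetal component from an abdominal recording, useful as background here, is an adaptive linear neuron that predicts the maternal contribution from a thoracic reference and subtracts it. The Python sketch below shows that LMS-style idea on synthetic sinusoids; it is not the network architecture, data or VHDL model from the record, and all signal parameters are invented for illustration.

```python
import numpy as np

def extract_fecg(abdominal, maternal_ref, taps=8, lr=0.01):
    """Adaptive linear neuron (LMS): predict the maternal component from a reference
    signal and subtract it, leaving an estimate of the fetal ECG in the residual."""
    w = np.zeros(taps)
    fetal = np.zeros_like(abdominal)
    for n in range(taps, len(abdominal)):
        x = maternal_ref[n - taps:n][::-1]     # most recent reference samples
        y = w @ x                              # predicted maternal contribution
        e = abdominal[n] - y                   # residual = fetal ECG estimate + noise
        w += lr * e * x                        # LMS weight update
        fetal[n] = e
    return fetal

# Synthetic demo: crude sinusoidal stand-ins for maternal (1.2 Hz) and fetal (2.2 Hz) activity
t = np.arange(0, 10, 0.004)
maternal = np.sin(2 * np.pi * 1.2 * t)
fetal_true = 0.2 * np.sin(2 * np.pi * 2.2 * t)
abdominal = 0.8 * maternal + fetal_true + 0.01 * np.random.randn(len(t))
fecg_estimate = extract_fecg(abdominal, maternal)
```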

  19. Online Infrastructure in Supply Chain for Hardware Shops

    OpenAIRE

    Sørensen , Karl ,

    2014-01-01

    Part 4: Private Services; International audience; This article describes how the Scandinavian network communication system DATEX was used to build an online infrastructure in a retail chain of privately owned hardware shops and Do-It-Yourself (DIY) centers. The solution gave the staff in the shops the possibility of using EDP as early as 1983. The Internet did not exist at the time. EDP was not part of the daily work in the shop and was for most employees something unknown that took place at...

  20. System for processing an encrypted instruction stream in hardware

    Science.gov (United States)

    Griswold, Richard L.; Nickless, William K.; Conrad, Ryan C.

    2016-04-12

    A system and method of processing an encrypted instruction stream in hardware is disclosed. Main memory stores the encrypted instruction stream and unencrypted data. A central processing unit (CPU) is operatively coupled to the main memory. A decryptor is operatively coupled to the main memory and located within the CPU. The decryptor decrypts the encrypted instruction stream upon receipt of an instruction fetch signal from a CPU core. Unencrypted data is passed through to the CPU core without decryption upon receipt of a data fetch signal.
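
    The fetch-path behaviour described in this record can be caricatured in a few lines: decrypt only when the request is an instruction fetch, and pass data through untouched. The Python sketch below uses a trivial XOR stand-in for the cipher so that the routing logic, not the cryptography, is the point; the class and key names are invented for the example.

```python
KEY = 0x5A  # stand-in key; a real decryptor would use a proper cipher, not XOR

def decrypt(byte):
    return byte ^ KEY

class Memory:
    def __init__(self, encrypted_code, data):
        self.code = encrypted_code   # instruction stream stored encrypted
        self.data = data             # data stored in the clear

class FetchUnit:
    """Routes fetches: instruction fetches go through the decryptor, data fetches do not."""
    def __init__(self, memory):
        self.memory = memory

    def fetch(self, addr, is_instruction):
        if is_instruction:
            return decrypt(self.memory.code[addr])   # decrypt on the instruction-fetch signal
        return self.memory.data[addr]                 # pass data through unchanged

plain_code = [0x90, 0x01, 0xC3]
mem = Memory([b ^ KEY for b in plain_code], data=[42, 7])
front_end = FetchUnit(mem)
assert [front_end.fetch(i, True) for i in range(3)] == plain_code
assert front_end.fetch(0, False) == 42
```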

  1. Study of hardware implementations of fast tracking algorithms

    International Nuclear Information System (INIS)

    Song, Z.; Huang, G.; Wang, D.; Lentdecker, G. De; Dong, J.; Léonard, A.; Robert, F.; Yang, Y.

    2017-01-01

    Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern-recognition and track fitting, artificial retina or Hough transformation methods have been introduced in the field which have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation approach of the retina algorithm based on a Floating-Point core. Detailed measurements with this algorithm are investigated. Retina performance and capabilities of the FPGA are discussed along with perspectives for further optimization and applications.
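
    As background to the Hough-transform approach mentioned in this record, the sketch below shows the textbook voting scheme for straight tracks in the (theta, r) parameterisation. It is a plain Python illustration of the general method with made-up hit data and binning, not the FPGA floating-point-core implementation the note studies.

```python
import numpy as np

def hough_lines(hits, n_theta=180, n_r=100, r_max=50.0):
    """Accumulate Hough votes for straight tracks through a set of (x, y) hits."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r), dtype=int)
    for x, y in hits:
        r = x * np.cos(thetas) + y * np.sin(thetas)      # the (theta, r) curve for this hit
        r_bins = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        ok = (r_bins >= 0) & (r_bins < n_r)
        acc[np.arange(n_theta)[ok], r_bins[ok]] += 1      # each hit votes along its curve
    i_theta, i_r = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i_theta], -r_max + 2 * r_max * i_r / (n_r - 1), acc

# Hits lying roughly on the line y = 0.5 * x + 2, plus one noise hit
hits = [(x, 0.5 * x + 2 + np.random.normal(0, 0.05)) for x in range(10)] + [(3.0, 9.0)]
theta, r, _ = hough_lines(hits)
print(f"best track: theta={theta:.2f} rad, r={r:.2f}")
```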

  2. SYNTHESIS OF INFORMATION SYSTEM FOR SMART HOUSE HARDWARE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Vikentyeva Olga Leonidovna

    2017-10-01

    Full Text Available Subject: smart house maintenance requires taking into account a number of factors: resource saving, reduction of operational expenditures, safety enhancement, and the provision of comfortable working and leisure conditions. Automation of the corresponding engineering systems of illumination, climate control and security, as well as of communication systems and networks, via contemporary technologies (e.g., IoT - Internet of Things) poses a significant challenge related to the storage and processing of an overwhelmingly large volume of data, only a small fraction of which is actually used today. Since a building's lifespan is long and exceeds the lifespan of codes and standards that take into account the requirements of safety, comfort, energy saving, etc., it is necessary to consider management aspects in the context of the rational use of large data at the stage of information modeling. Research objectives: increase the efficiency of managing the subsystems of smart building hardware on the basis of a web-based information system that has a flexible multi-level architecture with several control loops and an adaptation model. Materials and methods: since a smart house belongs to man-machine systems, the cybernetic approach is considered as the basic method for the design and research of the information management system. Instrumental research methods are represented by set-theoretical modelling, automata theory and architectural principles of organization of information management systems. Results: a flexible architecture of an information system for managing smart house hardware subsystems has been synthesized. This architecture encompasses several levels: client level, application level and data level, as well as three layers: presentation layer, actuating device layer and analytics layer. The problem of growing volumes of information processed by the real-time message controller is addressed by employing sensors and actuating mechanisms with configurable

  3. Acquisition of reliable vacuum hardware for large accelerator systems

    International Nuclear Information System (INIS)

    Welch, K.M.

    1996-01-01

    Credible and effective communications prove to be the major challenge in the acquisition of reliable vacuum hardware. Technical competence is necessary but not sufficient. We must effectively communicate with management, sponsoring agencies, project organizations, service groups, staff and with vendors. Most of Deming's 14 quality assurance tenets relate to creating an enlightened environment of good communications. All projects progress along six distinct, closely coupled, dynamic phases; all six phases are in a state of perpetual change. These phases and their elements are discussed, with emphasis given to the acquisition phase and its related vocabulary. (author)

  4. Hardware Architectures for the Correspondence Problem in Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Thomas Eide

    Method"has been developed in conjunction with the work on this thesis and has not previously been described. Also, during this project a combined image acquisition and compression board has been developed for a NASA sounding rocket. This circuit, a so-called Lightning Imager, is also described. Finally...... an optimized hardware architecture has been proposed in relation to the three matching methods mentioned above. Because of the cost required to physically implement and test the developed architecture, it has been decided todocument the performance of the architecture through theoretical proofs only....

  5. MRI of neuronal migration disorders

    International Nuclear Information System (INIS)

    Engelbrecht, V.

    1996-01-01

    Twenty-one MRI examinations of the brain were performed in 19 children with neuronal migration disorders. Multiplanar oriented spin-echo sequences were acquired on a 1.5 T scanner. In 8 children we performed an additional turbo-inversion recovery (TIR) sequence. Results of sonography or CT from five children were compared with MRI scans. Using the current nomenclature, we found the following migration disorders: lissencephaly (n=6), cobblestone lissencephaly with Walker-Warburg syndrome (WWS) (n=2), polymicrogyria and schizencephaly (n=2), focal heterotopia (n=5), diffuse heterotopia (n=2) and hemimegalencephaly (n=2). MRI was superior to CT and sonography in all children. Except for the two boys with WWS, the TIR sequence was the best at demonstrating the changes of the migration disorders because of the high contrast between gray and white matter. We demonstrate the characteristic features of the different migration disorders and compare them with the existing literature. (orig.) [de

  6. The migration challenge for PAYG

    Czech Academy of Sciences Publication Activity Database

    Aslanyan, Gurgen

    2014-01-01

    Roč. 27, č. 4 (2014), s. 1023-1038 ISSN 0933-1433 Institutional support: RVO:67985998 Keywords: public pensions * PAYG * unskilled migration Subject RIV: AH - Economics Impact factor: 1.109, year: 2014

  7. The migration challenge for PAYG

    Czech Academy of Sciences Publication Activity Database

    Aslanyan, Gurgen

    2014-01-01

    Roč. 27, č. 4 (2014), s. 1023-1038 ISSN 0933-1433 Institutional support: PRVOUK-P23 Keywords: public pensions * PAYG * unskilled migration Subject RIV: AH - Economics Impact factor: 1.109, year: 2014

  8. Labour market frictions and migration

    NARCIS (Netherlands)

    Cremers, Jan

    2016-01-01

    The 4th contribution to the series INT-AR papers is dedicated to the methods of assessing labour market frictions. The paper provides a (brief) international comparison of the role of labour migration in solving these frictions.

  9. Fluid migration studies in salt

    International Nuclear Information System (INIS)

    Shefelbine, H.C.; Raines, G.E.

    1980-01-01

    This discussion will be limited to the migration of water trapped in the rock salt under the influence of the heat field produced by nuclear waste. This is of concern because hypothetical scenarios have been advanced in which this fluid movement allows radionuclides to escape to the biosphere. While portions of these scenarios are supported by observation, none of the complete scenarios has been demonstrated. The objectives of the present fluid migration studies are two-fold: 1. determine the character of the trapped fluid in terms of quantity, habitat and chemical constituents; and 2. define the mechanisms that cause the fluid to migrate toward heat sources. Based on the observations to date, fluid migration will not have a major impact on repository integrity. However, the above objectives will be pursued until the impacts, if any, can be quantified.

  10. Automated personnel data base system specifications, Task V. Final report

    International Nuclear Information System (INIS)

    Bartley, H.J.; Bocast, A.K.; Deppner, F.O.; Harrison, O.J.; Kraas, I.W.

    1978-11-01

    The full title of this study is 'Development of Qualification Requirements, Training Programs, Career Plans, and Methodologies for Effective Management and Training of Inspection and Enforcement Personnel.' Task V required the development of an automated personnel data base system for NRC/IE. This system is identified as the NRC/IE Personnel, Assignment, Qualifications, and Training System (PAQTS). This Task V report provides the documentation for PAQTS including the Functional Requirements Document (FRD), the Data Requirements Document (DRD), the Hardware and Software Capabilities Assessment, and the Detailed Implementation Schedule. Specific recommendations to facilitate implementation of PAQTS are also included

  11. Real-time multi-task operators support system

    International Nuclear Information System (INIS)

    Wang He; Peng Minjun; Wang Hao; Cheng Shouyu

    2005-01-01

    The development of computer software and hardware technology and information processing, as well as the accumulation of design experience and feedback from Nuclear Power Plant (NPP) operation, created a good opportunity to develop an integrated Operator Support System. The Real-time Multi-task Operator Support System (RMOSS) has been built to support the operator's decision making process during normal and abnormal operations. RMOSS consists of five system subtasks: the Data Collection and Validation Task (DCVT), the Operation Monitoring Task (OMT), the Fault Diagnostic Task (FDT), the Operation Guideline Task (OGT) and the Human Machine Interface Task (HMIT). RMOSS uses a rule-based expert system and an Artificial Neural Network (ANN). The rule-based expert system is used to identify the predefined events in static conditions and track the operation guideline through data processing. In dynamic status, a Back-Propagation Neural Network is adopted for fault diagnosis, which is trained with the Genetic Algorithm. The embedded real-time operating system VxWorks and its integrated environment Tornado II are used as the RMOSS software cross-development environment. VxGUI is used to design the HMI. All of the task programs are written in the C language. The task tests and function evaluation of RMOSS have been done in one real-time full scope simulator. Evaluation results show that each task of RMOSS is capable of accomplishing its functions. (authors)
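
    To give a flavour of how a rule-based layer like the one described for RMOSS can flag predefined events from validated plant data, here is a deliberately tiny Python sketch. The variable names, thresholds and events are invented for illustration and have no connection to the actual RMOSS rule base or to any real plant parameters.

```python
# Each rule: (event name, predicate over a dictionary of validated plant data)
RULES = [
    ("low steam generator level", lambda d: d["sg_level_pct"] < 25.0),
    ("high primary coolant temperature", lambda d: d["coolant_temp_c"] > 330.0),
    ("loss of feedwater flow", lambda d: d["feedwater_flow_kg_s"] < 5.0 and d["power_pct"] > 10.0),
]

def identify_events(plant_data):
    """Return the predefined events whose conditions hold for the current data snapshot."""
    return [name for name, condition in RULES if condition(plant_data)]

snapshot = {"sg_level_pct": 22.0, "coolant_temp_c": 312.0,
            "feedwater_flow_kg_s": 40.0, "power_pct": 98.0}
print(identify_events(snapshot))   # -> ['low steam generator level']
```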

  12. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    Energy Technology Data Exchange (ETDEWEB)

    Dinda, Peter August [Northwestern Univ., Evanston, IL (United States)

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3

  13. Return Migration and Working Choices

    OpenAIRE

    TANI, Massimiliano; MAHUTEAU, Stéphane

    2008-01-01

    Collective Action to Support the Reintegration of Return Migrants in their Country of Origin (MIREM). This paper uses the recent survey carried out in the framework of the MIREM project on returnees to Algeria, Morocco and Tunisia and studies the duration of emigration and the labour force status upon returning. The results suggest that age and the year of emigration play a central role in the migration decision, but they do not support the hypothesis that the duration of migration is deter...

  14. Roosts and migrations of swallows

    OpenAIRE

    Winkler, David W.

    2006-01-01

    Swallows of the north temperate zone display a wide variety of territorial behaviour during the breeding season, but as soon as breeding is over, they all appear to adopt a pattern of independent diurnal foraging interleaved with aggregation every night in dense roosts. Swallows generally migrate during the day, feeding on the wing. On many stretches of their annual journeys, their migrations can thus be seen as the simple spatial translation of nocturnal roost sites with foraging routes stra...

  15. International Student Migration to Germany

    OpenAIRE

    Donata Bessey

    2007-01-01

    This paper presents first empirical evidence on international student migration to Germany. I use a novel approach that analyzes student mobility using an augmented gravity equation and find evidence of strong network effects and of the importance of distance, results familiar from the empirical migration literature. However, disposable income in the home country does not seem to matter much for students, while the fact of being a politically unfree country decreases migrati...
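
    For context, an "augmented gravity equation" for migration flows is typically estimated in a log-linear form along the following lines; this is the generic specification from the gravity literature, not necessarily the exact model estimated in the paper:

        % Generic augmented gravity specification for flows from origin i to destination j
        \ln M_{ij} = \beta_0 + \beta_1 \ln P_i + \beta_2 \ln P_j + \beta_3 \ln D_{ij} + \gamma' Z_{ij} + \varepsilon_{ij}
        % M_{ij}: student (migrant) flow;  P_i, P_j: origin and destination "masses"
        % (population or income);  D_{ij}: distance;  Z_{ij}: augmenting covariates such as
        % network size, disposable income, political freedom, or a common language.

    Network effects enter through Z_{ij}, for example via the stock of compatriots already studying in the destination country.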

  16. Palaearctic-African Bird Migration

    DEFF Research Database (Denmark)

    Iwajomo, Soladoye Babatola

    Bird migration has attracted a lot of interest over past centuries, and the methods used for studying this phenomenon have greatly improved in terms of availability, dimension, scale and precision. In spite of the advancements, relatively more is known about the spring migration of trans-Saharan m... of birds from Europe to Africa and opens up the possibility of studying intra-African migration. I have used long-term, standardized autumn ringing data from southeast Sweden to investigate patterns in biometrics, phenology and population trends as inferred from annual trapping totals. In addition, I... in the population of the species. The papers show that adult and juvenile birds can use different migration strategies depending on time of season and prevailing conditions. Also, the fuel loads of some individuals were theoretically sufficient for a direct flight to an important goal area, but whether they do so...

  17. Investigation on nuclide migration behaviors

    International Nuclear Information System (INIS)

    Baik, Minhoon; Park, Chungkyun; Kim, Seungsoo

    2012-04-01

    In this study, we investigated the properties of geochemical reactions and the sorption of high-level and highly mobile radionuclides in deep geological disposal environments. We also analyzed the dissolution properties of pyro wastes and constructed databases of geochemical reactions and sorption for the safety assessment of HLW disposal. Technologies for measuring the diffusion depths of radionuclides through fracture surfaces and the rock matrix were developed under KURT conditions, and their diffusion properties were analyzed and evaluated. The combined radionuclide/mineral/microbe reactions in deep disposal environments were investigated, and the effects of microbes on radionuclide migration and disposal system behavior were evaluated. An in-situ solute migration system and an on-line monitoring system were installed in KURT, and the migration and retardation behaviors of various solutes and their interaction with fracture-filling materials were investigated. Basic properties of KURT groundwater colloids were analyzed using various methods. In addition, in-situ colloid migration experiments through a rock fracture were carried out and the developed migration model was verified. We have participated in the Colloid Formation and Migration (CFM) international joint project at GTS and improved the reliability of our research results by comparing them with each other
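
    Diffusion depth profiles of the kind measured here are commonly interpreted with a one-dimensional matrix-diffusion model; the form below is the standard textbook description, with sorption folded into an apparent diffusivity, and not necessarily the exact model used in this report:

        % Standard 1-D matrix-diffusion model for a sorbing nuclide penetrating the rock matrix
        \frac{\partial C}{\partial t} = D_a \frac{\partial^2 C}{\partial x^2},
        \qquad D_a = \frac{D_e}{\varepsilon + \rho_b K_d}
        % C(x,t): pore-water concentration at depth x;  D_e: effective diffusion coefficient;
        % D_a: apparent diffusivity;  \varepsilon: matrix porosity;  \rho_b: bulk density;
        % K_d: sorption distribution coefficient.

    Fitting measured concentration-versus-depth profiles to this model yields D_a, from which D_e or K_d can be back-calculated when the other parameters are known.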

  18. A hardware overview of the RHIC LLRF platform

    International Nuclear Information System (INIS)

    Hayes, T.; Smith, K.S.

    2011-01-01

    The RHIC Low Level RF (LLRF) platform is a flexible, modular system designed around a carrier board with six XMC daughter sites. The carrier board features a Xilinx FPGA with an embedded, hard-core PowerPC that is remotely reconfigurable. It serves as a front end computer (FEC) that interfaces with the RHIC control system. The carrier provides high speed serial data paths to each daughter site and between daughter sites, as well as four generic external fiber optic links. It also distributes low noise clocks and serial data links to all daughter sites and monitors temperature, voltage and current. To date, two XMC cards have been designed: a four channel high speed ADC and a four channel high speed DAC. The new LLRF hardware was used to replace the old RHIC LLRF system for the 2009 run. For the 2010 run, RHIC RF system operation changed dramatically with the introduction of a new, common cavity that accelerates both beams, replacing the independent cavities in each ring. The flexibility of the new system was beneficial in allowing the low level system to be adapted to support this new configuration. This hardware was also used in 2009 to provide LLRF for the newly commissioned Electron Beam Ion Source.

  19. Health Maintenance System (HMS) Hardware Research, Design, and Collaboration

    Science.gov (United States)

    Gonzalez, Stefanie M.

    2010-01-01

    The Space Life Sciences Division (SLSD) concentrates on optimizing a crew member's health. Developments are translated into innovative engineering solutions, research growth, and community awareness. This internship incorporates all those areas by targeting various projects. The main project focuses on integrating clinical and biomedical engineering principles to design, develop, and test new medical kits scheduled for launch in the Spring of 2011. Additionally, items will be tagged with Radio Frequency Identification (RFID) devices to keep track of the inventory. The tags will then be tested to optimize the radio frequency feed and feed placement. Research growth will occur with ground-based experiments designed to measure calcium-encrusted deposits in the International Space Station (ISS). The tests will assess urine calcium levels with Portable Clinical Blood Analyzer (PCBA) technology. If effective, a model for urine calcium will be developed and expanded to microgravity environments. To support collaboration amongst the subdivisions of SLSD, the architecture of the Crew Healthcare Systems (CHeCS) SharePoint site has been redesigned for maximum efficiency. Community collaboration has also been established with the University of Southern California, Dept. of Aeronautical Engineering, and the Food and Drug Administration (FDA). Hardware disbursements will transpire within these communities to support planetary surface exploration and to serve as an educational tool demonstrating how ground-based medicine influenced the technological development of space hardware.

  20. A Hardware Fast Tracker for the ATLAS trigger

    International Nuclear Information System (INIS)

    Asbah, N.

    2016-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing rate of 40 MHz to about 1 kHz at the design luminosity of 10³⁴ cm⁻²·s⁻¹. After a successful period of data taking from 2010 to early 2013, the LHC has resumed operation with much higher instantaneous luminosity. This increases the load on the High Level Trigger, the second selection stage, which is based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, for every Level-1 accepted event (100 kHz) and within 100 μs, full tracking information for tracks with momentum as low as 1 GeV. By providing fast, extensive access to tracking information with resolution comparable to the offline reconstruction, FTK will help in the precise detection of primary and secondary vertices, ensuring robust selections and improving trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.
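
    The rates quoted above imply the following rejection factors; this is simple arithmetic derived from the numbers in the abstract, not an additional FTK specification:

        % Rejection factors implied by the quoted trigger rates
        \frac{40\ \text{MHz}}{100\ \text{kHz}} = 400 \quad (\text{Level-1}), \qquad
        \frac{100\ \text{kHz}}{1\ \text{kHz}} = 100 \quad (\text{HLT}), \qquad
        400 \times 100 = 4 \times 10^{4} \quad (\text{overall})

    At the 100 kHz Level-1 accept rate an event arrives on average every 10 μs, so the 100 μs FTK latency implies that on the order of ten events are being processed concurrently in its pipeline.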