WorldWideScience

Sample records for hardware task migration

  1. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints, together with their optimal allocation onto the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Through these stages, dynamic partial reconfiguration and mixed-integer programming yield pipelined scheduling and efficient placement, enabling parallel execution of the task graph on reconfigurable devices while optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement in resource utilization of 12.45% of the available reconfigurable resources, corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Owing to pipelined scheduling, the overall span of the task graph is reduced by 4% compared to sequential execution of the graph.
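    The scheduling problem described above can be illustrated with a much simpler heuristic than the paper's mixed-integer formulation: earliest-deadline-first list scheduling of precedence-constrained tasks onto a fixed number of reconfigurable regions. The task set, durations, and two-region device below are illustrative examples, not taken from the paper.

```python
# Sketch: EDF list scheduling of precedence-constrained periodic tasks
# onto a fixed number of reconfigurable regions (a stand-in for the
# paper's MIP-based placement/scheduling; names and numbers are invented).

def schedule(tasks, deps, regions):
    """tasks: {name: (duration, deadline)}; deps: {name: set of predecessors}.
    Returns {name: finish_time}; raises if any deadline is missed."""
    free_at = [0] * regions            # time at which each region becomes free
    finish = {}                        # task -> finish time
    pending = dict(tasks)
    while pending:
        # tasks whose predecessors have all finished, earliest deadline first
        ready = [t for t in pending if deps.get(t, set()).issubset(finish)]
        t = min(ready, key=lambda t: pending[t][1])
        dur, deadline = pending.pop(t)
        r = min(range(regions), key=lambda r: free_at[r])
        start = max(free_at[r],
                    max((finish[p] for p in deps.get(t, set())), default=0))
        finish[t] = start + dur
        free_at[r] = finish[t]
        if finish[t] > deadline:
            raise ValueError(f"task {t} misses its deadline")
    return finish
```

A three-task graph where C depends on A and B, on a two-region device, schedules A and B in parallel and C afterwards.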

  2. Hardware processors for pattern recognition tasks in experiments with wire chambers

    International Nuclear Information System (INIS)

    Verkerk, C.

    1975-01-01

    Hardware processors for pattern recognition tasks in experiments with multiwire proportional chambers or drift chambers are described. They vary from simple ones used for deciding in real time if particle trajectories are straight to complex ones for recognition of curved tracks. Schematics and block diagrams of different processors are shown.

  3. Assessing Task Migration Impact on Embedded Soft Real-Time Streaming Multimedia Applications

    Directory of Open Access Journals (Sweden)

    Alimonda Andrea

    2008-01-01

    Multiprocessor systems-on-chip (MPSoCs) are envisioned as the future of embedded platforms such as game engines, smart phones, and palmtop computers. One of the main challenges preventing the widespread diffusion of these systems is the efficient mapping of multitask multimedia applications onto processing elements. Dynamic solutions based on task migration have recently been explored to perform run-time reallocation of tasks to maximize performance and optimize energy consumption. Although task migration can provide high flexibility, its overhead must be carefully evaluated when applied to soft real-time applications, because these applications impose deadlines that may be missed during the migration process. In this paper we first present a middleware infrastructure supporting dynamic task allocation for NUMA architectures. We then perform an extensive characterization of its impact on soft real-time multimedia applications using a software FM Radio benchmark.
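    The trade-off described above, between the flexibility of migration and the risk of deadline misses, can be sketched as a simple admission check: migrate a soft real-time task only if the measured migration cost fits within its slack. The check and the numbers below are illustrative, not taken from the paper's middleware.

```python
# Sketch of a migration admission check for a soft real-time task whose
# deadline equals its period. All figures are hypothetical.

def can_migrate(period_ms, wcet_ms, migration_ms):
    """True if the migration cost fits within the task's slack
    (period minus worst-case execution time), so no deadline is missed."""
    slack = period_ms - wcet_ms
    return migration_ms <= slack

# e.g. a radio pipeline stage with a 40 ms period and 25 ms WCET tolerates
# a migration costing up to 15 ms without missing its deadline
```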

  4. Assessing Task Migration Impact on Embedded Soft Real-Time Streaming Multimedia Applications

    Directory of Open Access Journals (Sweden)

    Andrea Acquaviva

    2008-01-01

    Multiprocessor systems-on-chip (MPSoCs) are envisioned as the future of embedded platforms such as game engines, smart phones, and palmtop computers. One of the main challenges preventing the widespread diffusion of these systems is the efficient mapping of multitask multimedia applications onto processing elements. Dynamic solutions based on task migration have recently been explored to perform run-time reallocation of tasks to maximize performance and optimize energy consumption. Although task migration can provide high flexibility, its overhead must be carefully evaluated when applied to soft real-time applications, because these applications impose deadlines that may be missed during the migration process. In this paper we first present a middleware infrastructure supporting dynamic task allocation for NUMA architectures. We then perform an extensive characterization of its impact on soft real-time multimedia applications using a software FM Radio benchmark.

  5. The Nicest way to migrate your Windows computer ( The Windows 2000 Migration Task Force)

    CERN Document Server

    2001-01-01

    With Windows 2000, CERN users will discover a more stable and reliable working environment and will have access to all the latest applications. The Windows 2000 Migration Task Force comprises a representative from each division.

  6. A Hardware-Supported Algorithm for Self-Managed and Choreographed Task Execution in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Borja Bordel

    2018-03-01

    Nowadays, sensor networks are composed of a great number of tiny resource-constrained nodes, whose management is increasingly complex. Although collaborative or choreographic task-execution schemes fit the nature of sensor networks best, they are rarely implemented because of their high resource consumption (especially in networks that include many resource-constrained devices). Instead, hierarchical networks are usually designed, with a heavyweight orchestrator of considerable processing power at the top, able to implement any necessary management solution. Although this orchestration approach solves most practical management problems of sensor networks, a great amount of operation time is wasted while nodes ask the orchestrator to resolve a conflict and wait for the instructions they need in order to operate. This paper therefore proposes a new mechanism for self-managed and choreographed task execution in sensor networks. The proposed solution requires only a lightweight gateway instead of a traditional heavy orchestrator, together with a hardware-supported algorithm that consumes a negligible amount of resources on the sensor nodes. The gateway avoids congestion of the entire sensor network, and the hardware-supported algorithm enables a choreographed task-execution scheme, so no particular node is overloaded. The performance of the proposed solution is evaluated through numerical and electronic ModelSim-based simulations.

  7. A Hybrid Hardware and Software Component Architecture for Embedded System Design

    Science.gov (United States)

    Marcondes, Hugo; Fröhlich, Antônio Augusto

    Embedded systems are increasing in complexity, while several metrics such as time-to-market, reliability, safety, and performance must be considered during their design. A component-based design that enables the migration of components between hardware and software can help meet these metrics. To enable this, we define hybrid hardware and software components as development artifacts that can be deployed by different combinations of hardware and software elements. In this paper, we present an architecture for developing such components in order to build a repository of components that can migrate between the hardware and software domains to meet the system's design requirements.

  8. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

    Hardware security has become a hot topic recently, with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often conflated with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined this area better understand its challenges and tasks, and to help both academia and industry investigate countermeasures and solutions to hardware security problems, this survey paper introduces the key concepts of hardware security as well as its relations to related research topics. Emerging hardware security topics are also clearly depicted and future trends elaborated, making this survey a good reference for continuing research efforts in this area.

  9. Migration of trochanteric cerclage cable debris to the knee joint

    Directory of Open Access Journals (Sweden)

    Kathleen M. Kollitz, BS

    2014-01-01

    Migrating orthopedic hardware has been widely reported in the literature. Most reported cases involve smooth Kirschner wires or the loosening or fracture of hardware used for joint stabilization or fixation; it is unusual for hardware to migrate within the soft tissues. In some cases, smooth Kirschner wires have migrated within the thoracic cage, with negative intrathoracic pressure as a proposed mechanism for this phenomenon. While wires have also been reported to gain access to the circulation, transporting them over larger distances, the majority of broken or retained wires remain local. We report a case of a 34-year-old man in whom numerous fragments of braided cable migrated from the hip to the knee.

  10. Hardware And Software Architectures For Reconfigurable Time-Critical Control Tasks

    Directory of Open Access Journals (Sweden)

    Adam Piłat

    2007-01-01

    The most popular configuration of controlled laboratory test-rigs is the personal computer (PC) equipped with an I/O board. The dedicated software components allow a wide range of user-defined tasks to be conducted. The typical configuration's functionality can be customized by PC hardware components and their programmable reconfiguration. The next step in automatic control system design is the embedded solution. Usually, the design process of an embedded control system is supported by high-level software. The dedicated programming tools support the multitasking capability of the microcontroller through selection of different sampling frequencies for algorithm blocks. In this case, a multi-layer and multitasking control strategy can be realized on the chip. The proposed solutions implement a rapid-prototyping approach. The available toolkits and device drivers integrate the system-level design environment and the real-time application software, transferring the functionality of MATLAB/Simulink programs to the application environment of PCs or microcontrollers.

  11. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path…
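    The hardware-object idea can be sketched outside Java as well: device registers are exposed as typed fields of an object rather than raw peek/poke addresses. The sketch below simulates a memory-mapped UART with a bytearray standing in for device memory; the class names and register offsets are invented for illustration.

```python
# Sketch of the "hardware object" pattern: typed accessors over a
# memory-mapped register file. A bytearray stands in for device memory.
import struct

class HardwareObject:
    def __init__(self, mem, base):
        self._mem, self._base = mem, base
    def _reg(self, off):                       # read a 32-bit register
        return struct.unpack_from("<I", self._mem, self._base + off)[0]
    def _set(self, off, val):                  # write a 32-bit register
        struct.pack_into("<I", self._mem, self._base + off, val)

class Uart(HardwareObject):
    DATA, STATUS = 0x0, 0x4                    # illustrative offsets
    @property
    def tx_ready(self):
        return bool(self._reg(self.STATUS) & 0x1)
    def write(self, byte):
        self._set(self.DATA, byte)

mem = bytearray(64)                            # stands in for device memory
uart = Uart(mem, base=0x10)
uart.write(0x41)                               # lands at offset 0x10
```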

  12. Software-Controlled Dynamically Swappable Hardware Design in Partially Reconfigurable Systems

    Directory of Open Access Journals (Sweden)

    Huang Chun-Hsian

    2008-01-01

    We propose two basic wrapper designs and an enhanced wrapper design for arbitrary digital hardware circuit designs, such that they can be enhanced with the capability for dynamic swapping controlled by software. A hardware design with either of the proposed wrappers can thus be swapped out of the partially reconfigurable logic at runtime in some intermediate state of computation and then swapped in to continue from that state when required. The context data is saved to a buffer in the wrapper at interruptible states; the wrapper then takes care of saving the hardware context to communication memory through a peripheral bus, and later restoring it after the design is swapped in. The overheads of the hardware standardization and the wrapper, in terms of additional reconfigurable logic resources and context-switching time, are small and generally acceptable. With the capability for dynamic swapping, high-priority hardware tasks can interrupt low-priority tasks in real-time embedded systems, so the utilization of hardware space per unit time is increased.
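    The wrapper protocol described above can be sketched in software: a core computes until it reaches an interruptible state, the wrapper snapshots its context buffer to communication memory, and restores it when the task is swapped back in. The context layout and the dict standing in for communication memory are illustrative.

```python
# Sketch of the swap-out/swap-in protocol for a wrapped hardware task.
# A plain dict plays the role of communication memory; the context
# fields are invented for illustration.

class SwappableWrapper:
    def __init__(self, comm_memory, task_id):
        self.comm, self.task_id = comm_memory, task_id
        self.context = {"state": 0, "acc": 0}   # hardware context buffer

    def step(self, x):
        """One unit of computation; ends in an interruptible state."""
        self.context["acc"] += x
        self.context["state"] += 1

    def swap_out(self):
        """Save the context to communication memory (interruptible state only)."""
        self.comm[self.task_id] = dict(self.context)

    def swap_in(self):
        """Restore the context after the design is placed back on the fabric."""
        self.context = dict(self.comm[self.task_id])
```

A low-priority task can thus be preempted so a high-priority task uses the region, then resume exactly where it left off.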

  13. SL(C) 5 migration at CERN

    International Nuclear Information System (INIS)

    Schwickerath, Ulrich; Silva, Ricardo

    2010-01-01

    Most LCG sites are currently running on SL(C)4. However, this operating system is already rather old, and it is becoming difficult to obtain the required hardware drivers and to get the best out of recent hardware. A possible way out is migration to SL(C)5-based systems where possible, in combination with virtualization methods. The former is typically possible for nodes where the software to run the services is available and tested, while the latter offers a way to make use of the new hardware platforms whilst maintaining operating-system compatibility. Since autumn 2008, CERN has offered public interactive and batch worker nodes to the experiments for evaluation. For the Grid environment, access is granted by dedicated CEs. The status of the evaluation, feedback received from the experiments, and the status of the migration will be reviewed, and the status of virtualization of services at CERN will be reported. Beyond this, the migration to a new operating system also offers an excellent opportunity to upgrade the fabric infrastructure used to manage the servers.

  14. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Image processing is an effective tool for the analysis of optical sensor information in driver assistance systems and the control of autonomous robots. Algorithms for image processing are often very complex and computationally costly. In robotics and driver assistance systems, real-time processing is necessary, and signal processing algorithms must often be drastically modified so they can be implemented in hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth-map calculation. Object recognition and tracking are then executed in real time in a processor with the help of software, using a statistical clustering method. Stabilization of the tracking is achieved with a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-D image analysis, real-time, clustering and tracking.
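    The image-pyramid idea above can be illustrated in a few lines: halving the image repeatedly means a wide disparity search at full resolution becomes a narrow refinement around the match found at the coarser level. This pure-Python 2x2 mean downsampling is only a sketch of the pyramid construction, not the paper's FPGA architecture.

```python
# Sketch: build an image pyramid by repeated 2x2 mean downsampling.
# Images are plain lists of rows of gray values.

def downsample(img):
    """Return a half-size image where each pixel is the mean of a 2x2 block."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def pyramid(img, levels):
    """Level 0 is the original image; each further level is half the size."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out
```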

  15. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed, and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  16. A Framework for Hardware-Accelerated Services Using Partially Reconfigurable SoCs

    Directory of Open Access Journals (Sweden)

    MACHIDON, O. M.

    2016-05-01

    The current trend towards "Everything as a Service" fosters a new approach to reconfigurable hardware resources. This innovative, service-oriented approach has the potential to bring a series of benefits to both the reconfigurable and distributed computing fields by favoring hardware-based acceleration of web services and increasing service performance. This paper proposes a framework for accelerating web services by offloading the compute-intensive tasks to reconfigurable System-on-Chip (SoC) devices as integrated IP (Intellectual Property) cores. The framework provides scalable, dynamic management of the tasks and hardware processing cores, based on dynamic partial reconfiguration of the SoC. We have enhanced the security of the entire system by making use of the built-in detection features of the hardware device and also by implementing active countermeasures that protect the sensitive data.

  17. Outline of a Hardware Reconfiguration Framework for Modular Industrial Mobile Manipulators

    DEFF Research Database (Denmark)

    Schou, Casper; Bøgh, Simon; Madsen, Ole

    2014-01-01

    This paper presents concepts and ideas of a hardware reconfiguration framework for modular industrial mobile manipulators. Mobile manipulators pose a highly flexible production resource due to their ability to autonomously navigate between workstations. However, due to this high flexibility, new approaches to the operation of the robots are needed. Reconfiguring the robot for a new task should be carried out by shop floor operators and, thus, be both quick and intuitive. Recent research has already proposed a method for intuitive robot programming; however, this relies on a predetermined hardware configuration. Finding a single multi-purpose hardware configuration suited to all tasks is considered unrealistic. As a result, the need for reconfiguration of the hardware is inevitable. In this paper an outline of a framework for making hardware reconfiguration quick and intuitive is presented. Two main…

  18. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce memory-based approaches to emulating machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories, and memory networks are some of the contemporary approaches in this area of research. These techniques attempt to emulate low-level intelligence tasks and aim at providing scalable solutions to high ...

  19. Automation of Flexible Migration Workflows

    Directory of Open Access Journals (Sweden)

    Dirk von Suchodoletz

    2011-03-01

    Many digital preservation scenarios are based on the migration strategy, which itself is heavily tool-dependent. For popular, well-defined, and often open file formats – e.g., digital images such as PNG, GIF, and JPEG – a wide range of tools exists. Migration workflows become more difficult with proprietary formats, such as those used by the many text-processing applications that became available over the last two decades. If a certain file format cannot be rendered with current software, emulation of the original environment remains a valid option. For instance, within the original Lotus AmiPro or WordPerfect, it is not a problem to save an object of this type in ASCII text or Rich Text Format. In specific environments, it is even possible to send the file to a virtual printer, thereby producing a PDF as the migration output. Such manual migration tasks typically involve human interaction, which may be feasible for a small number of objects, but not for larger batches of files. We propose a novel approach using a software-operated VNC abstraction layer in order to replace human interaction with machine interaction. Emulators or virtualization tools equipped with a VNC interface are very well suited to this approach, but screen, keyboard, and mouse interaction is just part of the setup: digital objects also need to be transferred into the original environment and extracted after processing. Nevertheless, the complexity of this new generation of migration services is rising quickly; a preservation workflow now comprises not only the migration tool itself, but a complete software and virtual-hardware stack with recorded workflows linked to every supported migration scenario. Thus the requirements of OAIS management must include proper software archiving, emulator selection, and system image and recording handling. The concept of view-paths could help either to automatically determine the proper pre-configured virtual environment or to set up system…
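    The batch idea above, replacing per-file human interaction with a scripted loop of inject, replay, and extract steps, can be sketched abstractly. The three callables below are placeholders for the VNC/emulator plumbing the paper describes, not a real VNC API.

```python
# Sketch of an automated migration batch: for each digital object,
# transfer it into the emulated environment, replay the recorded
# keyboard/mouse interaction, and retrieve the converted result.
# inject/replay/extract are hypothetical hooks supplied by the caller.

def migrate_batch(files, inject, replay, extract):
    results = {}
    for f in files:
        inject(f)                 # transfer the object into the environment
        replay(f)                 # play back the recorded interaction
        results[f] = extract(f)   # pull the migrated object back out
    return results
```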

  20. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  1. Lessons learned from hardware and software upgrades of IT-DB services

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk gives an overview of recent changes in CERN database infrastructure. The presentation describes database service evolution, in particular new hardware & storage installation, integration with Agile infrastructure, complexity of validation strategy and finally the migration and upgrade process concerning the most critical database services.

  2. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  3. IDEAS and App Development Internship in Hardware and Software Design

    Science.gov (United States)

    Alrayes, Rabab D.

    2016-01-01

    In this report, I will discuss the tasks and projects I have completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI), which will attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; and to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused on PCB design and camera integration. My goals and objectives for this project were to successfully integrate fully functioning custom hardware extenders on the wearable technology headset, minimizing the size of hardware on the headset for maximum user comfort, and to successfully integrate a fully functioning camera onto the headset. By the end of this semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully functioning camera into the system.

  4. Chip-Multiprocessor Hardware Locks for Safety-Critical Java

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Puffitsch, Wolfgang; Schoeberl, Martin

    2013-01-01

    …and may void a task set's schedulability. In this paper we present a hardware locking mechanism to reduce the synchronization overhead. The solution is implemented for the chip-multiprocessor version of the Java Optimized Processor in the context of safety-critical Java. The implementation is compared…

  5. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware
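    One of the practices named above, combining the C preprocessor with JIT compilation via the OpenCL runtime, amounts to assembling per-device build options over a single kernel source. The kernel, device-info keys, and tile sizes below are illustrative, not from the paper.

```python
# Sketch: one common kernel source, specialized per device at JIT time
# with preprocessor defines passed as build options (as one would pass
# to clBuildProgram). Device characteristics here are hypothetical.

KERNEL_SRC = """
__kernel void scale(__global float *a, float k) {
    int i = get_global_id(0);
#ifdef USE_VEC4
    /* explicit vector path for hardware that prefers vec4 operations
       (vload4/vstore4 body omitted in this sketch) */
#else
    a[i] *= k;
#endif
}
"""

def build_options(device_info):
    """Compose -D defines from a dict describing the target device."""
    opts = [f"-DTILE={device_info.get('tile', 16)}"]
    if device_info.get("prefers_vec4"):
        opts.append("-DUSE_VEC4")
    return " ".join(opts)
```

The same source then compiles into different device code depending on which defines the host passes.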

  6. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is, that this transitioning should be done by the shop floor operators, not highly specialized...

  7. Hardware task-status manager for an RTOS with FIFO communication

    NARCIS (Netherlands)

    Zaykov, P.G.; Kuzmanov, G.; Molnos, A.M.; Goossens, K.G.W.

    2014-01-01

    In this paper, we address the problem of improving the performance of real-time embedded Multiprocessor System-on-Chip (MPSoC). Such MPSoCs often execute data-flow applications composed of multiple tasks, which communicate through First-In-First-Out (FIFO) queues. The tasks on each processor in the

  8. Hardware interface unit for control of shuttle RMS vibrations

    Science.gov (United States)

    Lindsay, Thomas S.; Hansen, Joseph M.; Manouchehri, Davoud; Forouhar, Kamran

    1994-01-01

    Vibration of the Shuttle Remote Manipulator System (RMS) increases the time for task completion and reduces task safety for manipulator-assisted operations. If the dynamics of the manipulator and the payload can be physically isolated, performance should improve. Rockwell has developed a self contained hardware unit which interfaces between a manipulator arm and payload. The End Point Control Unit (EPCU) is built and is being tested at Rockwell and at the Langley/Marshall Coupled, Multibody Spacecraft Control Research Facility in NASA's Marshall Space Flight Center in Huntsville, Alabama.

  9. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.
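    The flavor of allocation described above can be sketched with a much simpler heuristic than the LYCOS algorithm itself: greedily give hardware units to the operation types that profiling says dominate, within an area budget. The operation names, areas, and budget are invented for illustration.

```python
# Sketch: profile-guided greedy allocation of hardware units under an
# area budget (a stand-in for the paper's allocation algorithm).

def allocate(profile, area, budget):
    """profile: {op: execution count}; area: {op: unit area}.
    Returns {op: number of units allocated}."""
    alloc = {op: 0 for op in profile}
    # give one unit to each op type, most-executed first, while area allows
    for op in sorted(profile, key=profile.get, reverse=True):
        if area[op] <= budget:
            alloc[op] += 1
            budget -= area[op]
    return alloc
```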

  10. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Science.gov (United States)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2017-10-01

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  11. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Directory of Open Access Journals (Sweden)

    Marcello Benedetti

    2017-11-01

    Full Text Available Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  12. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, applying the majority of these techniques is complex and requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. It also introduces successful applications of soft computing techniques to the many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  13. Space station common module power system network topology and hardware development

    Science.gov (United States)

    Landis, D. M.

    1985-01-01

    Candidate power system network topologies for the space station common module are defined and developed and the necessary hardware for test and evaluation is provided. Martin Marietta's approach to performing the proposed program is presented. Performance of the tasks described will assure systematic development and evaluation of program results, and will provide the necessary management tools, visibility, and control techniques for performance assessment. The plan is submitted in accordance with the data requirements given and includes a comprehensive task logic flow diagram, time-phased manpower requirements, a program milestone schedule, and detailed descriptions of each program task.

  14. Dedicated memory structure holding data for detecting available worker thread(s) and informing available worker thread(s) of task(s) to execute

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, George L.; Eichenberger, Alexandre E.; O' Brien, John K. P.

    2016-12-13

    The present disclosure relates generally to a dedicated memory structure (that is, hardware device) holding data for detecting available worker thread(s) and informing available worker thread(s) of task(s) to execute.

  15. Trainable hardware for dynamical computing using error backpropagation through physical media.

    Science.gov (United States)

    Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter

    2015-03-24

    Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, error backpropagation, a crucial step for tuning such systems towards a specific task, can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.

  16. Web tools to monitor and debug DAQ hardware

    International Nuclear Information System (INIS)

    Desavouret, Eugene; Nogiec, Jerzy M.

    2003-01-01

    A web-based toolkit to monitor and diagnose data acquisition hardware has been developed. It allows for remote testing, monitoring, and control of VxWorks data acquisition computers and associated instrumentation using the HTTP protocol and a web browser. This solution provides concurrent and platform independent access, supplementary to the standard single-user rlogin mechanism. The toolkit is based on a specialized web server, and allows remote access and execution of select system commands and tasks, execution of test procedures, and provides remote monitoring of computer system resources and connected hardware. Various DAQ components such as multiplexers, digital I/O boards, analog to digital converters, or current sources can be accessed and diagnosed remotely in a uniform and well-organized manner. Additionally, the toolkit application supports user authentication and is able to enforce specified access restrictions

  17. Effort Estimation in BPMS Migration

    Directory of Open Access Journals (Sweden)

    Christopher Drews

    2018-04-01

    Full Text Available Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation regarding the technical aspects of BPMS migration. The framework provides questions for BPMS comparison and an effort evaluation schema. The applicability of the framework is evaluated based on a simplified BPMS migration scenario.

  18. FY1995 evolvable hardware chip; 1995 nendo shinkasuru hardware chip

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This project aims at the development of 'Evolvable Hardware' (EHW) which can adapt its hardware structure to the environment to attain better hardware performance, under the control of genetic algorithms. EHW is a key technology to explore the new application area requiring real-time performance and on-line adaptation. 1. Development of EHW-LSI for function level hardware evolution, which includes 15 DSPs in one chip. 2. Application of the EHW to the practical industrial applications such as data compression, ATM control, digital mobile communication. 3. Two patents : (1) the architecture and the processing method for programmable EHW-LSI. (2) The method of data compression for loss-less data, using EHW. 4. The first international conference for evolvable hardware was held by authors: Intl. Conf. on Evolvable Systems (ICES96). It was determined at ICES96 that ICES will be held every two years between Japan and Europe. So the new society has been established by us. (NEDO)

  20. Interface Testing for RTOS System Tasks based on the Run-Time Monitoring

    International Nuclear Information System (INIS)

    Sung, Ahyoung; Choi, Byoungju

    2006-01-01

    Safety critical embedded system requires high dependability of not only hardware but also software. It is intricate to modify embedded software once embedded. Therefore, it is necessary to have rigorous regulations to assure the quality of safety critical embedded software. IEEE V and V (Verification and Validation) process is recommended for software dependability, but a more quantitative evaluation method like software testing is necessary. In case of safety critical embedded software, it is essential to have a test that reflects unique features of the target hardware and its operating system. The safety grade PLC (Programmable Logic Controller) is a safety critical embedded system where hardware and software are tightly coupled. The PLC has HdS (Hardware dependent Software) and it is tightly coupled with RTOS (Real Time Operating System). Especially, system tasks that are tightly coupled with target hardware and RTOS kernel have large influence on the dependability of the entire PLC. Therefore, interface testing for system tasks that reflects the features of target hardware and RTOS kernel becomes the core of the PLC integration test. Here, we define interfaces as overlapped parts between two different layers on the system architecture. In this paper, we identify interfaces for system tasks and apply the identified interfaces to the safety grade PLC. Finally, we show the test results through the empirical study

  1. Interface Testing for RTOS System Tasks based on the Run-Time Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Ahyoung; Choi, Byoungju [Ewha University, Seoul (Korea, Republic of)

    2006-07-01

    Safety critical embedded system requires high dependability of not only hardware but also software. It is intricate to modify embedded software once embedded. Therefore, it is necessary to have rigorous regulations to assure the quality of safety critical embedded software. IEEE V and V (Verification and Validation) process is recommended for software dependability, but a more quantitative evaluation method like software testing is necessary. In case of safety critical embedded software, it is essential to have a test that reflects unique features of the target hardware and its operating system. The safety grade PLC (Programmable Logic Controller) is a safety critical embedded system where hardware and software are tightly coupled. The PLC has HdS (Hardware dependent Software) and it is tightly coupled with RTOS (Real Time Operating System). Especially, system tasks that are tightly coupled with target hardware and RTOS kernel have large influence on the dependability of the entire PLC. Therefore, interface testing for system tasks that reflects the features of target hardware and RTOS kernel becomes the core of the PLC integration test. Here, we define interfaces as overlapped parts between two different layers on the system architecture. In this paper, we identify interfaces for system tasks and apply the identified interfaces to the safety grade PLC. Finally, we show the test results through the empirical study.

  2. Migration to Alma/Primo: A Case Study of Central Washington University

    Directory of Open Access Journals (Sweden)

    Ping Fu

    2015-12-01

    Full Text Available This paper describes how Central Washington University Libraries (CWUL interacted and collaborated with the Orbis Cascade Alliance (OCA Shared Integrated Library System's (SILS Implementation Team and Ex Libris to process systems and data migration from Innovative Interfaces Inc.'s Millennium integrated library system to Alma/Primo, Ex Libris' next-generation library management solution and discovery and delivery solution. A chronological review method was used for this case study to provide an overall picture of key migration events, tasks, and implementation efforts, including pre-migration cleanup, migration forms, integration with external systems, testing, cutover, post-migration cleanup, and reporting and fixing outstanding issues. A three-phase migration model was studied, and a questionnaire was designed to collect data from functional leads to determine staff time spent on the migration tasks. Staff time spent on each phase was analyzed and quantified, and some essential elements for the success of the migration were identified through the case review and data analysis. An analysis of the Ex Libris Salesforce cases created during and after the migration was conducted to identify the roles of key librarians and staff functional leads during the migration.

  3. Kokkos' Task DAG Capabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Harold C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ibanez, Daniel Alejandro [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    This report documents the ASC/ATDM Kokkos deliverable "Production Portable Dynamic Task DAG Capability." This capability enables applications to create and execute a dynamic task DAG: a collection of heterogeneous computational tasks with a directed acyclic graph (DAG) of "execute after" dependencies, where tasks and their dependencies are dynamically created and destroyed as tasks execute. The Kokkos task scheduler executes the dynamic task DAG on the target execution resource, e.g. a multicore CPU, a manycore CPU such as Intel's Knights Landing (KNL), or an NVIDIA GPU. Several major technical challenges had to be addressed during development of Kokkos' Task DAG capability: (1) portability to a GPU with its simplified hardware and micro-runtime, (2) thread-scalable memory allocation and deallocation from a bounded pool of memory, (3) a thread-scalable scheduler for the dynamic task DAG, and (4) usability by applications.
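The "execute after" dependency model can be pictured with a minimal topological-order executor (a generic Python sketch of the concept, not the actual Kokkos C++ scheduler API):

```python
from collections import deque

def run_task_dag(tasks, deps):
    """Execute tasks respecting 'execute after' dependencies.

    tasks: dict mapping task name -> zero-argument callable
    deps:  dict mapping task name -> set of tasks it must run after
    """
    # Count unfinished predecessors per task (Kahn's algorithm).
    pending = {t: set(deps.get(t, ())) for t in tasks}
    dependents = {t: [] for t in tasks}
    for t, preds in pending.items():
        for p in preds:
            dependents[p].append(t)

    ready = deque(t for t, preds in pending.items() if not preds)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                      # run the task itself
        order.append(t)
        for d in dependents[t]:         # release tasks waiting on t
            pending[d].discard(t)
            if not pending[d]:
                ready.append(d)
    if len(order) != len(tasks):
        raise ValueError("cycle detected: not a DAG")
    return order

results = []
order = run_task_dag(
    {"a": lambda: results.append("a"),
     "b": lambda: results.append("b"),
     "c": lambda: results.append("c")},
    {"c": {"a", "b"}},                  # c executes after a and b
)
```

In a real dynamic task DAG, tasks could themselves create new tasks and dependencies while executing; this sketch only shows the static "execute after" ordering.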

  4. Biomedical applications engineering tasks

    Science.gov (United States)

    Laenger, C. J., Sr.

    1976-01-01

    The engineering tasks performed in response to needs articulated by clinicians are described. Initial contacts were made with these clinician-technology requestors by the Southwest Research Institute NASA Biomedical Applications Team. The basic purpose of the program was to effectively transfer aerospace technology into functional hardware to solve real biomedical problems.

  5. Hardware description languages

    Science.gov (United States)

    Tucker, Jerry H.

    1994-01-01

    Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASICs). However, VHDL is rapidly gaining in popularity.

  6. Task Migration for Fault-Tolerance in Mixed-Criticality Embedded Systems

    DEFF Research Database (Denmark)

    Saraswat, Prabhat Kumar; Pop, Paul; Madsen, Jan

    2009-01-01

    In this paper we are interested in mixed-criticality embedded applications implemented on distributed architectures. Depending on their time-criticality, tasks can be hard or soft real-time and, regarding safety-criticality, tasks can be fault-tolerant to transient faults, permanent faults, or have no dependability requirements. We use Earliest Deadline First (EDF) scheduling for the hard tasks and the Constant Bandwidth Server (CBS) for the soft tasks. The CBS parameters determine the quality of service (QoS) of soft tasks. Transient faults are tolerated using checkpointing with roll-back recovery. Tasks are migrated between processors such that the faults are tolerated, the deadlines for the hard real-time tasks are satisfied and the QoS for soft tasks is maximized. The proposed online adaptive approach has been evaluated using several synthetic benchmarks and a real-life case study.
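The EDF policy mentioned here always dispatches the ready job whose absolute deadline is nearest. A simplified single-processor, non-preemptive sketch of that rule (the paper's actual scheduler is preemptive and distributed, and the job set below is invented):

```python
import heapq

def edf_schedule(jobs):
    """Simulate non-preemptive EDF on one processor.

    jobs: list of (release_time, absolute_deadline, exec_time, name)
    Returns the completion order and whether every deadline was met.
    """
    jobs = sorted(jobs)                        # by release time
    ready, order = [], []
    t, i, all_met = 0, 0, True
    while i < len(jobs) or ready:
        # Admit every job released by the current time t.
        while i < len(jobs) and jobs[i][0] <= t:
            r, d, c, name = jobs[i]
            heapq.heappush(ready, (d, c, name))  # heap keyed by deadline
            i += 1
        if not ready:                          # idle until next release
            t = jobs[i][0]
            continue
        d, c, name = heapq.heappop(ready)      # earliest deadline first
        t += c                                 # run the job to completion
        order.append(name)
        all_met = all_met and t <= d
    return order, all_met

order, ok = edf_schedule([(0, 10, 2, "T1"), (0, 4, 1, "T2"), (1, 7, 2, "T3")])
```

Here T2 runs first despite T1 being released at the same time, because its deadline (4) is the earliest.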

  7. Effort Estimation in BPMS Migration

    OpenAIRE

    Drews, Christopher; Lantow, Birger

    2018-01-01

    Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation re...

  8. Facilitating preemptive hardware system design using partial reconfiguration techniques.

    Science.gov (United States)

    Dondo Gazzano, Julio; Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implement preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Besides, an asynchronous event can demand immediate attention and, then, force launching a reconfiguration process for high-priority task implementation. If the asynchronous event is previously scheduled, an explicit activation of the reconfiguration process is performed. If the event cannot be previously programmed, such as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the necessary tasks to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and then the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration.

  9. SiMA: A simplified migration assay for analyzing neutrophil migration.

    Science.gov (United States)

    Weckmann, Markus; Becker, Tim; Nissen, Gyde; Pech, Martin; Kopp, Matthias V

    2017-07-01

    In lung inflammation, neutrophils are the first leukocytes migrating to an inflammatory site, eliminating pathogens by multiple mechanisms. The term "migration" describes several stages of neutrophil movement to reach the site of inflammation, of which the passage of the interstitium and basal membrane of the airway are necessary to reach the site of bronchial inflammation. Currently, several methods exist (e.g., Boyden chamber, under-agarose assay, or microfluidic systems) to assess neutrophil mobility. However, these methods do not allow for parameterization at the single-cell level; that is, individual neutrophil pathway analysis is still considered challenging. This study sought to develop a simplified yet flexible method to monitor and quantify neutrophil chemotaxis by utilizing commercially available tissue culture hardware, simple video microscopic equipment and highly standardized tracking. A chemotaxis 3D µ-slide (IBIDI) was used with different chemoattractants [interleukin-8 (IL-8), fMLP, and Leukotriene B4 (LTB4)] to attract neutrophils in different matrices like Fibronectin (FN) or human placental matrix. Migration was recorded for 60 min using phase contrast microscopy with an EVOS® FL Cell Imaging System. The images were normalized and texture-based image segmentation was used to generate neutrophil trajectories. Based on this spatio-temporal information, a comprehensive parameter set is extracted from each time series describing the neutrophils' motility, including velocity, directness and neutrophil chemotaxis. To characterize the latter, a sector analysis was employed, enabling quantification of the neutrophils' response to the chemoattractant. Using this hardware and software framework we were able to identify typical migration profiles of the chemoattractants IL-8, fMLP, and LTB4, the effect of the matrices FN versus HEM, as well as the response to different medications (Prednisolone). Additionally, a comparison of four asthmatic and
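The velocity and directness parameters mentioned above can be derived from a tracked (x, y) trajectory. A minimal sketch of that calculation (a hypothetical helper, not the authors' SiMA tracking code):

```python
import math

def motility_params(track, dt):
    """Mean velocity and directness for one neutrophil track.

    track: list of (x, y) positions sampled every dt time units
    directness = straight-line displacement / accumulated path length,
    so 1.0 means perfectly straight motion towards the end point.
    """
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    path_len = sum(steps)
    displacement = math.dist(track[0], track[-1])
    velocity = path_len / (dt * len(steps)) if steps else 0.0
    directness = displacement / path_len if path_len else 0.0
    return velocity, directness

# Three unit steps, with the last one turning off the straight line.
v, d = motility_params([(0, 0), (1, 0), (2, 0), (2, 1)], dt=1.0)
```

A chemotaxis sector analysis would additionally classify each end point by its angle relative to the chemoattractant gradient; that part is omitted here.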

  10. AFOSR BRI: Co-Design of Hardware/Software for Predicting MAV Aerodynamics

    Science.gov (United States)

    2016-09-27

    fold was extracted when applying architecture-aware GPU optimizations, resulting in a 371-fold speed-up. By also leveraging algorithmic innovation...mind the strengths of the underlying hardware architecture. Some examples include a block-sparse linear solver. • Characterization of performance...GPU, NVIDIA GPU, and Intel Xeon Phi. • Creation of a prototypical runtime system called CoreTSAR, short for Core Task-Size Adapting Runtime, that

  11. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojans, use of a secure element, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduces designers to the concept of salutar...

  12. Multicore Considerations for Legacy Flight Software Migration

    Science.gov (United States)

    Vines, Kenneth; Day, Len

    2013-01-01

    In this paper we will discuss potential benefits and pitfalls when considering a migration from an existing single core code base to a multicore processor implementation. The results of this study present options that should be considered before migrating fault managers, device handlers and tasks with time-constrained requirements to a multicore flight software environment. Possible future multicore test bed demonstrations are also discussed.

  13. Practical Data Migration

    CERN Document Server

    Morris, Johny

    2012-01-01

    This book is for executives and practitioners tasked with the movement of data from old systems to a new repository. It uses a series of steps guaranteed to get the reader from an empty new system to one that is working and backed by the user population. Using this proven methodology will vastly increase the chances of a successful migration.

  14. Open hardware for open science

    CERN Multimedia

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  15. Semantics-Driven Migration of Java Programs: a Practical Experience

    Directory of Open Access Journals (Sweden)

    Artyom O. Aleksyuk

    2017-01-01

    Full Text Available The purpose of the study is to demonstrate the feasibility of automated code migration to a new set of programming libraries. Code migration is a common task in modern software projects. For example, it may arise when a project should be ported to a more secure or feature-rich library, a new platform or a new version of an already used library. The developed method and tool are based on a formalism, previously created by the authors, for describing library semantics. The formalism specifies library behaviour using a system of extended finite state machines (EFSM). This paper outlines the metamodel designed to specify library descriptions and proposes an easy-to-use domain-specific language (DSL), which can be used to define models for particular libraries. The mentioned metamodel directly forms the code migration procedure. The process of migration is split into five steps, and each step is also described in the paper. The procedure uses an algorithm based on breadth-first search, extended for the needs of the migration task. The models and algorithms were implemented in a prototype of an automated code migration tool. The prototype was tested on both artificial code examples and a real-world open source project. The article describes the experiments performed, the difficulties that arose while migrating the test samples, and how they are solved in the proposed procedure. The results of the experiments indicate that code migration can be successfully automated.
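The breadth-first search step of such a migration procedure can be pictured as a search over EFSM states for the shortest sequence of API calls that drives the target library into a desired state. A simplified sketch with an invented file-like EFSM (not the authors' tool or metamodel):

```python
from collections import deque

def shortest_call_sequence(transitions, start, goal):
    """BFS over an EFSM whose transitions are labelled with API calls.

    transitions: dict state -> list of (api_call, next_state)
    Returns the shortest list of calls from start to goal, or None.
    """
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, calls = queue.popleft()
        if state == goal:
            return calls
        for call, nxt in transitions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, calls + [call]))
    return None

# Hypothetical EFSM for a file-like API in the target library.
efsm = {
    "closed": [("open", "opened")],
    "opened": [("write", "dirty"), ("close", "closed")],
    "dirty":  [("flush", "opened"), ("close", "closed")],
}
seq = shortest_call_sequence(efsm, "closed", "dirty")
```

A real EFSM also carries guard conditions and variables on its transitions; the plain state graph above is enough to show why BFS yields the shortest migration call sequence.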

  16. Task Decomposition in Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory

    2014-06-01

    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditional human-factors-driven approaches would tend to look for opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  17. Open Hardware Business Models

    OpenAIRE

    Edy Ferreira

    2008-01-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  18. Coherent visualization of spatial data adapted to roles, tasks, and hardware

    Science.gov (United States)

    Wagner, Boris; Peinsipp-Byma, Elisabeth

    2012-06-01

    Modern crisis management requires users with different roles and computer environments to deal with a high volume of diverse data from different sources. For this purpose, Fraunhofer IOSB has developed a geographic information system (GIS) which supports the user depending on the available data and the task to be solved. The system provides merging and visualization of spatial data from various civilian and military sources. It supports the most common spatial data standards (OGC, STANAG) as well as some proprietary interfaces, regardless of whether these are file-based or database-based. To set the visualization rules, generic Styled Layer Descriptors (SLDs) are used, which are an Open Geospatial Consortium (OGC) standard. SLDs allow specifying which data are shown, when and how. The defined SLDs consider the users' roles and task requirements. In addition, it is possible to use different displays, and the visualization also adapts to the individual resolution of the display. Too high or too low information density is avoided. Our system also enables users with different roles to work together simultaneously using the same data base. Every user is provided with appropriate and coherent spatial data depending on his current task. These refined spatial data are served via the OGC services Web Map Service (WMS: server-side rendered raster maps) or Web Map Tile Service (WMTS: pre-rendered and cached raster maps).

  19. HwPMI: An Extensible Performance Monitoring Infrastructure for Improving Hardware Design and Productivity on FPGAs

    Directory of Open Access Journals (Sweden)

    Andrew G. Schmidt

    2012-01-01

Full Text Available Designing hardware cores for FPGAs can quickly become a complicated task, difficult even for experienced engineers. With the addition of more sophisticated development tools and maturing high-level language-to-gates techniques, designs can be rapidly assembled; however, when the design is evaluated on the FPGA, the performance may not be what was expected. Therefore, an engineer may need to augment the design to include performance monitors to better understand the bottlenecks in the system or to aid in the debugging of the design. Unfortunately, identifying what to monitor and adding the infrastructure to retrieve the monitored data can be a challenging and time-consuming task. Our work alleviates this effort. We present the Hardware Performance Monitoring Infrastructure (HwPMI), which includes a collection of software tools and hardware cores that can be used to profile the current design, recommend and insert performance monitors directly into the HDL or netlist, and retrieve the monitored data with minimal invasiveness to the design. Three applications are used to demonstrate and evaluate HwPMI’s capabilities. The results are highly encouraging as the infrastructure adds numerous capabilities while requiring minimal effort by the designer and low resource overhead to the existing design.

  20. Open Hardware Business Models

    Directory of Open Access Journals (Sweden)

    Edy Ferreira

    2008-04-01

    Full Text Available In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  1. Fingerprint Sensors: Liveness Detection Issue and Hardware based Solutions

    Directory of Open Access Journals (Sweden)

    Shahzad Memon

    2012-01-01

Full Text Available Securing an automated and unsupervised fingerprint recognition system is one of the most critical and challenging tasks in government and commercial applications. In these systems, the detection of liveness of a finger placed on a fingerprint sensor is a major issue that needs to be addressed in order to ensure the credibility of the system. The main focus of this paper is to review the existing fingerprint sensing technologies in terms of liveness detection and to discuss hardware-based ‘liveness detection’ techniques reported in the literature for automatic fingerprint biometrics.

  2. Automated Tracking of Cell Migration with Rapid Data Analysis.

    Science.gov (United States)

    DuChez, Brian J

    2017-09-01

Cell migration is essential for many biological processes including development, wound healing, and metastasis. However, studying cell migration often requires the time-consuming and labor-intensive task of manually tracking cells. To accelerate the task of obtaining coordinate positions of migrating cells, we have developed a graphical user interface (GUI) capable of automating the tracking of fluorescently labeled nuclei. This GUI provides an intuitive user interface that makes automated tracking accessible to researchers with no image-processing experience or familiarity with particle-tracking approaches. Using this GUI, users can interactively determine a minimum of four parameters to identify fluorescently labeled cells and automate acquisition of cell trajectories. Additional features allow for batch processing of numerous time-lapse images, curation of unwanted tracks, and subsequent statistical analysis of tracked cells. Statistical outputs allow users to evaluate migratory phenotypes, including cell speed, distance, displacement, and persistence, as well as measures of directional movement, such as forward migration index (FMI) and angular displacement. Copyright © 2017 John Wiley & Sons, Inc.
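The migratory statistics named at the end of this record (speed, distance, displacement, persistence, FMI) are straightforward to compute from tracked coordinates. A minimal sketch, assuming 2D positions sampled at a fixed interval and using net displacement divided by total path length as the FMI definition (one common convention, not necessarily the tool's exact formula):

```python
import math

def migration_metrics(track, dt=1.0, axis=0):
    """Compute common cell-migration statistics from a list of (x, y)
    positions sampled every dt time units."""
    steps = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(track, track[1:])]
    path = sum(math.hypot(dx, dy) for dx, dy in steps)       # total distance
    net = math.hypot(track[-1][0] - track[0][0],
                     track[-1][1] - track[0][1])             # net displacement
    return {
        "speed": path / (dt * len(steps)),                    # mean speed
        "distance": path,
        "displacement": net,
        "persistence": net / path if path else 0.0,           # directionality ratio
        "fmi": (track[-1][axis] - track[0][axis]) / path if path else 0.0,
    }

m = migration_metrics([(0, 0), (1, 0), (2, 0), (3, 0)])
# a perfectly straight track gives persistence and x-FMI of 1.0
```

Batch analysis of many tracks then reduces to mapping this function over each curated trajectory.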

  3. Migration from a terrestrial network to a satellite network - Risks/constraints/payoffs

    Science.gov (United States)

    Homon, K. A.

    The migration method is regarded as a straightforward approach to a customer's requirements while addressing the known constraints. The migration plan organizes, controls, and communicates the extensive number of tasks, schedules, technical details, and node-by-node conversion details using the selected migration method as its cornerstone. It is noted that a successful migration plan must also provide the flexibility and robustness necessary to handle the unforeseen changes that will occur over the three- to four-year migration period. A baseline network provides a solid structural underpinning for the migration plan.

  4. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  5. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their subcontractors and the Boeing Prime contract out of Johnson Space Center, provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially, they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010, and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  6. Hardware-Efficient Design of Real-Time Profile Shape Matching Stereo Vision Algorithm on FPGA

    Directory of Open Access Journals (Sweden)

    Beau Tippetts

    2014-01-01

Full Text Available A variety of platforms, such as micro-unmanned vehicles, are limited in the amount of computational hardware they can support due to weight and power constraints. An efficient stereo vision algorithm implemented on an FPGA would be able to minimize payload and power consumption in micro-unmanned vehicles, while providing 3D information and still leaving computational resources available for other processing tasks. This work presents a hardware design of the efficient profile shape matching stereo vision algorithm. Hardware resource usage is presented for the targeted micro-UV platform, Helio-copter, that uses the Xilinx Virtex 4 FX60 FPGA. Less than a fifth of the resources on this FPGA were used to produce dense disparity maps for image sizes up to 450 × 375, with the ability to scale up easily by increasing BRAM usage. A comparison of accuracy, speed performance, and resource usage is given against a census transform-based stereo vision FPGA implementation by Jin et al. Results show that the profile shape matching algorithm is an efficient real-time stereo vision algorithm for hardware implementation on resource-limited systems such as micro-unmanned vehicles.
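For readers unfamiliar with disparity computation, a minimal sum-of-absolute-differences (SAD) search over one image row illustrates the kind of work such stereo hardware accelerates. This is a generic textbook baseline, not the profile shape matching algorithm of this record:

```python
def disparity_row(left, right, window=2, max_disp=16):
    """Brute-force SAD disparity search over one rectified image row.
    For each left-image pixel, find the horizontal shift d of the right
    image that minimizes the absolute-difference cost in a small window."""
    w = len(left)
    disp = [0] * w
    for x in range(w):
        lo, hi = max(0, x - window), min(w, x + window + 1)
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, lo) + 1):   # keep right index in bounds
            cost = sum(abs(left[i] - right[i - d]) for i in range(lo, hi))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# Synthetic row pair: the left view is the right view shifted 2 px.
right = [1, 2, 3, 4, 5, 6, 7, 8]
left = [1, 1, 1, 2, 3, 4, 5, 6]
disp = disparity_row(left, right)   # interior pixels recover the 2 px shift
```

An FPGA implementation pipelines exactly this window-cost comparison across all rows and disparities in parallel, which is why resource usage rather than raw speed becomes the limiting factor.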

  7. Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures

    KAUST Repository

    Sukkari, Dalal E.; Ltaief, Hatem; Faverge, Mathieu; Keyes, David E.

    2017-01-01

    This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.

  8. Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures

    KAUST Repository

    Sukkari, Dalal E.

    2017-09-29

This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.

  9. Asynchronous Task-Based Polar Decomposition on Manycore Architectures

    KAUST Repository

    Sukkari, Dalal

    2016-10-25

This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
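The Halley iteration underlying the QDWH records above can be illustrated with a plain (non-dynamically-weighted) variant in NumPy. This simplified sketch omits the QR-based formulation, dynamic weighting, and task-based DAG parallelism that the papers contribute — it only shows the fixed-point iteration that drives the singular values of the scaled matrix toward 1:

```python
import numpy as np

def polar_halley(A, tol=1e-12, max_iter=50):
    """Polar decomposition A = U @ H via the plain Halley iteration
    X_{k+1} = X_k (3I + X_k^H X_k)(I + 3 X_k^H X_k)^{-1}.
    Not the dynamically weighted QDWH of the paper -- a didactic sketch."""
    X = A / np.linalg.norm(A, 2)            # scale so all singular values <= 1
    I = np.eye(A.shape[1])
    for _ in range(max_iter):
        G = X.conj().T @ X
        X_new = X @ (3 * I + G) @ np.linalg.inv(I + 3 * G)
        converged = np.linalg.norm(X_new - X, "fro") < tol
        X = X_new
        if converged:
            break
    U = X                                    # orthogonal (unitary) polar factor
    H = U.conj().T @ A                       # Hermitian positive semidefinite factor
    H = (H + H.conj().T) / 2                 # symmetrize against round-off
    return U, H

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
U, H = polar_halley(A)
```

In QDWH the dynamic weights accelerate exactly this map for small singular values, and each matrix product or factorization becomes a node in the StarPU task DAG.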

  10. Shorebird Migration Patterns in Response to Climate Change: A Modeling Approach

    Science.gov (United States)

    Smith, James A.

    2010-01-01

The availability of satellite remote sensing observations at multiple spatial and temporal scales, coupled with advances in climate modeling and information technologies offer new opportunities for the application of mechanistic models to predict how continental scale bird migration patterns may change in response to environmental change. In earlier studies, we explored the phenotypic plasticity of a migratory population of Pectoral sandpipers by simulating the movement patterns of an ensemble of 10,000 individual birds in response to changes in stopover locations as an indicator of the impacts of wetland loss and inter-annual variability on the fitness of migratory shorebirds. We used an individual based, biophysical migration model, driven by remotely sensed land surface data, climate data, and biological field data. Mean stop-over durations and stop-over frequency with latitude predicted from our model for nominal cases were consistent with results reported in the literature and available field data. In this study, we take advantage of new computing capabilities enabled by recent GP-GPU computing paradigms and commodity hardware (general-purpose computing on graphics processing units). Several aspects of our individual based (agent modeling) approach lend themselves well to GP-GPU computing. We have been able to allocate compute-intensive tasks to the graphics processing units, and now simulate ensembles of 400,000 birds at varying spatial resolutions along the central North American flyway. We are incorporating additional, species specific, mechanistic processes to better reflect the processes underlying bird phenotypic plasticity responses to different climate change scenarios in the central U.S.

  11. Repeat migration and disappointment.

    Science.gov (United States)

    Grant, E K; Vanderkamp, J

    1986-01-01

This article investigates the determinants of repeat migration among the 44 regions of Canada, using information from a large micro-database which spans the period 1968 to 1971. The explanation of repeat migration probabilities is a difficult task, and this attempt is only partly successful. Many of the explanatory variables are not significant, and the overall explanatory power of the equations is not high. In the area of personal characteristics, the variables related to age, sex, and marital status are generally significant and with expected signs. The distance variable has a strongly positive effect on onward move probabilities. Variables related to prior migration experience have an important impact that differs between return and onward probabilities. In particular, the occurrence of prior moves has a striking effect on the probability of onward migration. The variable representing disappointment, or relative success of the initial move, plays a significant role in explaining repeat migration probabilities. The disappointment variable represents the ratio of actual versus expected wage income in the year after the initial move, and its effect on both repeat migration probabilities is always negative and almost always highly significant. The repeat probabilities diminish after a year's stay in the destination region, but disappointment in the most recent year still has a bearing on the delayed repeat probabilities. While the quantitative impact of the disappointment variable is not large, it is difficult to draw comparisons since similar estimates are not available elsewhere.

  12. Avian Alert - a bird migration early warning system

    OpenAIRE

    van Gasteren, H.; Shamoun-Baranes, J.; Ginati, A.; Garofalo, G.

    2008-01-01

    Every year billions of birds migrate from breeding areas to their wintering ranges, some travelling over 10,000 km. Stakeholders interested in aviation flight safety, spread of disease, conservation, education, urban planning, meteorology, wind turbines and bird migration ecology are interested in information on bird movements. Collecting and disseminating useful information about such mobile creatures exhibiting diverse behaviour is no simple task. However, ESA’s Integrated Application Promo...

  13. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

    This book introduces readers to various threats faced during design and fabrication by today’s integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or “IC Overproduction,” insertion of malicious circuits, referred as “Hardware Trojans”, which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  14. Gas migration through cement slurries analysis: A comparative laboratory study

    Directory of Open Access Journals (Sweden)

    Arian Velayati

    2015-12-01

Full Text Available Cementing is an essential part of every drilling operation. Protection of the wellbore from formation fluid invasion is one of the primary tasks of a cement job. Failure in this task results in catastrophic events, such as blowouts. Hence, in order to save the well and avoid risky and operationally difficult remedial cementing, the slurry must be optimized to resist the gas migration phenomenon. In this paper, the performance of conventional slurries facing gas invasion was reviewed and compared with a modified slurry containing a special gas migration additive, using a fluid migration analyzer device. The results of this study reveal the importance of proper additive utilization in slurry formulations. The rate of gas flow through the slurry in neat cement is very high; by using different types of additives, we observe obvious changes in the performance of the cement system. The rate of gas flow in neat class H cement was reported as 36000 ml/hr, while the optimized cement formulation with anti-gas-migration and thixotropic agents showed a gas flow rate of 13.8 ml/hr.

  15. Expert System analysis of non-fuel assembly hardware and spent fuel disassembly hardware: Its generation and recommended disposal

    International Nuclear Information System (INIS)

    Williamson, D.A.

    1991-01-01

Almost all of the effort being expended on radioactive waste disposal in the United States is being focused on the disposal of spent Nuclear Fuel, with little consideration for other areas that will have to be disposed of in the same facilities. One area of radioactive waste that has not been addressed adequately, because it is considered a secondary part of the waste issue, is the disposal of the various Non-Fuel Bearing Components of the reactor core. These hardware components fall somewhat arbitrarily into two categories: Non-Fuel Assembly (NFA) hardware and Spent Fuel Disassembly (SFD) hardware. This work provides a detailed examination of the generation and disposal of NFA hardware and SFD hardware by the nuclear utilities of the United States as it relates to the Civilian Radioactive Waste Management Program. All available sources of data on NFA and SFD hardware are analyzed with particular emphasis given to the Characteristics Data Base developed by Oak Ridge National Laboratory and the characterization work performed by Pacific Northwest Laboratories and Rochester Gas & Electric. An Expert System developed as a portion of this work is used to assist in the prediction of quantities of NFA hardware and SFD hardware that will be generated by the United States' utilities. Finally, the hardware waste management practices of the United Kingdom, France, Germany, Sweden, and Japan are studied for possible application to the disposal of domestic hardware wastes. As a result of this work, a general classification scheme for NFA and SFD hardware was developed. Only NFA and SFD hardware constructed of zircaloy and experiencing a burnup of less than 70,000 MWD/MTIHM, and PWR control rods constructed of stainless steel, are considered Low-Level Waste. All other hardware is classified as Greater-Than-Class-C waste.

  16. Overview of job and task analysis

    International Nuclear Information System (INIS)

    Gertman, D.I.

    1984-01-01

    During the past few years the nuclear industry has become concerned with predicting human performance in nuclear power plants. One of the best means available at the present time to make sure that training, procedures, job performance aids and plant hardware match the capabilities and limitations of personnel is by performing a detailed analysis of the tasks required in each job position. The approved method for this type of analysis is referred to as job or task analysis. Job analysis is a broader type of analysis and is usually thought of in terms of establishing overall performance objectives, and in establishing a basis for position descriptions. Task analysis focuses on the building blocks of task performance, task elements, and places them within the context of specific performance requirements including time to perform, feedback required, special tools used, and required systems knowledge. The use of task analysis in the nuclear industry has included training validation, preliminary risk screening, and procedures development

  17. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java.

  18. HARDWARE TROJAN IDENTIFICATION AND DETECTION

    OpenAIRE

    Samer Moein; Fayez Gebali; T. Aaron Gulliver; Abdulrahman Alkandari

    2017-01-01

The majority of techniques developed to detect hardware trojans are based on specific attributes. Further, the ad hoc approaches employed to design methods for trojan detection are largely ineffective. Hardware trojans have a number of attributes which can be used to systematically develop detection techniques. Based on this concept, a detailed examination of current trojan detection techniques and the characteristics of existing hardware trojans is presented. This is used to dev...

  19. PARTICLE SWARM OPTIMIZATION OF TASK SCHEDULING IN CLOUD COMPUTING

    OpenAIRE

    Payal Jaglan*, Chander Diwakar

    2016-01-01

Resource provisioning and pricing modeling in cloud computing make it an inevitable technology on both the developer and consumer ends. Easy accessibility of software and freedom of hardware configuration increase its demand in the IT industry, as does its ability to provide a user-friendly environment, software independence, quality, a pricing index, and easy accessibility of infrastructure via the internet. Task scheduling plays an important role in cloud computing systems. Task scheduling in cloud computing mea...
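As a toy illustration of particle swarm optimization applied to task scheduling (a generic sketch, not the method of this record), the code below maps tasks to machines to minimize makespan. Continuous particle positions are rounded to machine indices, a common discretization trick; all parameter values (inertia 0.7, acceleration 1.4, swarm size) are illustrative assumptions:

```python
import random

def pso_schedule(task_lengths, n_machines, n_particles=20, iters=100, seed=1):
    """Toy PSO: each particle position assigns every task a (continuous)
    machine coordinate; fitness is the makespan of the rounded assignment."""
    rng = random.Random(seed)
    n = len(task_lengths)

    def makespan(pos):
        load = [0.0] * n_machines
        for t, x in zip(task_lengths, pos):
            load[int(x) % n_machines] += t
        return max(load)                      # busiest machine's load

    swarm = [[rng.uniform(0, n_machines) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=makespan)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.4 * r1 * (pbest[i][d] - p[d])    # cognitive pull
                             + 1.4 * r2 * (gbest[d] - p[d]))      # social pull
                p[d] += vel[i][d]
            if makespan(p) < makespan(pbest[i]):
                pbest[i] = p[:]
            if makespan(p) < makespan(gbest):
                gbest = p[:]
    return [int(x) % n_machines for x in gbest], makespan(gbest)

# Six tasks on two machines; a balanced split gives makespan 8.
assign, ms = pso_schedule([4, 3, 3, 2, 2, 2], n_machines=2)
```

Real cloud schedulers add constraints (VM capacity, pricing, deadlines) to the fitness function, but the swarm update loop keeps the same shape.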

  20. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

With many servers and server parts, the environment of warehouse-sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better, a project codenamed “hardware hound”, focusing on hardware failure trending and hardware inventory, has been started at CERN. By creating and using a hardware-oriented data set - the inventory - with detailed information on servers and their parts, as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  1. NCRP soil contamination task group

    International Nuclear Information System (INIS)

    Jacobs, D.G.

    1987-01-01

The National Council of Radiation Protection and Measurements (NCRP) has recently established a Task Group on Soil Contamination to describe and evaluate the migration pathways and modes of radiation exposure that can potentially arise due to radioactive contamination of soil. The purpose of this paper is to describe the scientific principles for evaluation of soil contamination which can be used as a basis for derivation of soil contamination limits for specific situations. This paper describes scenarios that can lead to soil contamination, important characteristics of soil contamination, the subsequent migration pathways and exposure modes, and the application of principles in the report in deriving soil contamination limits. The migration pathways and exposure modes discussed in this paper include direct radiation exposure and the exhalation of gases.

  2. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  3. Software and Hardware control of a hybrid robot for switching between leg-type and wheel-type modes

    OpenAIRE

    Botelho, Wagner Tanaka; Okada, Tokuji; Mahmoud, Abeer; Shimizu, Toshimi

    2011-01-01

One of the objectives of the paper is to describe the hybrid robot PEOPLER-II (Perpendicularly Oriented Planetary Legged Robot) with regard to switching between leg-type and wheel-type modes. Our robot has a simpler design and control system than other hybrid robots. The software and hardware control in the process of performing five robot tasks is considered. These are walking, rolling, switching, turning and spinning. In the switching task, we show the control method based on minimization of...

  4. An evaluation of Skylab habitability hardware

    Science.gov (United States)

    Stokes, J.

    1974-01-01

    For effective mission performance, participants in space missions lasting 30-60 days or longer must be provided with hardware to accommodate their personal needs. Such habitability hardware was provided on Skylab. Equipment defined as habitability hardware was that equipment composing the food system, water system, sleep system, waste management system, personal hygiene system, trash management system, and entertainment equipment. Equipment not specifically defined as habitability hardware but which served that function were the Wardroom window, the exercise equipment, and the intercom system, which was occasionally used for private communications. All Skylab habitability hardware generally functioned as intended for the three missions, and most items could be considered as adequate concepts for future flights of similar duration. Specific components were criticized for their shortcomings.

  5. Is Hardware Removal Recommended after Ankle Fracture Repair?

    Directory of Open Access Journals (Sweden)

    Hong-Geun Jung

    2016-01-01

Full Text Available The indications and clinical necessity for routine hardware removal after treating ankle or distal tibia fracture with open reduction and internal fixation are disputed even when hardware-related pain is insignificant. Thus, we determined the clinical effects of routine hardware removal irrespective of the degree of hardware-related pain, especially from the perspective of patients’ daily activities. This study was conducted on 80 consecutive cases (78 patients) treated by surgery and hardware removal after bony union. There were 56 ankle and 24 distal tibia fractures. The hardware-related pain, ankle joint stiffness, discomfort on ambulation, and patient satisfaction were evaluated before and at least 6 months after hardware removal. Pain score before hardware removal was 3.4 (range 0 to 6) and decreased to 1.3 (range 0 to 6) after removal. 58 (72.5%) patients experienced improved ankle stiffness, 65 (81.3%) less discomfort while walking on uneven ground, and 63 (80.8%) were satisfied with hardware removal. These results suggest that routine hardware removal after ankle or distal tibia fracture could ameliorate hardware-related pain and improve daily activities and patient satisfaction even when the hardware-related pain is minimal.

  6. Shielded battery syndrome: a new hardware complication of deep brain stimulation.

    Science.gov (United States)

    Chelvarajah, Ramesh; Lumsden, Daniel; Kaminska, Margaret; Samuel, Michael; Hulse, Natasha; Selway, Richard P; Lin, Jean-Pierre; Ashkan, Keyoumars

    2012-01-01

    Deep brain stimulation hardware is constantly advancing. The last few years have seen the introduction of rechargeable cell technology into the implanted pulse generator design, allowing for longer battery life and fewer replacement operations. The Medtronic® system requires an additional pocket adaptor when revising a non-rechargeable battery such as their Kinetra® to their rechargeable Activa® RC. This additional hardware item can, if it migrates superficially, become an impediment to the recharging of the battery and negate the intended technological advance. To report the emergence of the 'shielded battery syndrome', which has not been previously described. We reviewed our deep brain stimulation database to identify cases of recharging difficulties reported by patients with Activa RC implanted pulse generators. Two cases of shielded battery syndrome were identified. The first required surgery to reposition the adaptor to the deep aspect of the subcutaneous pocket. In the second case, it was possible to perform external manual manipulation to restore the adaptor to its original position deep to the battery. We describe strategies to minimise the occurrence of the shielded battery syndrome and advise vigilance in all patients who experience difficulty with recharging after replacement surgery of this type for the implanted pulse generator. Copyright © 2012 S. Karger AG, Basel.

  7. Door Hardware and Installations; Carpentry: 901894.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    The curriculum guide outlines a course designed to provide instruction in the selection, preparation, and installation of hardware for door assemblies. The course is divided into five blocks of instruction (introduction to doors and hardware, door hardware, exterior doors and jambs, interior doors and jambs, and a quinmester post-test) totaling…

  8. From Open Source Software to Open Source Hardware

    OpenAIRE

    Viseur , Robert

    2012-01-01

    Part 2: Lightning Talks; International audience; The open source software principles progressively give rise to new initiatives for culture (free culture), data (open data) or hardware (open hardware). Open hardware is experiencing significant growth, but the business models and legal aspects are not well known. This paper is dedicated to the economics of open hardware. We define the open hardware concept and determine intellectual property tools we can apply to open hardware, with a str...

  9. ZEUS hardware control system

    Science.gov (United States)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-12-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.

  10. ZEUS hardware control system

    International Nuclear Information System (INIS)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-01-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users. (orig.)

  11. TASK ALLOCATION IN GEO-DISTRIBUTED CYBER-PHYSICAL SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Aggarwal, Rachel; Smidts, Carol

    2017-03-01

    This paper studies the task allocation algorithm for a distributed test facility (DTF), which aims to assemble geo-distributed cyber (software) and physical (hardware-in-the-loop) components into a prototype cyber-physical system (CPS). This allows low-cost testing on an early conceptual prototype (ECP) of the ultimate CPS (UCPS) to be developed. The DTF provides an instrumentation interface for carrying out reliability experiments remotely, such as fault propagation analysis and in-situ testing of hardware and software components in a simulated environment. Unfortunately, the geo-distribution introduces an overhead that is not inherent to the UCPS, i.e. a significant time delay in communication that threatens the stability of the ECP and is not an appropriate representation of the behavior of the UCPS. This can be mitigated by implementing a task allocation algorithm to find a suitable configuration and assign the software components to appropriate computational locations dynamically. This would allow the ECP to operate more efficiently with less probability of becoming unstable due to the delays introduced by geo-distribution. The task allocation algorithm proposed in this work uses a Monte Carlo approach along with dynamic programming to identify the optimal network configuration to keep the time delays to a minimum.
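The Monte Carlo part of the search described in this abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function name, the `delay`/`coupling` data layout, and the omission of the dynamic-programming refinement are all this note's assumptions.

```python
import random

def monte_carlo_allocate(components, sites, delay, coupling, trials=2000, seed=0):
    """Sample random placements of software components onto compute sites
    and keep the placement with the smallest total communication delay.
    delay[(s1, s2)]  : network delay between two distinct sites (sorted pair)
    coupling[(a, b)] : message rate between two communicating components
    """
    rng = random.Random(seed)

    def cost(placement):
        # Only component pairs split across different sites pay network delay.
        return sum(rate * delay[tuple(sorted((placement[a], placement[b])))]
                   for (a, b), rate in coupling.items()
                   if placement[a] != placement[b])

    best, best_cost = None, float("inf")
    for _ in range(trials):
        placement = {c: rng.choice(sites) for c in components}
        c = cost(placement)
        if c < best_cost:
            best, best_cost = placement, c
    return best, best_cost
```

With a single tightly coupled pair of components, the search quickly discovers that co-locating them eliminates the communication delay entirely.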

  12. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware from each test stand varies, drivers for each stand have to be made. These drivers will act more like plugins for the software. If the software is being used in E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also be filled with hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.

  13. Hardware standardization for embedded systems

    International Nuclear Information System (INIS)

    Sharma, M.K.; Kalra, Mohit; Patil, M.B.; Mohanty, Ashutos; Ganesh, G.; Biswas, B.B.

    2010-01-01

    Reactor Control Division (RCnD) has been one of the main designers of safety and safety related systems for power reactors. These systems have been built using in-house developed hardware. Since the present set of hardware was designed long ago, a need was felt to design a new family of hardware boards. A Working Group on Electronics Hardware Standardization (WG-EHS) was formed with an objective to develop a family of boards, which is general purpose enough to meet the requirements of the system designers/end users. RCnD undertook the responsibility of design, fabrication and testing of boards for embedded systems. VME and a proprietary I/O bus were selected as the two system buses. The boards have been designed based on present day technology and components. The intelligence of these boards has been implemented on FPGA/CPLD using VHDL. This paper outlines the various boards that have been developed with a brief description. (author)

  14. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.

  15. Hardware device binding and mutual authentication

    Science.gov (United States)

    Hamlet, Jason R; Pierson, Lyndon G

    2014-03-04

    Detection and deterrence of device tampering and subversion by substitution may be achieved by including a cryptographic unit within a computing device for binding multiple hardware devices and mutually authenticating the devices. The cryptographic unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a binding PUF value. The cryptographic unit uses the binding PUF value during an enrollment phase and subsequent authentication phases. During a subsequent authentication phase, the cryptographic unit uses the binding PUF values of the multiple hardware devices to generate a challenge to send to the other device, and to verify a challenge received from the other device to mutually authenticate the hardware devices.
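The enrollment/challenge flow in this abstract can be illustrated with a toy model. Everything here is an assumption of the sketch, not the patent's scheme: the PUF is simulated by a stored random secret (a real PUF derives its value from physical silicon variation), and the `Device`, `enroll`, and key-derivation details are invented for illustration.

```python
import hashlib
import hmac
import os

class Device:
    """Toy model of a hardware device with a binding PUF value. The PUF is
    simulated with a stored random secret; a real PUF derives this value
    from uncontrollable physical variation in the silicon."""

    def __init__(self, puf_value: bytes):
        self._puf = puf_value      # binding PUF value, never leaves the device
        self._link_key = None      # shared key established at enrollment

    @staticmethod
    def enroll(a: "Device", b: "Device") -> None:
        # Enrollment phase: bind the two devices by deriving a shared link
        # key from both binding PUF values.
        material = bytes(x ^ y for x, y in zip(a._puf, b._puf))
        a._link_key = b._link_key = hashlib.sha256(material).digest()

    def challenge(self) -> bytes:
        return os.urandom(16)

    def respond(self, nonce: bytes) -> bytes:
        # The response is keyed on the link key derived from the PUF values.
        return hmac.new(self._link_key, nonce, hashlib.sha256).digest()

    def verify(self, nonce: bytes, response: bytes) -> bool:
        return hmac.compare_digest(self.respond(nonce), response)

# Mutual authentication: each device challenges the other.
a, b = Device(os.urandom(32)), Device(os.urandom(32))
Device.enroll(a, b)
na, nb = a.challenge(), b.challenge()
assert a.verify(na, b.respond(na)) and b.verify(nb, a.respond(nb))
```

A device enrolled with a different partner cannot produce a valid response, which is the substitution-detection property the abstract describes.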

  16. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware components of a mobile device is described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and,

  17. Modeling and Design of Fault-Tolerant and Self-Adaptive Reconfigurable Networked Embedded Systems

    Directory of Open Access Journals (Sweden)

    Jürgen Teich

    2006-06-01

    Full Text Available Automotive, avionic, or body-area networks are systems that consist of several communicating control units specialized for certain purposes. Typically, different constraints regarding fault tolerance, availability and also flexibility are imposed on these systems. In this article, we will present a novel framework for increasing fault tolerance and flexibility by solving the problem of hardware/software codesign online. Based on field-programmable gate arrays (FPGAs) in combination with CPUs, we allow migrating tasks implemented in hardware or software from one node to another. Moreover, if not enough hardware/software resources are available, the migration of functionality from hardware to software or vice versa is provided. Supporting such flexibility through services integrated in a distributed operating system for networked embedded systems is a substantial step towards self-adaptive systems. Besides the formal definition of methods and concepts, we describe in detail a first implementation of a reconfigurable networked embedded system running automotive applications.

  18. Test system design for Hardware-in-Loop evaluation of PEM fuel cells and auxiliaries

    Energy Technology Data Exchange (ETDEWEB)

    Randolf, Guenter; Moore, Robert M. [Hawaii Natural Energy Institute, University of Hawaii, Honolulu, HI (United States)

    2006-07-14

    In order to evaluate the dynamic behavior of proton exchange membrane (PEM) fuel cells and their auxiliaries, the dynamic capability of the test system must exceed the dynamics of the fastest component within the fuel cell or auxiliary component under test. This criterion is even more critical when a simulated component of the fuel cell system (e.g., the fuel cell stack) is replaced by hardware and Hardware-in-Loop (HiL) methodology is employed. This paper describes the design of a very fast dynamic test system for fuel cell transient research and HiL evaluation. The integration of the real time target (which runs the simulation), the test stand PC (that controls the operation of the test stand), and the programmable logic controller (PLC), for safety and low-level control tasks, into one single integrated unit is successfully completed. (author)

  19. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity, the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS hardware, and thus there is much more incentive to preserve it. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have had only a matter of hours of operation, yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications, including expanding the power infrastructure and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity, and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  20. Constructing Hardware in a Scale Embedded Language

    Energy Technology Data Exchange (ETDEWEB)

    2014-08-21

    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

  1. Hardware Objects for Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Thalinger, Christian; Korsholm, Stephan

    2008-01-01

    Java, as a safe and platform independent language, avoids access to low-level I/O devices or direct memory access. In standard Java, low-level I/O is not a concern; it is handled by the operating system. However, in the embedded domain resources are scarce and a Java virtual machine (JVM) without an underlying middleware is an attractive architecture. When running the JVM on bare metal, we need access to I/O devices from Java; therefore we investigate a safe and efficient mechanism to represent I/O devices as first class Java objects, where device registers are represented by object fields. Access to those registers is safe as Java's type system regulates it. The access is also fast as it is directly performed by the bytecodes getfield and putfield. Hardware objects thus provide an object-oriented abstraction of low-level hardware devices. As a proof of concept, we have implemented hardware objects...

  2. QA for helical tomotherapy: Report of the AAPM Task Group 148

    Energy Technology Data Exchange (ETDEWEB)

    Langen, Katja M.; Papanikolaou, Niko; Balog, John; Crilly, Richard; Followill, David; Goddu, S. Murty; Grant, Walter III; Olivera, Gustavo; Ramsey, Chester R.; Shi Chengyu [Department of Radiation Oncology, M. D. Anderson Cancer Center Orlando, Orlando, Florida 32806 (United States); Department of Radiation Oncology, Cancer Therapy and Research Center, University of Texas Health Science Center at San Antonio, San Antonio, Texas 78229 (United States); Mohawk Valley Medical Physics, Rome, New York 13440 (United States); Department of Radiation Medicine, Oregon Health and Science University, Portland, Oregon 97239 (United States); Section of Outreach Physics, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States); Department of Radiology/Section of Radiation Oncology, Baylor College of Medicine, Methodist Hospital, Houston, Texas 77030 (United States); TomoTherapy, Inc., Madison, Wisconsin 53717 and Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53706 (United States); Thompson Cancer Survival Center, Knoxville, Tennessee 37916 (United States); Department of Radiation Oncology, Cancer Therapy and Research Center, University of Texas Health Science Center at San Antonio, San Antonio, Texas 78229 (United States)

    2010-09-15

    Helical tomotherapy is a relatively new modality with integrated treatment planning and delivery hardware for radiation therapy treatments. In view of the uniqueness of the hardware design of the helical tomotherapy unit and its implications in routine quality assurance, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 148 to review this modality and make recommendations for quality assurance related methodologies. The specific objectives of this Task Group are: (a) To discuss quality assurance techniques, frequencies, and tolerances and (b) discuss dosimetric verification techniques applicable to this unit. This report summarizes the findings of the Task Group and aims to provide the practicing clinical medical physicist with the insight into the technology that is necessary to establish an independent and comprehensive quality assurance program for a helical tomotherapy unit. The emphasis of the report is to describe the rationale for the proposed QA program and to provide example tests that can be performed, drawing from the collective experience of the task group members and the published literature. It is expected that as technology continues to evolve, so will the test procedures that may be used in the future to perform comprehensive quality assurance for helical tomotherapy units.

  3. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating the hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  4. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain by Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to a Nios-II processor using Altera's Avalon Memory Mapped protocol.

  5. Task 9. Deployment of photovoltaic technologies: co-operation with developing countries. The role of quality management, hardware certification and accredited training in PV programmes in developing countries

    Energy Technology Data Exchange (ETDEWEB)

    Fitzgerald, M. C. [Institute for Sustainable Power, Highlands Ranch, CO (United States); Oldach, R.; Bates, J. [IT Power Ltd, The Manor house, Chineham (United Kingdom)

    2003-09-15

    This report for the International Energy Agency (IEA) made by Task 9 of the Photovoltaic Power Systems (PVPS) programme takes a look at the role of quality management, hardware certification and accredited training in PV programmes in developing countries. The objective of this document is to provide assistance to those project developers that are interested in implementing or improving support programmes for the deployment of PV systems for rural electrification. It is to enable them to address and implement quality assurance measures, with an emphasis on management, technical and training issues and other factors that should be considered for the sustainable implementation of rural electrification programmes. It is considered important that quality also addresses the socio-economic and the socio-technical aspects of a programme concept. The authors summarise that, for a PV programme, there are three important areas of quality control to be implemented: quality management, technical standards and quality of training.

  6. VEG-01: Veggie Hardware Verification Testing

    Science.gov (United States)

    Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond

    2013-01-01

    The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept, and the selection of fertilizer, rooting medium and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on-orbit have been established. Science verification testing was conducted and lettuce plants were successfully grown in prototype Veggie hardware; microbial samples were taken, and plants were harvested, frozen, stored and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.

  7. Implementation of Hardware Accelerators on Zynq

    DEFF Research Database (Denmark)

    Toft, Jakob Kenn

    In recent years it has become obvious that the performance of general-purpose processors is having trouble meeting the requirements of today's high-performance computing applications. This is partly due to the relatively high power consumption, compared to the performance, of general-purpose processors, which has made hardware accelerators an essential part of several datacentres and the world's fastest supercomputers. In this work, two different hardware accelerators were implemented on a Xilinx Zynq SoC platform mounted on the ZedBoard platform. The two accelerators are based on two different... ...of the ARM Cortex-9 processor featured on the Zynq SoC, with regard to execution time, power dissipation and energy consumption. The implementation of the hardware accelerators was successful. Use of the Monte Carlo processor resulted in a significant increase in performance. The Telco hardware accelerator...

  8. Human Factors Considerations in Migration of Unmanned Aircraft System (UAS) Operator Control

    National Research Council Canada - National Science Library

    Tvaryanas, Anthony P

    2006-01-01

    ..., or both. There are potential advantages to control migration to include mitigating operator vigilance decrements and fatigue, facilitating operator task specialization, and optimizing workload during multi...

  9. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
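The routing idea in this record — identify a defective link in the first network and carry traffic around it through the second, independent network — can be sketched with a plain breadth-first search. This is a generic illustration under assumed names (`bfs_path`, `route_around_fault`, adjacency-dict networks), not the patented implementation.

```python
from collections import deque

def bfs_path(adj, src, dst, bad_links=frozenset()):
    """Breadth-first search that refuses to cross any defective link."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in seen and frozenset((u, v)) not in bad_links:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    return None                       # no fault-free path in this network

def route_around_fault(src, dst, net_a, net_b, defective):
    # Prefer the first network while avoiding the defective link; fall back
    # to the second, independent network when no clean path remains.
    return bfs_path(net_a, src, dst, defective) or bfs_path(net_b, src, dst)
```

When the only path in the first network crosses the defective link, the router transparently switches to the second network, which is the behavior the abstract describes.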

  10. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    Science.gov (United States)

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with the state-of-the-art approaches.
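The two compression ideas in the abstract — drop most connections, then store the survivors as binary integers — can be illustrated on a single weight matrix. This is a generic magnitude-pruning-plus-binarization sketch under this note's own assumptions (function name, per-layer shared scale), not the paper's exact deep adaptive network algorithm.

```python
def prune_and_binarize(weights, keep_ratio=0.5):
    """Keep only the strongest fraction of connections, then store the
    survivors as {-1, +1} integers with one shared scale factor, so a
    dense weight is approximated by scale * binary."""
    flat = sorted((abs(w) for row in weights for w in row), reverse=True)
    k = max(1, int(len(flat) * keep_ratio))
    threshold = flat[k - 1]              # magnitude of the k-th largest weight
    kept = [abs(w) for row in weights for w in row if abs(w) >= threshold]
    scale = sum(kept) / len(kept)        # one scale for the whole layer
    binary = [[0 if abs(w) < threshold else (1 if w > 0 else -1)
               for w in row] for row in weights]
    return binary, scale
```

Pushing `keep_ratio` toward 0.001 corresponds to the 99.9% memory saving the abstract reports, since only sign bits and one scale remain for the kept connections.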

  11. Task Classification Based Energy-Aware Consolidation in Clouds

    Directory of Open Access Journals (Sweden)

    HeeSeok Choi

    2016-01-01

    Full Text Available We consider a cloud data center, in which the service provider supplies virtual machines (VMs) on hosts or physical machines (PMs) to its subscribers for computation in an on-demand fashion. For the cloud data center, we propose a task consolidation algorithm based on task classification (i.e., computation-intensive and data-intensive) and resource utilization (e.g., CPU and RAM). Furthermore, we design a VM consolidation algorithm to balance task execution time and energy consumption without violating a predefined service level agreement (SLA). Unlike the existing research on VM consolidation or scheduling that applies none or single threshold schemes, we focus on a double threshold (upper and lower) scheme, which is used for VM consolidation. More specifically, when a host operates with resource utilization below the lower threshold, all the VMs on the host will be scheduled to be migrated to other hosts and then the host will be powered down, while when a host operates with resource utilization above the upper threshold, a VM will be migrated to avoid using 100% of resource utilization. Based on experimental performance evaluations with real-world traces, we prove that our task classification based energy-aware consolidation algorithm (TCEA) achieves a significant energy reduction without incurring predefined SLA violations.
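The double-threshold rule spelled out in the abstract can be sketched directly. The threshold values, victim-selection rule, and data layout below are this sketch's assumptions, not the paper's tuned parameters.

```python
def consolidation_actions(hosts, lower=0.2, upper=0.8):
    """Apply the double-threshold rule to each host.
    hosts maps a host name to {vm_name: resource utilization fraction}."""
    actions = []
    for host, vms in hosts.items():
        util = sum(vms.values())
        if util < lower:
            # Under-utilized: schedule every VM for migration, then power down.
            actions += [("migrate", vm, host) for vm in vms]
            actions.append(("power_down", host))
        elif util > upper:
            # Over-utilized: migrate one VM to avoid 100% resource utilization.
            victim = min(vms, key=vms.get)   # assume the lightest VM moves
            actions.append(("migrate", victim, host))
    return actions
```

Hosts between the two thresholds are left alone, which is what distinguishes the double-threshold scheme from the single-threshold policies the abstract contrasts it with.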

  12. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
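The constraint that any SIRDS renderer must enforce, namely that certain pixel pairs look identical so the eyes can fuse them at the right depth, can be illustrated on the CPU for one scanline. This is the classic linking scheme (after Thimbleby, Inglis, and Witten), not the texture-based GPU method of the paper; `eye_sep` and `mu` are illustrative parameters.

```python
import random

def sirds_row(depth, eye_sep=60, mu=1.0 / 3.0):
    """Generate one scanline of a random-dot autostereogram.

    depth[x] is in [0, 1] (0 = far plane, 1 = near plane). Pixels that must
    appear identical for the 3D effect are linked and forced equal.
    """
    n = len(depth)
    same = list(range(n))  # same[x]: smaller-index pixel x must match
    for x in range(n):
        # Stereo separation of the two image points for this depth.
        s = round((1 - mu * depth[x]) * eye_sep / (2 - mu * depth[x]))
        left, right = x - s // 2, x - s // 2 + s
        if 0 <= left and right < n:
            # Union the two pixels via their current roots.
            l, r = left, right
            while same[l] != l:
                l = same[l]
            while same[r] != r:
                r = same[r]
            if l != r:
                same[max(l, r)] = min(l, r)
    pix = [0] * n
    for x in range(n):
        root = x
        while same[root] != root:
            root = same[root]
        # Roots get a random dot; linked pixels copy their root's value.
        pix[x] = pix[root] if root < x else random.randint(0, 1)
    return pix
```

For a flat depth map the separation is constant, so the dot pattern repeats with a fixed period across the row.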

  13. Non-fuel bearing hardware melting technology

    International Nuclear Information System (INIS)

    Newman, D.F.

    1993-01-01

    Battelle has developed a portable hardware melter concept that would allow spent fuel rod consolidation operations at commercial nuclear power plants to provide significantly more storage space for other spent fuel assemblies in existing pool racks at lower cost. Using low-pressure compaction, the non-fuel bearing hardware (NFBH) left over from the removal of spent fuel rods from the stainless steel end fittings and the Zircaloy guide tubes and grid spacers still occupies 1/3 to 2/5 of the volume of the consolidated fuel rod assemblies. Melting the non-fuel bearing hardware reduces its volume by a factor of 4 from that achievable with low-pressure compaction. This paper describes: (1) the configuration and design features of Battelle's hardware melter system that permit its portability, (2) the system's throughput capacity, (3) the bases for capital and operating cost estimates, and (4) the status of the NFBH melter demonstration to reduce technical risks for implementation of the concept. Since all NFBH handling and processing operations would be conducted at the reactor site, costs for shipping radioactive hardware to and from a stationary processing facility for volume reduction are avoided. Initial licensing, testing, and installation in the field would follow the successful pattern achieved with rod consolidation technology.

  14. A Practical Introduction to Hardware/Software Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  15. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh-Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.

  16. Real-time scheduling of software tasks

    International Nuclear Information System (INIS)

    Hoff, L.T.

    1995-01-01

    When designing real-time systems, it is often desirable to schedule execution of software tasks based on the occurrence of events. The events may be clock ticks, interrupts from a hardware device, or software signals from other software tasks. If the nature of the events is well understood, this scheduling is normally a static part of the system design. If the nature of the events is not completely understood, or is expected to change over time, it may be necessary to provide a mechanism for adjusting the scheduling of the software tasks. RHIC front-end computers (FECs) provide such a mechanism. The goals in designing this mechanism were to be as independent as possible of the underlying operating system, to allow for future expansion of the mechanism to handle new types of events, and to allow easy configuration. Some considerations which steered the design were the programming paradigm (object-oriented vs. procedural), the programming language, and whether events are merely interesting moments in time or intrinsically have data associated with them. The design also needed to address performance and robustness tradeoffs involving shared task contexts, task priorities, and use of interrupt service routine (ISR) contexts vs. task contexts. This paper will explore these considerations and tradeoffs.

  17. The Effect of Predicted Vehicle Displacement on Ground Crew Task Performance and Hardware Design

    Science.gov (United States)

    Atencio, Laura Ashley; Reynolds, David W.

    2011-01-01

    NASA continues to explore new launch vehicle concepts that will carry astronauts to low-Earth orbit to replace the soon-to-be-retired Space Transportation System (STS) shuttle. A tall, vertically stacked launch vehicle (≥300 ft) is exposed to the natural environment while positioned on the launch pad. Varying directional winds and vortex shedding cause the vehicle to sway in an oscillating motion. Ground crews working high on the tower and inside the vehicle during launch preparations will be subjected to this motion while conducting critical closeout tasks such as mating fluid and electrical connectors and carrying heavy objects. NASA has not experienced performing these tasks in such environments since the Saturn V, which was serviced from a movable (but rigid) service structure; commercial launchers are likewise attended by a service structure that moves away from the vehicle for launch. There is concern that vehicle displacement may hinder ground crew operations, impact the ground system designs, and ultimately affect launch availability. The vehicle sway assessment objective is to replicate predicted frequencies and displacements of these tall vehicles, examine typical ground crew tasks, and provide insight into potential vehicle design considerations and ground crew performance guidelines. This paper outlines the methodology, configurations, and motion testing performed while conducting the vehicle displacement assessment that will be used as a Technical Memorandum for future vertically stacked vehicle designs.

  18. Migration of nuclear criticality safety software from a mainframe to a workstation environment

    International Nuclear Information System (INIS)

    Bowie, L.J.; Robinson, R.C.; Cain, V.R.

    1993-01-01

    The Nuclear Criticality Safety Department (NCSD), Oak Ridge Y-12 Plant, has migrated execution of the Martin Marietta Energy Systems Nuclear Criticality Safety Software (NCSS) from IBM mainframes to a Hewlett-Packard (HP) 9000/730 workstation (NCSSHP). NCSSHP contains the following configuration-controlled modules and cross-section libraries: BONAMI, CSAS, GEOMCHY, ICE, KENO IV, KENO Va, MODIIFY, NITAWL, SCALE, SLTBLIB, XSDRN, UNIXLIB, albedos library, weights library, 16-Group HANSEN-ROACH master library, 27-Group ENDF/B-IV master library, and standard composition library. This paper will discuss the method used to choose the workstation, the hardware setup of the chosen workstation, an overview of Y-12 software quality assurance and configuration control methodology, code validation, difficulties encountered in migrating the codes, and advantages of migrating to a workstation environment.

  19. Transmission delays in hardware clock synchronization

    Science.gov (United States)

    Shin, Kang G.; Ramanathan, P.

    1988-01-01

    Various methods, both with software and hardware, have been proposed to synchronize a set of physical clocks in a system. Software methods are very flexible and economical but suffer an excessive time overhead, whereas hardware methods require no time overhead but are unable to handle transmission delays in clock signals. The effects of nonzero transmission delays in synchronization have been studied extensively in the communication area in the absence of malicious or Byzantine faults. The authors show that it is easy to incorporate the ideas from the communication area into the existing hardware clock synchronization algorithms to take into account the presence of both malicious faults and nonzero transmission delays.
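The idea the authors borrow from the communication area, estimating a remote clock's offset in the presence of nonzero transmission delay, can be sketched with the standard symmetric round-trip calculation. This is an illustration of the principle only, not the fault-tolerant hardware algorithm of the paper:

```python
def estimate_offset(t1, t2, t3, t4):
    """Estimate the offset of a remote clock relative to the local one.

    t1: local send time, t2: remote receive time,
    t3: remote send time, t4: local receive time.
    Under a symmetric-delay assumption the transmission delay cancels:
    offset = ((t2 - t1) + (t3 - t4)) / 2.
    """
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Example: remote clock runs 5 units ahead; one-way delay is 2 units.
t1 = 100.0
t2 = t1 + 2 + 5   # request arrives after delay 2, read on a +5 clock
t3 = t2 + 1       # remote processing takes 1 unit
t4 = t3 - 5 + 2   # reply arrives after delay 2, read on the local clock
```

Note the delay cancels only when it is equal in both directions; asymmetric delays and Byzantine behavior are exactly what the hardware algorithms discussed above must handle.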

  20. A dual-task investigation of automaticity in visual word processing

    Science.gov (United States)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  1. Real-time multi-task operators support system

    International Nuclear Information System (INIS)

    Wang He; Peng Minjun; Wang Hao; Cheng Shouyu

    2005-01-01

    The development in computer software and hardware technology and information processing, as well as the accumulated experience in design and feedback from Nuclear Power Plant (NPP) operation, created a good opportunity to develop an integrated operator support system. The Real-time Multi-task Operator Support System (RMOSS) has been built to support the operator's decision-making process during normal and abnormal operations. RMOSS consists of five subtasks: Data Collection and Validation Task (DCVT), Operation Monitoring Task (OMT), Fault Diagnostic Task (FDT), Operation Guideline Task (OGT) and Human Machine Interface Task (HMIT). RMOSS uses a rule-based expert system and an Artificial Neural Network (ANN). The rule-based expert system is used to identify predefined events in static conditions and to track the operation guideline through data processing. In dynamic conditions, a Back-Propagation Neural Network trained with a Genetic Algorithm is adopted for fault diagnosis. The embedded real-time operating system VxWorks and its integrated environment Tornado II are used for RMOSS software cross-development. VxGUI is used to design the HMI. All of the task programs are written in the C language. Task tests and function evaluation of RMOSS have been carried out on a real-time full-scope simulator. Evaluation results show that each task of RMOSS is capable of accomplishing its functions. (authors)

  2. Computer hardware description languages - A tutorial

    Science.gov (United States)

    Shiva, S. G.

    1979-01-01

    The paper introduces hardware description languages (HDL) as useful tools for hardware design and documentation. The capabilities and limitations of HDLs are discussed along with the guidelines needed in selecting an appropriate HDL. The directions for future work are provided and attention is given to the implementation of HDLs in microcomputers.

  3. Support for NUMA hardware in HelenOS

    OpenAIRE

    Horký, Vojtěch

    2011-01-01

    The goal of this master thesis is to extend HelenOS operating system with the support for ccNUMA hardware. The text of the thesis contains a brief introduction to ccNUMA hardware, an overview of NUMA features and relevant features of HelenOS (memory management, scheduling, etc.). The thesis analyses various design decisions of the implementation of NUMA support -- introducing the hardware topology into the kernel data structures, propagating this information to user space, thread affinity to ...

  4. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  5. Software for Managing Inventory of Flight Hardware

    Science.gov (United States)

    Salisbury, John; Savage, Scott; Thomas, Shirman

    2003-01-01

    The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.

  6. A Review on Migration Methods in B-Scan Ground Penetrating Radar Imaging

    Directory of Open Access Journals (Sweden)

    Caner Özdemir

    2014-01-01

    Full Text Available Even though ground penetrating radar (GPR) has been well studied and applied by many researchers over the last couple of decades, the focusing problem in measured GPR images is still a challenging task. Although many methods have been offered by different scientists, there is no complete migration/focusing method that works perfectly for all scenarios. This paper reviews the popular migration methods for B-scan GPR imaging that have been widely accepted and applied by various researchers. Brief formulations and algorithm steps are presented for hyperbolic summation, the Kirchhoff migration, back-projection focusing, phase-shift migration, and the ω-k migration. The main aim of the paper is to evaluate and compare these migration algorithms so that the reader can decide which one to use for a particular GPR application. Both the simulated and the measured examples used for the performance comparison of the presented algorithms are provided. Other emerging migration methods are also pointed out.
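The simplest of the reviewed methods, hyperbolic summation, can be sketched directly: each image point accumulates the recorded amplitudes along its diffraction hyperbola. A minimal constant-velocity sketch (an illustration of the principle, not tied to any particular implementation from the review):

```python
import math

def hyperbolic_summation(bscan, dx, v, dt):
    """Focus a B-scan by summing amplitudes along diffraction hyperbolas.

    bscan[trace][sample] holds the recorded amplitude; dx is the trace
    spacing, v the (assumed constant) propagation velocity, dt the time
    step. For an image point below trace i0 at two-way time t0, the
    traveltime to a trace at index i is t = sqrt(t0^2 + (2*(i-i0)*dx/v)^2).
    """
    n_tr, n_t = len(bscan), len(bscan[0])
    image = [[0.0] * n_t for _ in range(n_tr)]
    for i0 in range(n_tr):
        for k0 in range(n_t):
            t0 = k0 * dt
            acc = 0.0
            for i in range(n_tr):
                t = math.sqrt(t0 ** 2 + (2 * (i - i0) * dx / v) ** 2)
                k = round(t / dt)
                if k < n_t:
                    acc += bscan[i][k]  # sum along the hyperbola
            image[i0][k0] = acc
    return image
```

Energy scattered by a point target, which appears as a hyperbola in the raw B-scan, collapses back to a bright spot at the target's position in the focused image.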

  7. Improvement of hardware basic testing : Identification and development of a scripted automation tool that will support hardware basic testing

    OpenAIRE

    Rask, Ulf; Mannestig, Pontus

    2002-01-01

    In the ever-increasing development pace, circuits and hardware are no exception. Hardware designs grow and circuits get more complex, while market pressure shortens the expected time-to-market. In this rush, verification methods often lag behind. Hardware manufacturers must be aware of the importance of total verification if they want to avoid quality flaws and broken deadlines, which in the long run will lead to delayed time-to-market, bad publicity and a decreasing market sha...

  8. The hardware track finder processor in CMS at CERN

    International Nuclear Information System (INIS)

    Kluge, A.

    1997-07-01

    The work covers the design of the Track Finder Processor in the high-energy experiment CMS at CERN/Geneva. The task of this processor is to identify muons and to measure their transverse momentum. The Track Finder makes it possible to determine the physical relevance of each high-energy collision and to forward only interesting data to the data analysis units. Data from more than two hundred thousand detector cells are used to determine the location of muons and to measure their transverse momentum. A new data set is generated every 25 ns. Measurement of the location and transverse momentum of the muons can be completed within 350 ns by using an ASIC. The classical method in high-energy physics experiments is to employ pattern comparison: predefined patterns are compared to the found patterns. The high number of data channels and the complex requirements on spatial detector resolution do not permit a pattern comparison method here. A so-called track-following algorithm was therefore designed, which is able to assemble complete tracks through the whole detector starting from single track segments. Instead of storing a high number of track patterns, the problem is brought back to the algorithm level. Comprehensive simulations, employing the hardware description language VHDL, were conducted in order to optimize the algorithm and its hardware implementation. An FPGA (field programmable gate array) prototype was designed, and a feasibility study to implement the track finder processor employing ASICs was conducted. (author)

  9. How to create successful Open Hardware projects - About White Rabbits and open fields

    CERN Document Server

    van der Bij, E; Lewis, J; Stana, T; Wlostowski, T; Gousiou, E; Serrano, J; Arruat, M; Lipinski, M M; Daniluk, G; Voumard, N; Cattin, M

    2013-01-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation was controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs that find their way into new fields.

  10. Computerized management of plant intervention tasks

    International Nuclear Information System (INIS)

    Quoidbach, G.

    2004-01-01

    The main objective of the 'Computerized Management of Plant Intervention Tasks' is to help the staff of a nuclear or conventional power plant, or of any other complex industrial facility (chemical industries, refineries, and so on), in planning, organizing, and carrying out any (preventive or corrective) maintenance task. This 'Computerized Management of Plant Intervention Tasks' is organized around a database of all plant components in the facility that might be subjected to maintenance or tagout. It makes it possible to manage, by means of an intelligent and configurable 'mail service', the course of intervention requests as well as the various treatments of those requests, in a safe and efficient way adapted to each particular organization. The concept of 'Computerized Management' of plant intervention tasks was developed by BELGATOM in 1983 for the Belgian nuclear power plants of ELECTRABEL. A first implementation of this concept was made at that time for the Doel NPP under the name POPIT (Programming Of Plant Intervention Tasks). In 1988, it was decided to proceed to a functional upgrade of the application, using advanced software and hardware techniques and products, and to realize a second implementation in the Tihange NPP under the name ACM (Application Consignation Maintenance). (author)

  11. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'.IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system.For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge:Equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.);Training of personnel designated by Division Leade...

  12. One Size Does Not Fit All: Human Failure Event Decomposition and Task Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Laurids Boring, PhD

    2014-09-01

    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered or exacerbated by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down—defined as a subset of the PSA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications. In this paper, I first review top-down and bottom-up approaches for defining HFEs and then present a seven-step guideline to ensure a task analysis completed as part of human error identification decomposes to a level suitable for use as HFEs. This guideline illustrates an effective way to bridge the bottom-up approach with top-down requirements.

  13. Quantitative elastic migration. Applications to 3D borehole seismic surveys; Migration elastique quantitative. Applications a la sismique de puits 3D

    Energy Technology Data Exchange (ETDEWEB)

    Clochard, V.

    1998-12-02

    3D VSP imaging is nowadays a strategic requirement for petroleum companies. It is used to describe in detail the geology close to the well. Because of the lack of redundancy and the limited coverage of the data, this kind of technology is more restrictive than surface seismic, which allows investigation at a larger scale. Our contribution was to develop a quantitative elastic imaging method (GRT migration) which can be applied to 3-component borehole datasets. The method is similar to the Kirchhoff migration, but uses sophisticated weighting of the seismic amplitudes. In practice, GRT migration uses pre-calculated Green functions (travel time, amplitude, polarization). The maps are obtained by 3D ray tracing (wavefront construction) in the velocity model. The migration algorithm works with elementary and independent tasks, which is useful for processing different kinds of datasets (fixed or moving geophone antenna). The method was validated against asymptotic analytical solutions. The reconstruction capability for 3D borehole surveys was tested on the Overthrust synthetic model. The application to a real circular 3D VSP highlights various problems, such as velocity model building, the anisotropy factor, and the preprocessing (deconvolution, wave-mode separation), which can destroy seismic amplitudes. An isotropic 3-component preprocessing of the whole dataset allows a better lateral reconstruction. The choice of a large migration aperture can help the reconstruction of strong geological dips in spite of migration smiles. Finally, the methodology can be applied to P-S converted waves. (author)

  14. Migration of ATLAS PanDA to CERN

    International Nuclear Information System (INIS)

    Stewart, Graeme Andrew; Klimentov, Alexei; Maeno, Tadashi; Nevski, Pavel; Nowak, Marcin; De Castro Faria Salgado, Pedro Emanuel; Wenaus, Torre; Koblitz, Birger; Lamanna, Massimo

    2010-01-01

    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system, which manages their execution on the grid. PanDA also plays a key role in production task definition and the data set replication request system. PanDA has recently been migrated from Brookhaven National Laboratory (BNL) to the European Organization for Nuclear Research (CERN), a process we describe here. We discuss how the new infrastructure for PanDA, which relies heavily on services provided by CERN IT, was introduced in order to make the service as reliable as possible and to allow it to be scaled to ATLAS's increasing need for distributed computing. The migration involved changing the backend database for PanDA from MySQL to Oracle, which impacted upon the database schemas. The process by which the client code was optimised for the new database backend is discussed. We describe the procedure by which the new database infrastructure was tested and commissioned for production use. Operations during the migration had to be planned carefully to minimise disruption to ongoing ATLAS offline computing. All parts of the migration were fully tested before commissioning the new infrastructure and the gradual migration of computing resources to the new system allowed any problems of scaling to be addressed.

  15. GOSH! A roadmap for open-source science hardware

    CERN Multimedia

    Stefania Pandolfi

    2016-01-01

    The goal of the Gathering for Open Science Hardware (GOSH! 2016), held from 2 to 5 March 2016 at IdeaSquare, was to lay the foundations of the open-source hardware for science movement.   The participants in the GOSH! 2016 meeting gathered in IdeaSquare. (Image: GOSH Community) “Despite advances in technology, many scientific innovations are held back because of a lack of affordable and customisable hardware,” says François Grey, a professor at the University of Geneva and coordinator of Citizen Cyberlab – a partnership between CERN, the UN Institute for Training and Research and the University of Geneva – which co-organised the GOSH! 2016 workshop. “This scarcity of accessible science hardware is particularly obstructive for citizen science groups and humanitarian organisations that don’t have the same economic means as a well-funded institution.” Instead, open sourcing science hardware co...

  16. Hardware Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists
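In the absorption-only regime mentioned above, the radiative transport solution reduces, per ray, to the Beer-Lambert law. A minimal scalar sketch of that per-ray computation (an illustration of the physics, independent of the paper's GPU projection algorithms):

```python
import math

def radiograph_pixel(mu_along_ray, ds):
    """Absorption-only simulated radiography for one detector pixel.

    The detected intensity fraction is I/I0 = exp(-integral of mu ds),
    discretized as a sum over the attenuation-coefficient samples
    mu_along_ray that the ray crosses, each of path length ds.
    """
    optical_depth = sum(mu_along_ray) * ds
    return math.exp(-optical_depth)

# A ray crossing two cells with mu = 0.5 at step ds = 1.0:
transmission = radiograph_pixel([0.5, 0.5], 1.0)  # exp(-1) ~ 0.368
```

The hardware-accelerated projection algorithms in the paper evaluate this same line integral, but for every pixel in parallel over hexahedral or tetrahedral meshes.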

  17. A preferential design approach for energy-efficient and robust implantable neural signal processing hardware.

    Science.gov (United States)

    Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup

    2009-01-01

    For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
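The kind of on-implant processing described above can be illustrated in software: a single-level Haar DWT followed by coefficient thresholding compresses a spike-like signal. This is a minimal software analogue of DWT-based compression, not the paper's hardware architecture; the threshold value is an illustrative choice.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: (approximation, detail) coefficients."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def compress(signal, threshold):
    """Keep only (index, coefficient) pairs above the threshold magnitude."""
    approx, detail = haar_dwt(signal)
    coeffs = approx + detail
    kept = [(i, c) for i, c in enumerate(coeffs) if abs(c) > threshold]
    return kept, len(coeffs)

# A neural "spike" riding on a flat baseline: most coefficients vanish,
# so only a few (index, value) pairs need to be stored or transmitted.
sig = [0.0] * 12 + [4.0, 8.0, 4.0, 0.0]
kept, total = compress(sig, 0.5)
```

Here 16 samples reduce to 4 significant coefficients, the kind of data reduction that makes real-time multi-channel transmission feasible on a power budget.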

  18. Automated personnel data base system specifications, Task V. Final report

    International Nuclear Information System (INIS)

    Bartley, H.J.; Bocast, A.K.; Deppner, F.O.; Harrison, O.J.; Kraas, I.W.

    1978-11-01

    The full title of this study is 'Development of Qualification Requirements, Training Programs, Career Plans, and Methodologies for Effective Management and Training of Inspection and Enforcement Personnel.' Task V required the development of an automated personnel data base system for NRC/IE. This system is identified as the NRC/IE Personnel, Assignment, Qualifications, and Training System (PAQTS). This Task V report provides the documentation for PAQTS, including the Functional Requirements Document (FRD), the Data Requirements Document (DRD), the Hardware and Software Capabilities Assessment, and the Detailed Implementation Schedule. Specific recommendations to facilitate implementation of PAQTS are also included.

  19. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors’ approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error masking characteristics and error (stemming from soft errors, aging, and process variations) mitigations potential at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  20. Hardware device to physical structure binding and authentication

    Science.gov (United States)

    Hamlet, Jason R.; Stein, David J.; Bauer, Todd M.

    2013-08-20

    Detection and deterrence of device tampering and subversion may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a binding of the hardware device and a physical structure. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generate an internal PUF value. Binding logic is coupled to receive the internal PUF value, as well as an external PUF value associated with the physical structure, and generates a binding PUF value, which represents the binding of the hardware device and the physical structure. The cryptographic fingerprint unit also includes a cryptographic unit that uses the binding PUF value to allow a challenger to authenticate the binding.
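
    The binding idea can be sketched abstractly: derive a binding value from the two PUF responses so that any change to either response changes the result. The hash-based combiner below is a hypothetical stand-in for the patent's binding logic, not a description of it:

```python
import hashlib
import hmac

def binding_puf(internal_puf: bytes, external_puf: bytes) -> bytes:
    # Combine the device PUF and structure PUF responses; tampering with
    # either response yields a different binding value.
    return hashlib.sha256(internal_puf + external_puf).digest()

def respond(binding_value: bytes, challenge: bytes) -> bytes:
    # The cryptographic unit answers a challenger using the binding value
    # as key material (HMAC is an illustrative choice here).
    return hmac.new(binding_value, challenge, hashlib.sha256).digest()

bound = binding_puf(b"\x13\x37" * 16, b"\xca\xfe" * 16)
moved = binding_puf(b"\x13\x37" * 16, b"\xba\xad" * 16)  # structure changed
assert bound != moved  # the device no longer authenticates against the old structure
```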

  1. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions-and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor, Andrew Robinson and Rasp

  2. Intersection points for the driving of applier processes of the hardware control of the ZEUS forward detector

    International Nuclear Information System (INIS)

    Siemon, T.

    1992-08-01

    The ZEUS forward detector is built of drift- and transition-radiation chambers which are supported by many peripheral devices. The resulting complex system has to be monitored and controlled continuously to preserve safety and to achieve optimal performance. For this task a Hardware-Control-System (HWC) has been developed. Ten VME and OS9-based microprocessors which are connected by Ethernet and VME-bus are provided to run the control and monitoring tasks. Special attention has been paid to the development of efficient user-interfaces: RDT, an object-oriented database-toolkit, serves as an interface to the data of the HWC. The concept and the usage of this interface are outlined. Finally special features that may be useful for other applications are discussed. (orig.) [de

  3. 4273π: bioinformatics education on low cost ARM hardware.

    Science.gov (United States)

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  4. How to create successful Open Hardware projects — About White Rabbits and open fields

    International Nuclear Information System (INIS)

    Bij, E van der; Arruat, M; Cattin, M; Daniluk, G; Cobas, J D Gonzalez; Gousiou, E; Lewis, J; Lipinski, M M; Serrano, J; Stana, T; Voumard, N; Wlostowski, T

    2013-01-01

    CERN's accelerator control group has embraced ''Open Hardware'' (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs, which find their way into new fields.

  5. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  6. Hardware-Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S.; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester.
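
    In the absorption-only regime each detector pixel records an attenuated intensity along its ray, I = I0 * exp(-sum(mu_i * dl_i)). A scalar sketch of that accumulation follows (attenuation coefficients and path lengths are invented; the GPU projection of mesh cells onto rays is omitted):

```python
import math

def pixel_intensity(i0, segments):
    """Attenuate source intensity i0 along one ray.

    segments: (mu, dl) pairs, one per mesh cell the ray crosses, where mu is
    the attenuation coefficient and dl the path length inside the cell.
    """
    optical_depth = sum(mu * dl for mu, dl in segments)
    return i0 * math.exp(-optical_depth)

# Two cells crossed by the ray: total optical depth 0.5*1.0 + 2.0*0.25 = 1.0.
ray = [(0.5, 1.0), (2.0, 0.25)]
assert math.isclose(pixel_intensity(1.0, ray), math.exp(-1.0))
```

    Because the sum is order-independent, the absorption-only case tolerates the unsorted hexahedron traversal described above; the emissive case does not, hence the sorted tetrahedral algorithm.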

  7. Effects of humic substances on the migration of radionuclides: Complexation of actinides with humic substances. 3. Progress report

    International Nuclear Information System (INIS)

    Kim, J.I.; Rhee, D.S.; Buckau, G.; Moulin, V.; Tits, J.; Decambox, P.; Franz, C.; Herrmann, G.; Trautmann, N.; Dierckx, A.; Vancluysen, J.; Maes, A.

    1993-03-01

    The aim of the present research programme is to study the complexation behaviour of actinide ions with humic substances in natural aquifer systems and hence to quantify the effect of humic substances on the actinide migration. Aquatic humic substances commonly found in all groundwaters in different concentrations have a strong tendency towards complexation with actinide ions. This is one of the major geochemical reactions but hitherto least quantified. Therefore, the effect of humic substances on the actinide migration is poorly understood. In the present research programme the complexation of actinide ions with humic substances will be described thermodynamically. This description will be based on a model being as simple as possible to allow an easy introduction of the resulting reaction constants into geochemical modelling of the actinide migration. This programme is a continuation of the activities of the COCO group in the second phase of the CEC-MIRAGE project. The programme consists of the following three main tasks: Task 1: Complexation reactions of actinide ions with well characterized reference and site-specific humic and fulvic acids; Task 2: Competition reactions with major cations in natural groundwaters; Task 3: Validation of the complexation data in natural aquatic systems by comparison of calculation with spectroscopic experiment. (orig./EF)

  8. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for the I and C systems of the SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications of KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with other systems, and data communication requirements that are applicable to the SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. Through the study, we described the requirements and specifications of digital systems focusing on the microprocessor and the communication interface, and repeated this for analog systems focusing on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of KNICS.

  9. Autonomous target tracking of UAVs based on low-power neural network hardware

    Science.gov (United States)

    Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe

    2014-05-01

    Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements, while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs, and its performance and power advantages for real-time, autonomous target tracking.
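
    The chip performs nearest-neighbor style pattern matching across stored prototypes in parallel; the following is a purely illustrative software analogue of that recognition style (not the CogniMem interface, and the feature vectors and labels are invented):

```python
def l1_distance(a, b):
    """Manhattan distance between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(prototypes, vector):
    """Return the label of the stored prototype nearest to the input vector."""
    return min(prototypes, key=lambda p: l1_distance(p[0], vector))[1]

# Tiny feature vectors standing in for, e.g., image patch descriptors.
prototypes = [
    ([0, 0, 0, 0], "background"),
    ([9, 9, 9, 9], "target"),
]
assert classify(prototypes, [8, 9, 7, 9]) == "target"
```

    In hardware, every prototype comparison happens simultaneously, which is what makes this style of recognition attractive under SWaP constraints.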

  10. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead together with the high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy which is based on the cloud model time series workload prediction algorithm. By setting the upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host machine, carrying out the task of the VM migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.
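
    The workload-aware criterion can be sketched as follows; the moving-average forecast below is a simple stand-in for the paper's cloud-model predictor, and the bounds and host data are invented:

```python
def predict(history, window=3):
    """Forecast the next workload as a moving average of recent samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_migration(hosts, lower=0.2, upper=0.8):
    """hosts: {name: workload history in [0, 1]}.

    Returns (source, destination) for one VM migration, or None. Deciding on
    the *forecast*, not the latest sample, avoids reacting to momentary peaks.
    """
    loads = {h: predict(ts) for h, ts in hosts.items()}
    overloaded = [h for h, l in loads.items() if l > upper]
    underloaded = [h for h, l in loads.items() if l < lower]
    if not overloaded or not underloaded:
        return None
    return max(overloaded, key=loads.get), min(underloaded, key=loads.get)

hosts = {
    "h1": [0.85, 0.90, 0.95],  # persistently hot: migration source
    "h2": [0.10, 0.10, 0.15],  # persistently idle: migration destination
    "h3": [0.30, 0.95, 0.30],  # momentary peak: left alone
}
assert plan_migration(hosts) == ("h1", "h2")
```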

  11. Records of Migration in the Exoplanet Configurations

    Science.gov (United States)

    Michtchenko, Tatiana A.; Rodriguez Colucci, A.; Tadeu Dos Santos, M.

    2013-05-01

    When compared to our Solar System, many exoplanet systems exhibit quite unusual planet configurations; some of these are hot Jupiters, which orbit their central stars with periods of a few days, others are resonant systems composed of two or more planets with commensurable orbital periods. It has been suggested that these configurations can be the result of migration processes originated by tidal interactions of the planets with disks and central stars. The process known as planet migration occurs due to dissipative forces which affect the planetary semi-major axes and cause the planets to move towards, or away from, the central star. In this talk, we present possible signatures of planet migration in the distribution of the hot Jupiters and resonant exoplanet pairs. For this task, we develop a semi-analytical model to describe the evolution of the migrating planetary pair, based on the fundamental concepts of conservative and dissipative dynamics of the three-body problem. Our approach is based on an analysis of the energy and the orbital angular momentum exchange between the two-planet system and an external medium; thus no specific kind of dissipative force needs to be invoked. We show that, under the assumption that dissipation is weak and slow, the evolutionary routes of the migrating planets are traced by the stationary solutions of the conservative problem (Birkhoff, Dynamical Systems, 1966). The ultimate convergence and the evolution of the system along one of these modes of motion are determined uniquely by the condition that the dissipation rate is sufficiently smaller than the proper frequencies of the system. We show that it is possible to reassemble the starting configurations and migration history of the systems on the basis of their final states, and consequently to constrain the parameters of the physical processes involved.

  12. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  13. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  14. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  15. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. The document describes the step-by-step process of image data being received at LLNL, then being processed and made available to authorized personnel and collaborators. Throughout this document references will be made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  16. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows... the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...

  17. INFLUENCE OF LABOUR MIGRATION PROCESSES ON THE QUALITY OF HUMAN CAPITAL OF THE RUSSIAN FEDERATION

    Directory of Open Access Journals (Sweden)

    Victor M. Volodin

    2017-01-01

    Full Text Available Abstract. Objectives The aim of the study is to identify the impact of labour migration processes on the quality of human capital. Methods For researching the methods and forms of migration capital’s impact on the formation of a new quality of human capital, a systematic approach is applied. In order to identify the imbalance in the distribution of labour resources among the regions of the Russian Federation and to assess migration processes, analytical and synthetic as well as statistical and comparative methods were applied. In order to help to visualise the identified economic and statistical dependencies, graphic images are provided. Results The essence of migration processes in the era of economic turbulence is revealed. The main share of labour migrants in the overall structure of migratory flows to the Russian Federation are labour migrants from the CIS countries; the main reasons for this situation are established. The factors of Russia’s high migration attractiveness are identified and the basic migratory process management tasks are defined. Among the main tasks of migration process management are: ensuring the national security of the Russian Federation; preservation, maintenance and improvement of comfort, well-being and quality of life of the Russian Federation population; solving the problems of stability and growth of the permanent population of the Russian Federation; creating conditions for the full satisfaction of the high-quality labour resource needs of the Russian economy, attracting labour migrants from highly developed countries; formation of conditions for the transition to sustainable development based on the introduction of scientific and technological progress and the creation of competitive industries. Conclusion The paper suggests three scenarios for the development of migration processes in Russia: inertial, realistic and optimistic. Improvements in the quality of migration capital can be achieved through the

  18. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  19. When do letter features migrate? A boundary condition for feature-integration theory.

    Science.gov (United States)

    Butler, B E; Mewhort, D J; Browse, R A

    1991-01-01

    Feature-integration theory postulates that a lapse of attention will allow letter features to change position and to recombine as illusory conjunctions (Treisman & Paterson, 1984). To study such errors, we used a set of uppercase letters known to yield illusory conjunctions in each of three tasks. The first, a bar-probe task, showed whole-character mislocations but not errors based on feature migration and recombination. The second, a two-alternative forced-choice detection task, allowed subjects to focus on the presence or absence of subletter features and showed illusory conjunctions based on feature migration and recombination. The third was also a two-alternative forced-choice detection task, but we manipulated the subjects' knowledge of the shape of the stimuli: In the case-certain condition, the stimuli were always in uppercase, but in the case-uncertain condition, the stimuli could appear in either upper- or lowercase. Subjects in the case-certain condition produced illusory conjunctions based on feature recombination, whereas subjects in the case-uncertain condition did not. The results suggest that when subjects can view the stimuli as feature groups, letter features regroup as illusory conjunctions; when subjects encode the stimuli as letters, whole items may be mislocated, but subletter features are not. Thus, illusory conjunctions reflect the subject's processing strategy, rather than the architecture of the visual system.

  20. Architecture and development of the CDF hardware event builder

    International Nuclear Information System (INIS)

    Shaw, T.M.; Booth, A.W.; Bowden, M.

    1989-01-01

    A hardware Event Builder (EVB) has been developed for use at the Collider Detector at Fermilab (CDF) experiment. The Event Builder presently consists of five FASTBUS modules and has the task of reading out the front end scanners, reformatting the data into YBOS bank structure, and transmitting the data to a Level 3 (L3) trigger system which is composed of multiple VME processing nodes. The Event Builder receives its instructions from a VAX-based Buffer Manager (BFM) program via a Unibus Processor Interface (UPI). The Buffer Manager instructs the Event Builder to read out one of the four CDF front end buffers. The Event Builder then informs the Buffer Manager when the event has been formatted and is then instructed to push it up to the L3 trigger system. Once in the L3 system, a decision is made as to whether to write the event to tape.

  1. Return migration: changing roles of men and women.

    Science.gov (United States)

    Sakka, D; Dikaiou, M; Kiosseoglou, G

    1999-01-01

    This article addresses changes in gender roles among returning migrant families. It focuses on Greek returnees from the Federal Republic of Germany and explores changes in task sharing behavior and gender role attitudes resulting from changes in the sociocultural environments. A group of return migrants was compared with a group of non-migrants, both living in villages in the District of Drama, Greece. Groups were interviewed to investigate the extent to which each spouse shared house tasks, as well as their attitudes towards sharing and gender roles in the family. The t-test for independent samples was used to determine mean differences between the two groups. In addition to demographic variables, those concerning the "time lived abroad" and the "number of years in Greece" after return were inserted into a series of regression analyses. Findings showed that migrants' task sharing and gender role attitudes were influenced differently by the migration-repatriation experience and subsequent cultural alternation. Results also suggest that migrant couples either take on new patterns of behavior or maintain traditional ones only when these were congruent with the financial aims of the family or could be integrated into living conditions in Greece upon return. Furthermore, migrants seem to adopt a more "traditional" attitude than non-migrants toward the participation of women in family decision making. The study suggests that gender role change is an on-going process influenced by the migration-repatriation experience, as well as by the factors that accompany movement between the two countries.

  2. Hardware/software virtualization for the reconfigurable multicore platform.

    NARCIS (Netherlands)

    Ferger, M.; Al Kadi, M.; Hübner, M.; Koedam, M.L.P.J.; Sinha, S.S.; Goossens, K.G.W.; Marchesan Almeida, Gabriel; Rodrigo Azambuja, J.; Becker, Juergen

    2012-01-01

    This paper presents the Flex Tiles approach for the virtualization of hardware and software for a reconfigurable multicore architecture. The approach enables the virtualization of a dynamic tile-based hardware architecture consisting of processing tiles connected via a network-on-chip and a

  3. Design of an expert system application for diagnosing computer hardware damage using the forward chaining method (original title in Indonesian: PERANCANGAN APLIKASI SISTEM PAKAR DIAGNOSA KERUSAKAN HARDWARE KOMPUTER METODE FORWARD CHAINING)

    Directory of Open Access Journals (Sweden)

    Ali Akbar Rismayadi

    2016-09-01

    Full Text Available Damage to computer hardware is rarely a disaster, because most of it can be repaired. Nearly all computer users, whether individuals or institutions, experience various kinds of damage to the computer hardware they own, and that damage can be caused by many factors which the user, not knowing the cause, cannot identify. It is therefore useful to build an application that helps users diagnose damage to their computer hardware, so that anyone can determine the type of hardware fault on their own. The expert system for diagnosing computer hardware damage was developed using the forward chaining method, applying descriptive analysis to damage data obtained from several experts and other literature sources to reach a diagnostic conclusion, and following the waterfall model for system development, from the analysis stage through to software support. The application is built with the Eclipse ADT programming tools and SQLite as its database. This expert system for diagnosing computer hardware damage is expected to serve as a tool that helps users find the causes of hardware damage independently, without the help of a computer technician.
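
    Forward chaining fires rules whose conditions are satisfied until no new facts emerge; a minimal sketch of that inference loop follows, with symptom and fault names invented for illustration (not taken from the application itself):

```python
# Each rule: (set of required facts, fact concluded when all are present).
RULES = [
    ({"no_power", "fan_silent"}, "psu_fault"),
    ({"beeps", "no_display"}, "ram_fault"),
    ({"psu_fault"}, "suggest_replace_psu"),
]

def forward_chain(observed):
    """Repeatedly fire rules until a fixed point, returning all derived facts."""
    facts = set(observed)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

diagnosis = forward_chain({"no_power", "fan_silent"})
assert "psu_fault" in diagnosis and "suggest_replace_psu" in diagnosis
```

    Note how the third rule chains off the conclusion of the first: the diagnosis proceeds from observed symptoms to faults to recommendations, which is the defining behavior of the forward chaining method.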

  4. The CEBAF accelerator control system: migrating from a TACL to an EPICS based system

    International Nuclear Information System (INIS)

    Watson, W.A. III; Barker, David; Bickley, Matthew; Gupta, Pratik; Johnson, R.P.

    1994-01-01

    CEBAF is in the process of migrating its accelerator and experimental hall control systems to one based upon EPICS, a control system toolkit developed by a collaboration among several DOE laboratories in the US. The new system design interfaces existing CAMAC hardware via a CAMAC serial highway to VME-based I/O controllers running the EPICS software; future additions and upgrades will for the most part go directly into VME. The decision to use EPICS followed difficulties in scaling the CEBAF-developed TACL system to full machine operation. TACL and EPICS share many design concepts, facilitating the conversion of software from one toolkit to the other. In particular, each supports graphical entry of algorithms built up from modular code, graphical displays with a display editor, and a client-server architecture with name-based I/O. During the migration, TACL and EPICS will interoperate through a socket-based I/O gateway. As part of a collaboration with other laboratories, CEBAF will add relational database support for system management and high level applications support. Initial experience with EPICS is presented, along with a plan for the full migration, which is expected to be finished next year. (orig.)

  5. REVEAL: Software Documentation and Platform Migration

    Science.gov (United States)

    Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.

    2008-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validates the efforts.

  6. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  7. Pre-Flight Tests with Astronauts, Flight and Ground Hardware, to Assure On-Orbit Success

    Science.gov (United States)

    Haddad Michael E.

    2010-01-01

    On-Orbit Constraints Testing (OOCT) refers to mating flight hardware together on the ground before it is mated on-orbit or on the Lunar surface. The concept seems simple, but such operations can be difficult to perform on the ground when the flight hardware is designed to be mated in the zero-g/vacuum environment of space or the low-g/vacuum environment of the Lunar/Mars surface. Moreover, some items are manufactured years apart, which raises the question of how mating tasks are performed when one piece is already on-orbit or on the Lunar/Mars surface before its mating piece has been built. Both the Intra-Vehicular Activity (IVA) and Extra-Vehicular Activity (EVA) OOCTs performed at Kennedy Space Center are presented in this paper. Details include how OOCTs should mimic on-orbit/Lunar/Mars surface operational scenarios; a series of photographs taken during OOCTs performed on International Space Station (ISS) flight elements; and lessons learned as a result of the OOCTs. The paper concludes with possible applications to the Moon and Mars surface operations planned for the Constellation Program.

  8. Speed challenge: a case for hardware implementation in soft-computing

    Science.gov (United States)

    Daud, T.; Stoica, A.; Duong, T.; Keymeulen, D.; Zebulum, R.; Thomas, T.; Thakoor, A.

    2000-01-01

    For over a decade, JPL has been actively involved in soft computing research on theory, architecture, applications, and electronics hardware. The driving force in all our research activities, in addition to the potential enabling technology promise, has been creation of a niche that imparts orders of magnitude speed advantage by implementation in parallel processing hardware with algorithms made especially suitable for hardware implementation. We review our work on neural networks, fuzzy logic, and evolvable hardware with selected application examples requiring real time response capabilities.

  9. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  10. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  11. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to

  12. Depth migration and de-migration for 3-D migration velocity analysis; Migration profondeur et demigration pour l'analyse de vitesse de migration 3D

    Energy Technology Data Exchange (ETDEWEB)

    Assouline, F.

    2001-07-01

    3-D seismic imaging of complex geologic structures requires the use of pre-stack imaging techniques, the post-stack ones being unsuitable in that case. Indeed, pre-stack depth migration is a technique which allows complex structures to be imaged accurately, provided that a sufficiently accurate subsurface velocity model is available. The determination of this velocity model is thus a key element of seismic imaging, and to this end, migration velocity analysis methods have attracted considerable interest. The SMART method is a specific migration velocity analysis method: its distinguishing feature is that it does not rely on any restrictive assumptions about the complexity of the velocity model to be determined. The SMART method uses a detour through the pre-stack depth-migrated domain to extract multi-offset kinematic information that is hardly accessible in the time domain. Once the pre-stack depth-migrated seismic data have been interpreted, a kinematic de-migration technique applied to the interpreted events yields a consistent kinematic database (i.e. reflection travel-times). The inversion of these travel-times, by means of reflection tomography, then allows the determination of an accurate velocity model. To be able to image geologic structures for which the 3-D character is predominant, we have studied the implementation of migration velocity analysis in 3-D in the context of the SMART method and, more generally, developed techniques to overcome the difficulties intrinsic to the 3-D aspects of seismic imaging. Indeed, although formally the SMART method can be applied directly to 3-D complex structures, a feasible implementation requires a careful choice of imaging domain. Once this choice is made, it is also necessary to devise a method that, via the associated de-migration, yields the reflection travel-times. We first consider the offset domain, which remains the most commonly used strategy today.

  13. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
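    The global alignment with traceback that the architecture accelerates can be illustrated with a plain software reference, a standard Needleman–Wunsch sketch. The scoring values are illustrative assumptions, and the FPGA design naturally differs substantially from this sequential version:

    ```python
    # Software reference for global sequence alignment with traceback
    # (Needleman-Wunsch). The paper accelerates the forward scan and the
    # traceback in hardware; this sketch shows the underlying algorithm.

    def align(a, b, match=1, mismatch=-1, gap=-1):
        n, m = len(a), len(b)
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        # Forward scan: fill the dynamic-programming matrix.
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
        # Traceback from the bottom-right cell to recover one optimal alignment.
        out_a, out_b = [], []
        i, j = n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
                out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
            elif i > 0 and score[i][j] == score[i-1][j] + gap:
                out_a.append(a[i-1]); out_b.append('-'); i -= 1
            else:
                out_a.append('-'); out_b.append(b[j-1]); j -= 1
        return ''.join(reversed(out_a)), ''.join(reversed(out_b)), score[n][m]

    print(align("GATTACA", "GCATGCU"))
    ```

    The O(n·m) matrix fill is what the 256 processing elements parallelize along anti-diagonals; the space-efficient traceback is the part that most designs omit.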

  14. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  15. Hippocampal Astrocytes in Migrating and Wintering Semipalmated Sandpiper Calidris pusilla.

    Science.gov (United States)

    Carvalho-Paulo, Dario; de Morais Magalhães, Nara G; de Almeida Miranda, Diego; Diniz, Daniel G; Henrique, Ediely P; Moraes, Isis A M; Pereira, Patrick D C; de Melo, Mauro A D; de Lima, Camila M; de Oliveira, Marcus A; Guerreiro-Diniz, Cristovam; Sherry, David F; Diniz, Cristovam W P

    2017-01-01

    Seasonal migratory birds return to the same breeding and wintering grounds year after year, and migratory long-distance shorebirds are good examples of this. These tasks require learning and long-term spatial memory abilities that are integrated into a navigational system for repeatedly locating breeding, wintering, and stopover sites. Previous investigations focused on the neurobiological basis of hippocampal plasticity and numerical estimates of hippocampal neurogenesis in birds, but only a few studies investigated potential contributions of glial cells to hippocampal-dependent tasks related to migration. Here we hypothesized that the astrocytes of migrating and wintering birds may exhibit significant morphological and numerical differences connected to the long-distance flight. We used as a model the semipalmated sandpiper Calidris pusilla, which migrates from northern Canada and Alaska to South America. Before the transatlantic non-stop long-distance component of their flight, the birds make a stopover at the Bay of Fundy in Canada. To test our hypothesis, we estimated total numbers and compared the three-dimensional (3-D) morphological features of adult C. pusilla astrocytes captured in the Bay of Fundy (n = 249 cells) with those from birds captured in the coastal region of Bragança, Brazil, during the wintering period (n = 250 cells). The optical fractionator was used to estimate the number of astrocytes, and for 3-D reconstructions we used hierarchical cluster analysis. Both morphological phenotypes showed reduced morphological complexity after the long-distance non-stop flight, but the reduction in complexity was much greater in Type I than in Type II astrocytes. Consistently, we also found a significant reduction in the total number of astrocytes after the transatlantic flight. Taken together these findings suggest that the long-distance non-stop flight significantly altered the astrocyte population and that morphologically distinct astrocytes may play

  16. Hippocampal Astrocytes in Migrating and Wintering Semipalmated Sandpiper Calidris pusilla

    Directory of Open Access Journals (Sweden)

    Dario Carvalho-Paulo

    2018-01-01

    Full Text Available Seasonal migratory birds return to the same breeding and wintering grounds year after year, and migratory long-distance shorebirds are good examples of this. These tasks require learning and long-term spatial memory abilities that are integrated into a navigational system for repeatedly locating breeding, wintering, and stopover sites. Previous investigations focused on the neurobiological basis of hippocampal plasticity and numerical estimates of hippocampal neurogenesis in birds but only a few studies investigated potential contributions of glial cells to hippocampal-dependent tasks related to migration. Here we hypothesized that the astrocytes of migrating and wintering birds may exhibit significant morphological and numerical differences connected to the long-distance flight. We used as a model the semipalmated sandpiper Calidris pusilla, which migrates from northern Canada and Alaska to South America. Before the transatlantic non-stop long-distance component of their flight, the birds make a stopover at the Bay of Fundy in Canada. To test our hypothesis, we estimated total numbers and compared the three-dimensional (3-D) morphological features of adult C. pusilla astrocytes captured in the Bay of Fundy (n = 249 cells) with those from birds captured in the coastal region of Bragança, Brazil, during the wintering period (n = 250 cells). Optical fractionator was used to estimate the number of astrocytes and for 3-D reconstructions we used hierarchical cluster analysis. Both morphological phenotypes showed reduced morphological complexity after the long-distance non-stop flight, but the reduction in complexity was much greater in Type I than in Type II astrocytes. Coherently, we also found a significant reduction in the total number of astrocytes after the transatlantic flight. Taken together these findings suggest that the long-distance non-stop flight altered significantly the astrocytes population and that morphologically distinct astrocytes

  17. Learning Machines Implemented on Non-Deterministic Hardware

    OpenAIRE

    Gupta, Suyog; Sindhwani, Vikas; Gopalakrishnan, Kailash

    2014-01-01

    This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay -- for the most part -- oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design methodology advocated here hinges on the deployment of compute-intensive machine learning kernels onto compute platforms that trade-off deter...

  18. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    OpenAIRE

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM’s mul...
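    A minimal weighted round-robin dispatch, the baseline on which the improved algorithm builds, can be sketched as follows. The VM names and weights are illustrative assumptions; the paper's improved variant additionally accounts for task dependencies and initial placement of nonpreemptive tasks:

    ```python
    # Sketch of weighted round-robin VM selection: each VM receives turns
    # in proportion to its weight (e.g. its normalized processing capacity).
    # VM names and weights below are illustrative assumptions.

    from itertools import cycle

    def build_schedule(vm_weights):
        """Expand the weights into a repeating dispatch order: a VM with
        weight w receives w consecutive turns per round."""
        order = [vm for vm, w in vm_weights.items() for _ in range(w)]
        return cycle(order)

    # Dispatch eight incoming tasks across two VMs with a 3:1 capacity ratio.
    schedule = build_schedule({"vm1": 3, "vm2": 1})
    assignments = [next(schedule) for _ in range(8)]
    print(assignments)  # vm1 receives three tasks for every one sent to vm2
    ```

    Interleaving the turns (rather than granting them consecutively) and reweighting by current VM load are the kinds of refinements an "improved" WRR typically adds.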

  19. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in such domains like surveillance or smart environments. This paper provides a development of a multiple camera setup with jointed view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among different views. The expensive computational parts of the tracker are hardware accelerated via a novel system-on-chip (SoC design. In conjunction with this vision application, a hardware object request broker (ORB middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, time traversing the TCP/IP stack, in both software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes will considerably reduce the network latency up to 100 times compared to the software ORB.

  20. Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.

    Science.gov (United States)

    Minkovich, Kirill; Srinivasa, Narayan; Cruz-Albrecht, Jose M; Cho, Youngkwan; Nogin, Aleksey

    2012-06-01

    Scalability and connectivity are two key challenges in designing neuromorphic hardware that can match biological levels. In this paper, we describe a neuromorphic system architecture design that addresses an approach to meet these challenges using traditional complementary metal-oxide-semiconductor (CMOS) hardware. A key requirement in realizing such neural architectures in hardware is the ability to automatically configure the hardware to emulate any neural architecture or model. The focus for this paper is to describe the details of such a programmable front-end. This programmable front-end is composed of a neuromorphic compiler and a digital memory, and is designed based on the concept of synaptic time-multiplexing (STM). The neuromorphic compiler automatically translates any given neural architecture to hardware switch states and these states are stored in digital memory to enable desired neural architectures. STM enables our proposed architecture to address scalability and connectivity using traditional CMOS hardware. We describe the details of the proposed design and the programmable front-end, and provide examples to illustrate its capabilities. We also provide perspectives for future extensions and potential applications.

  1. 2D to 3D conversion implemented in different hardware

    Science.gov (United States)

    Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli

    2015-02-01

    Conversion of available 2D data for release as 3D content is a hot topic for providers and for the success of 3D applications in general. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation but remains a very difficult problem when novel images must be rendered precisely. Different approaches to DM reconstruction exist, among them manual and semiautomatic methods that can produce high-quality DMs but are time consuming and computationally expensive. In this paper, several hardware implementations of frameworks designed for automatic 3D color video generation from a real 2D video sequence are proposed. The framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion; stereo matching via a pyramidal scheme; color segmentation by k-means on the a*b* color plane; DM estimation using stereo matching between left and right images (or neighboring frames in a video); adaptive post-filtering; and finally, anaglyph 3D scene generation. The technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink on a PC with Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The processing times and the mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
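    The final anaglyph-generation step of the pipeline above can be sketched as follows. Representing images as plain nested lists of (R, G, B) tuples is an illustrative simplification of what the DSP/GPU implementations do per pixel:

    ```python
    # Sketch of red-cyan anaglyph generation, the last block of the
    # 2D-to-3D pipeline: take the red channel from the left view and the
    # green/blue channels from the right (synthesized) view.
    # Images here are nested lists of (R, G, B) tuples, an illustrative
    # simplification of the real frame buffers.

    def make_anaglyph(left, right):
        return [
            [(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)
        ]

    left = [[(200, 10, 10), (180, 20, 20)]]
    right = [[(50, 120, 130), (60, 110, 140)]]
    print(make_anaglyph(left, right))  # [[(200, 120, 130), (180, 110, 140)]]
    ```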

  2. A hardware architecture for real-time shadow removal in high-contrast video

    Science.gov (United States)

    Verdugo, Pablo; Pezoa, Jorge E.; Figueroa, Miguel

    2017-09-01

    Broadcasting an outdoor sports event in daytime is a challenging task because of the high contrast between shaded and sunlit areas within the same scene. Commercial cameras typically do not handle the high dynamic range of such scenes properly, resulting in broadcast streams with very little shadow detail. We propose a hardware architecture for real-time shadow removal in high-resolution video, which reduces the shadow effect and simultaneously improves shadow detail. The algorithm operates only on the shadow portions of each video frame, thus improving the results and producing more realistic images than algorithms that operate on the entire frame, such as simplified Retinex and histogram shifting. The architecture receives an input in the RGB color space, transforms it into the YIQ space, and uses color information from both spaces to produce a mask of the shadow areas present in the image. The mask is then filtered using a connected-components algorithm to eliminate false positives and negatives. The hardware uses pixel information at the edges of the mask to estimate the illumination ratio between light and shadow in the image, which is then used to correct the shadow area. Our prototype implementation simultaneously processes up to 7 video streams of 1920×1080 pixels at 60 frames per second on a Xilinx Kintex-7 XC7K325T FPGA.

  3. Proof-Carrying Hardware: Concept and Prototype Tool Flow for Online Verification

    OpenAIRE

    Drzevitzky, Stephanie; Kastens, Uwe; Platzner, Marco

    2010-01-01

    Dynamically reconfigurable hardware combines hardware performance with software-like flexibility and finds increasing use in networked systems. The capability to load hardware modules at runtime provides these systems with an unparalleled degree of adaptivity but at the same time poses new challenges for security and safety. In this paper, we elaborate on the presentation of proof carrying hardware (PCH) as a novel approach to reconfigurable system security. PCH takes ...

  4. Virtual reality hardware and graphic display options for brain-machine interfaces.

    Science.gov (United States)

    Marathe, Amar R; Carey, Holle L; Taylor, Dawn M

    2008-01-15

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target-matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing.

  5. Task-based data-acquisition optimization for sparse image reconstruction systems

    Science.gov (United States)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  6. The VMTG Hardware Description

    CERN Document Server

    Puccio, B

    1998-01-01

    The document describes the hardware features of the CERN Master Timing Generator. This board is the common platform for the transmission of General Timing Machine required by the CERN accelerators. In addition, the paper shows the various jumper options to customise the card which is compliant to the VMEbus standard.

  7. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors......, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. The proposed architecture provides excellent flexibility with respect to the different audio applications implemented, high quality audio, and an energy efficient solution....

  8. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performances achieved.
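    The GFSR recurrence behind such hardware is simple: each new word is the XOR of two earlier words at fixed lags. The lags (7, 3) and word size below are illustrative assumptions for compactness; hardware generators of this family typically use much longer registers, e.g. the classic R(250, 103) lags:

    ```python
    # Minimal GFSR (generalized feedback shift register) sketch:
    # x[n] = x[n-p] XOR x[n-q]. Lags (7, 3) and the 16-bit word size are
    # illustrative; real hardware uses far longer registers for quality.

    def gfsr(seed_words, p=7, q=3, bits=16):
        """Yield pseudo-random words from the two-tap XOR recurrence."""
        assert len(seed_words) == p and p > q > 0
        state = list(seed_words)
        while True:
            new = (state[-p] ^ state[-q]) & ((1 << bits) - 1)
            state.append(new)
            state.pop(0)  # keep only the last p words
            yield new

    gen = gfsr([1, 2, 4, 8, 16, 32, 64])
    print([next(gen) for _ in range(5)])  # → [17, 34, 68, 25, 50]
    ```

    In hardware the recurrence maps directly onto a shift register with one XOR gate per output bit, which is why GFSR generators are attractive for high-rate implementations.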

  9. Development of a Methodology to Conduct Usability Evaluation for Hand Tools that May Reduce the Amount of Small Parts that are Dropped During Installation while Processing Space Flight Hardware

    Science.gov (United States)

    Miller, Darcy

    2000-01-01

    Foreign object debris (FOD) is an important concern while processing space flight hardware. FOD can be defined as "The debris that is left in or around flight hardware, where it could cause damage to that flight hardware," (United Space Alliance, 2000). Just one small screw left unintentionally in the wrong place could delay a launch schedule while it is retrieved, increase the cost of processing, or cause a potentially fatal accident. At this time, there is not a single solution to help reduce the number of dropped parts such as screws, bolts, nuts, and washers during installation. Most of the effort is currently focused on training employees and on capturing the parts once they are dropped. Advances in ergonomics and hand tool design suggest that a solution may be possible, in the form of specialty hand tools, which secure the small parts while they are being handled. To assist in the development of these new advances, a test methodology was developed to conduct a usability evaluation of hand tools, while performing tasks with risk of creating FOD. The methodology also includes hardware in the form of a testing board and the small parts that can be installed onto the board during a test. The usability of new hand tools was determined based on efficiency and the number of dropped parts. To validate the methodology, participants were tested while performing a task that is representative of the type of work that may be done when processing space flight hardware. Test participants installed small parts using their hands and two commercially available tools. The participants were from three groups: (1) students, (2) engineers / managers and (3) technicians. The test was conducted to evaluate the differences in performance when using the three installation methods, as well as the difference in performance of the three participant groups.

  10. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents novel work on the hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo-random number generators. Digital implementation of chaotic systems results in serious degradation of the dynamics of the system. Such defects are mitigated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and their performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with hardware and throughput performance surpassing known encryption systems.
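    The stream-cipher idea can be illustrated with a discrete chaotic map standing in for the continuous systems used in the thesis. Everything below (the logistic map, parameter values, and byte extraction) is an illustrative assumption, not the thesis design, and a toy like this is not cryptographically secure:

    ```python
    # Illustrative chaos-based stream cipher: a logistic map drives a
    # byte keystream that is XORed with the data. This is a teaching toy,
    # not the thesis's actual (secure, hardware-realized) construction.

    def logistic_keystream(x0, r=3.99):
        """Iterate x <- r*x*(1-x) and extract one byte per step."""
        x = x0
        while True:
            x = r * x * (1 - x)
            yield int(x * 256) & 0xFF

    def crypt(data, key):
        """XOR data with the keystream; the same call decrypts."""
        ks = logistic_keystream(key)
        return bytes(b ^ next(ks) for b in data)

    msg = b"hardware"
    enc = crypt(msg, 0.4213)
    assert crypt(enc, 0.4213) == msg  # XOR stream cipher is symmetric
    ```

    The degradation the thesis addresses shows up exactly here: in fixed precision, such iterated maps fall into short cycles, which is why a post-processing stage is needed before the output is usable as a keystream.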

  11. Desenvolvimento de hardware reconfigurável de criptografia assimétrica

    Directory of Open Access Journals (Sweden)

    Otávio Souza Martins Gomes

    2015-01-01

Full Text Available This article presents partial results from the development of a reconfigurable hardware interface for asymmetric cryptography that enables secure data exchange. Reconfigurable hardware allows this kind of device to be developed with security and flexibility, and makes it possible to change design features quickly and at low cost. Keywords: Cryptography. Hardware. ElGamal. FPGA. Security.

  12. MRI monitoring of focused ultrasound sonications near metallic hardware.

    Science.gov (United States)

    Weber, Hans; Ghanouni, Pejman; Pascal-Tenorio, Aurea; Pauly, Kim Butts; Hargreaves, Brian A

    2018-07-01

To explore the temperature-induced signal change in two-dimensional multi-spectral imaging (2DMSI) for fast thermometry near metallic hardware to enable MR-guided focused ultrasound surgery (MRgFUS) in patients with implanted metallic hardware. 2DMSI was optimized for temperature sensitivity and applied to monitor focused ultrasound surgery (FUS) sonications near metallic hardware in phantoms and ex vivo porcine muscle tissue. Further, we evaluated its temperature sensitivity for in vivo muscle in patients without metallic hardware. In addition, we performed a comparison of temperature sensitivity between 2DMSI and conventional proton-resonance-frequency-shift (PRFS) thermometry at different distances from metal devices and different signal-to-noise ratios (SNR). 2DMSI thermometry enabled visualization of short ultrasound sonications near metallic hardware. Calibration using in vivo muscle yielded a constant temperature sensitivity for temperatures below 43 °C. For an off-resonance coverage of ± 6 kHz, we achieved a temperature sensitivity of 1.45%/K, resulting in a minimum detectable temperature change of ∼2.5 K for an SNR of 100 with a temporal resolution of 6 s per frame. The proposed 2DMSI thermometry has the potential to allow MR-guided FUS treatments of patients with metallic hardware and therefore expand its reach to a larger patient population. Magn Reson Med 80:259-271, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Conclusion Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.

  14. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    Science.gov (United States)

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.

  15. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  16. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

In order to collaborate engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  17. Analysis for Parallel Execution without Performing Hardware/Software Co-simulation

    OpenAIRE

    Muhammad Rashid

    2014-01-01

    Hardware/software co-simulation improves the performance of embedded applications by executing the applications on a virtual platform before the actual hardware is available in silicon. However, the virtual platform of the target architecture is often not available during early stages of the embedded design flow. Consequently, analysis for parallel execution without performing hardware/software co-simulation is required. This article presents an analysis methodology for parallel execution of ...

  18. Task-oriented quality assessment and adaptation in real-time mission critical video streaming applications

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2015-02-01

In recent years video traffic has become the dominant application on the Internet, with global year-on-year increases in video-oriented consumer services. Driven by improved bandwidth in both mobile and fixed networks, steadily falling hardware costs and the development of new technologies, many existing and new classes of commercial and industrial video applications are now being upgraded or emerging. Use cases for these applications include public and private security monitoring for loss prevention or intruder detection, industrial process monitoring and critical infrastructure monitoring. The use of video is becoming commonplace in defence, security, commercial, industrial, educational and health contexts. Toward optimal performance, the design or optimisation of each of these applications should be context-aware and task-oriented, with the characteristics of the video stream (frame rate, spatial resolution, bandwidth etc.) chosen to match the use case requirements. For example, in the security domain, a task-oriented consideration may be that higher resolution video would be required to identify an intruder than to simply detect his presence. Whilst in the same case, contextual factors, such as the requirement to transmit over a resource-limited wireless link, may impose constraints on the selection of optimum task-oriented parameters. This paper presents a novel, conceptually simple and easily implemented method of assessing video quality relative to its suitability for a particular task and dynamically adapting video streams during transmission to ensure that the task can be successfully completed. First, we defined two principal classes of tasks: recognition tasks and event detection tasks. These task classes are further subdivided into a set of task-related profiles, each of which is associated with a set of task-oriented attributes (minimum spatial resolution, minimum frame rate etc.). For example, in the detection class

  19. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), in this work a simulation model for fault injection is developed to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells, with the distribution over locations chosen randomly from a uniform probability distribution. Using this model, we predicted the reliability and masking effect of application software in a digital system, the Interposing Logic System (ILS), in a nuclear power plant. We considered four software operational profiles. The results show that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values according to the operational profile.
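The bit-flip injection idea can be illustrated with a minimal software sketch (a hypothetical toy application, not the paper's VHDL/ILS model): inject a single-bit fault into an input register and check whether the application output still matches the fault-free "golden" result.

```python
# Toy illustration of single bit-flip fault injection and software masking.
# (The paper injects into VHDL processor registers; names here are invented.)

def flip_bit(value, bit):
    """Inject a single bit-flip fault into a register value."""
    return value ^ (1 << bit)

def application(x):
    """Toy application: only the low byte of the input reaches the output,
    so faults in bits 8 and above are masked by the software."""
    return x & 0xFF

golden = application(0x1234)          # fault-free reference output
masked = failed = 0
for bit in range(16):                 # inject one fault per bit position
    result = application(flip_bit(0x1234, bit))
    if result == golden:
        masked += 1                   # fault did not propagate to the output
    else:
        failed += 1                   # fault visible at the output
# Faults in bits 8..15 are masked; faults in bits 0..7 propagate.
```

A real campaign repeats this over uniformly sampled locations and times, as in the paper, and estimates dependability from the masked/failed ratio per operational profile.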

  20. Instrument hardware and software upgrades at IPNS

    International Nuclear Information System (INIS)

    Worlton, Thomas; Hammonds, John; Mikkelson, D.; Mikkelson, Ruth; Porter, Rodney; Tao, Julian; Chatterjee, Alok

    2006-01-01

    IPNS is in the process of upgrading their time-of-flight neutron scattering instruments with improved hardware and software. The hardware upgrades include replacing old VAX Qbus and Multibus-based data acquisition systems with new systems based on VXI and VME. Hardware upgrades also include expanded detector banks and new detector electronics. Old VAX Fortran-based data acquisition and analysis software is being replaced with new software as part of the ISAW project. ISAW is written in Java for ease of development and portability, and is now used routinely for data visualization, reduction, and analysis on all upgraded instruments. ISAW provides the ability to process and visualize the data from thousands of detector pixels, each having thousands of time channels. These operations can be done interactively through a familiar graphical user interface or automatically through simple scripts. Scripts and operators provided by end users are automatically included in the ISAW menu structure, along with those distributed with ISAW, when the application is started

  1. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  2. Flight Hardware Virtualization for On-Board Science Data Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  3. Task analysis: How far are we from usable PRA input

    International Nuclear Information System (INIS)

    Gertman, D.I.; Blackman, H.S.; Hinton, M.F.

    1984-01-01

This chapter reviews data collected at the Idaho National Engineering Laboratory for three DOE-owned reactors (the Advanced Test Reactor, the Power Burst Facility, and the Loss of Fluids Test Reactor) in order to identify usable Probabilistic Risk Assessment (PRA) input. Task analytic procedures involve determining manning and skill levels as a means of establishing communication requirements, assessing job performance aids, and assessing the accuracy and completeness of emergency and maintenance procedures. The least understood aspect in PRA and plant reliability models is the human factor. A number of examples from the data base are discussed and offered as a means of providing more meaningful data than has been available to PRA analysts in the past. It is concluded that the plant hardware-procedures-personnel interfaces are essential to safe and efficient plant operations and that task analysis is a reasonably sound way of achieving a qualitative method for identifying those tasks most strongly associated with task difficulty, severity of consequence, and error probability

  4. Compiling quantum circuits to realistic hardware architectures using temporal planners

    Science.gov (United States)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest-neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem and generate a test suite of compilation problems for QAOA circuits of various sizes on a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
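The scheduling objective can be sketched with a toy greedy list scheduler: each gate occupies its qubits for a duration, and the compiler tries to minimize the makespan. A temporal planner searches this space far more thoroughly; the gate set and durations below are illustrative, not real hardware values.

```python
# Toy sketch of the compilation objective: schedule gates so the total
# circuit duration (makespan) is small. Gates on disjoint qubits overlap.

gates = [  # (name, qubits occupied, duration)
    ("h0", {0}, 1),
    ("h1", {1}, 1),
    ("cz01", {0, 1}, 3),
    ("h2", {2}, 1),
]

def greedy_schedule(gates):
    """Start each gate at the earliest time all of its qubits are free."""
    free_at = {}                      # qubit -> time it becomes free
    schedule = {}
    for name, qubits, dur in gates:
        start = max((free_at.get(q, 0) for q in qubits), default=0)
        schedule[name] = (start, start + dur)
        for q in qubits:
            free_at[q] = start + dur
    makespan = max(end for _, end in schedule.values())
    return schedule, makespan

sched, makespan = greedy_schedule(gates)
print(makespan)  # 4: h0, h1, h2 run in parallel at t=0; cz01 at t=1..4
```

Commuting gates correspond to freedom in reordering this list, which is exactly the flexibility the abstract says makes QAOA circuits both harder and more rewarding to optimize.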

  5. Mobile Thread Task Manager

    Science.gov (United States)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin J.

    2013-01-01

    The Mobile Thread Task Manager (MTTM) is being applied to parallelizing existing flight software to understand the benefits and to develop new techniques and architectural concepts for adapting software to multicore architectures. It allocates and load-balances tasks for a group of threads that migrate across processors to improve cache performance. In order to balance-load across threads, the MTTM augments a basic map-reduce strategy to draw jobs from a global queue. In a multicore processor, memory may be "homed" to the cache of a specific processor and must be accessed from that processor. The MTTB architecture wraps access to data with thread management to move threads to the home processor for that data so that the computation follows the data in an attempt to avoid L2 cache misses. Cache homing is also handled by a memory manager that translates identifiers to processor IDs where the data will be homed (according to rules defined by the user). The user can also specify the number of threads and processors separately, which is important for tuning performance for different patterns of computation and memory access. MTTM efficiently processes tasks in parallel on a multiprocessor computer. It also provides an interface to make it easier to adapt existing software to a multiprocessor environment.
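A minimal sketch of the cache-homing idea, with hypothetical names (the real MTTM migrates native threads and homes data in per-processor caches; this single-threaded simulation only shows the routing rule):

```python
# Sketch of data homing: each worker owns a queue, and tasks are routed to
# the worker that "homes" their data, so computation follows the data and
# avoids remote (L2-miss-like) access. Names and rules are illustrative.
from collections import deque

NUM_WORKERS = 4

def home_of(data_id):
    """Memory-manager rule mapping a data identifier to its home worker."""
    return data_id % NUM_WORKERS

queues = [deque() for _ in range(NUM_WORKERS)]

def submit(task, data_id):
    """Route the task to the home worker of its data (load drawn per-worker)."""
    queues[home_of(data_id)].append((task, data_id))

def run_all():
    results = []
    for worker, q in enumerate(queues):
        while q:
            task, data_id = q.popleft()
            assert home_of(data_id) == worker  # computation followed the data
            results.append(task(data_id))
    return results

submit(lambda d: d + 1, 8)   # data 8 is homed on worker 0
submit(lambda d: d * 2, 5)   # data 5 is homed on worker 1
print(run_all())             # [9, 10]
```

The MTTM additionally load-balances by letting idle threads draw from a global queue, and lets the user tune thread and processor counts separately; this sketch shows only the homing half of that design.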

  6. Event-driven processing for hardware-efficient neural spike sorting

    Science.gov (United States)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means for hardware implementation that is completely activity dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented in a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
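Level-crossing sampling can be sketched in a few lines: an event is emitted only when the signal moves by one quantization step, so a flat baseline produces no data and the rate tracks spiking activity. The delta and signal values below are illustrative, not the paper's.

```python
# Sketch of continuous-time level-crossing sampling: emit an event only
# when the signal crosses a quantization level, instead of sampling at a
# fixed rate. (Illustrative parameters; not the paper's implementation.)

def level_crossing_encode(signal, delta=1.0):
    """Return (index, direction) events where the signal moves by >= delta."""
    events = []
    ref = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        while x >= ref + delta:      # upward crossing(s)
            ref += delta
            events.append((i, +1))
        while x <= ref - delta:      # downward crossing(s)
            ref -= delta
            events.append((i, -1))
    return events

# A flat baseline with one spike produces events only around the spike,
# which is why the data rate is activity dependent.
sig = [0, 0, 0, 3.2, 0.5, 0, 0]
print(level_crossing_encode(sig))
```

Time-domain features such as the count and spacing of these events are what the paper's sorter extracts directly from the bitstream, avoiding full waveform reconstruction.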

  7. Dispersal and migration

    Directory of Open Access Journals (Sweden)

    Schwarz, C.

    2004-06-01

philopatric movement of geese using a classic multi–state design. Previous studies of philopatry often rely upon simple return rates; however, good mark–recapture studies do not need to assume equal detection probabilities in space and time. This is likely the most important contribution of multi–state modelling to the study of movement. As with many of these studies, the most pressing problem in the analysis is the explosion in the number of parameters and the need to choose parsimonious models to get good precision. Drake and Alisauska demonstrate that model choice still remains an art, with a great deal of biological insight being very helpful in the task. There is still plenty of scope for novel methods to study migration. Traditionally, there has been a clear-cut distinction between birds being labelled as "migrant" or "resident" on the basis of field observations and qualitative interpretations of patterns of ring recoveries. However, there are intermediate species where only part of the population migrates (partial migrants) or where different components of the population migrate to different extents (differential migrants). Siriwardena, Wernham and Baillie (Siriwardena et al., 2004) develop a novel method that produces a quantitative index of migratory tendency. The method uses distributions of ringing-to-recovery distances to classify individual species' patterns of movement relative to those of other species. The areas between species' cumulative distance distributions are used with multi-dimensional scaling to produce a similarity map among species. This map can be used to investigate the factors that affect the migratory strategies that species adopt, such as body size, territoriality and distribution, and in studies of their consequences for demographic parameters such as annual survival and the timing of breeding. The key assumption of the method is similar recovery effort across species over space and time. It would be interesting to

  8. Fuel cell hardware-in-loop

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.M.; Randolf, G.; Virji, M. [University of Hawaii, Hawaii Natural Energy Institute (United States); Hauer, K.H. [Xcellvision (Germany)

    2006-11-08

Hardware-in-loop (HiL) methodology is well established in the automotive industry. One typical application is the development and validation of control algorithms for drive systems by simulating the vehicle plus the vehicle environment in combination with specific control hardware as the HiL component. This paper introduces the use of a fuel cell HiL methodology for fuel cell and fuel cell system design and evaluation, where the fuel cell (or stack) is the unique HiL component that requires evaluation and development within the context of a fuel cell system designed for a specific application (e.g., a fuel cell vehicle) in a typical use pattern (e.g., a standard drive cycle). Initial experimental results are presented for the example of a fuel cell within a fuel cell vehicle simulation under a dynamic drive cycle. (author)

  9. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 per second. Triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (L1) is hardware based and the second (L2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while effectively suppressing the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2, and the CPU requirements may limit this even further at the highest instantane...

  10. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 13 or 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 per second. Triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (Level-1) is hardware based and the second (Level-2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while effectively suppressing the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2, and the CPU requirements may limit this even further at the hig...

  11. Flexible hardware design for RSA and Elliptic Curve Cryptosystems

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Örs, S.B.; Okamoto, T.

    2004-01-01

    This paper presents a scalable hardware implementation of both commonly used public key cryptosystems, RSA and Elliptic Curve Cryptosystem (ECC) on the same platform. The introduced hardware accelerator features a design which can be varied from very small (less than 20 Kgates) targeting wireless

  12. Sharing open hardware through ROP, the robotic open platform

    NARCIS (Netherlands)

    Lunenburg, J.; Soetens, R.P.T.; Schoenmakers, F.; Metsemakers, P.M.G.; van de Molengraft, M.J.G.; Steinbuch, M.; Behnke, S.; Veloso, M.; Visser, A.; Xiong, R.

    2014-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  13. Sharing open hardware through ROP, the Robotic Open Platform

    NARCIS (Netherlands)

    Lunenburg, J.J.M.; Soetens, R.P.T.; Schoenmakers, Ferry; Metsemakers, P.M.G.; Molengraft, van de M.J.G.; Steinbuch, M.

    2013-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  14. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware speci...

  15. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2×10^6 voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine

  16. Brain inspired hardware architectures - Can they be used for particle physics ?

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    After their inception in the 1940s and several decades of moderate success, artificial neural networks have recently demonstrated impressive achievements in analysing big data volumes. Wide and deep network architectures can now be trained using high performance computing systems, graphics card clusters in particular. Despite their successes these state-of-the-art approaches suffer from very long training times and huge energy consumption, in particular during the training phase. The biological brain can perform similar and superior classification tasks in the space and time domains, but at the same time exhibits very low power consumption, rapid unsupervised learning capabilities and fault tolerance. In the talk the differences between classical neural networks and neural circuits in the brain will be presented. Recent hardware implementations of neuromorphic computing systems and their applications will be shown. Finally, some initial ideas to use accelerated neural architectures as trigger processors i...

  17. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    Science.gov (United States)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limits its ability to deploy large amounts of capacity in IP routers. In this paper, we propose new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and has dramatically reduced the costs and power consumption to 62% and 52%, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
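The lookup that FPS-RAM (like TCAM) accelerates is IP longest-prefix match. A plain-software sketch of the lookup semantics follows, with an illustrative routing table; this shows what the hardware computes, not the FPS-RAM algorithm itself.

```python
# Sketch of IP longest-prefix search, the forwarding-engine lookup that
# TCAM and FPS-RAM implement in hardware. (Table entries are invented.)
import ipaddress

table = {
    "10.0.0.0/8":  "port1",
    "10.1.0.0/16": "port2",
    "10.1.2.0/24": "port3",
}

def lookup(ip):
    """Return the next hop of the longest matching prefix."""
    best_len, best_hop = -1, None
    addr = ipaddress.ip_address(ip)
    for prefix, hop in table.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best_len, best_hop = net.prefixlen, hop
    return best_hop

print(lookup("10.1.2.3"))   # port3 (the /24 wins over the /16 and /8)
print(lookup("10.9.9.9"))   # port1 (only the /8 matches)
```

A TCAM answers this in one parallel lookup over all entries, which is where the hardware and power cost comes from; FPS-RAM's contribution is reproducing the same single-lookup interface with cheaper RAM structures.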

  18. Technical Challenges and Lessons from the Migration of the GLOBE Data and Information System to Utilize Cloud Computing Service

    Science.gov (United States)

    Moses, J. F.; Memarsadeghi, N.; Overoye, D.; Littlefield, B.

    2016-12-01

The Global Learning and Observation to Benefit the Environment (GLOBE) Data and Information System supports an international science and education program with capabilities to accept local environment observations, archive, display and visualize them along with global satellite observations. Since its inception twenty years ago, the Web and database system has been upgraded periodically to accommodate the changes in technology and the steady growth of GLOBE's education community and collection of observations. Recently, near the end-of-life of the system hardware, new commercial computer platform options were explored and a decision made to utilize Cloud services. The GLOBE DIS has now been fully deployed and maintained using Amazon Cloud services for over two years. This paper reviews the early risks, actual challenges, and some unexpected findings as a result of the GLOBE DIS migration. We describe the plans, cost drivers and estimates, highlight adjustments that were made and suggest improvements. We present the trade studies for provisioning, for load balancing, networks, processing, storage, as well as production, staging and backup systems. We outline the migration team's skills and required level of effort for transition, and resulting changes in the overall maintenance and operations activities. Examples include incremental adjustments to processing capacity and frequency of backups, and efforts previously expended on hardware maintenance that were refocused onto application-specific enhancements.

  19. Technical Challenges and Lessons from the Migration of the GLOBE Data and Information System to Utilize Cloud Computing Service

    Science.gov (United States)

    Moses, John F.; Memarsadeghi, Nargess; Overoye, David; Littlefield, Brain

    2017-01-01

    The Global Learning and Observation to Benefit the Environment (GLOBE) Data and Information System supports an international science and education program with capabilities to accept local environment observations, archive, display and visualize them along with global satellite observations. Since its inception twenty years ago, the Web and database system has been upgraded periodically to accommodate the changes in technology and the steady growth of GLOBE's education community and collection of observations. Recently, near the end-of-life of the system hardware, new commercial computer platform options were explored and a decision made to utilize Cloud services. The GLOBE DIS has now been fully deployed and maintained using Amazon Cloud services for over two years. This paper reviews the early risks, actual challenges, and some unexpected findings as a result of the GLOBE DIS migration. We describe the plans, cost drivers and estimates, highlight adjustments that were made and suggest improvements. We present the trade studies for provisioning, for load balancing, networks, processing, storage, as well as production, staging and backup systems. We outline the migration team's skills and required level of effort for transition, and resulting changes in the overall maintenance and operations activities. Examples include incremental adjustments to processing capacity and frequency of backups, and efforts previously expended on hardware maintenance that were refocused onto application-specific enhancements.

  20. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulty of compiling imperative input languages into efficient hardware code. Moreover, the hardware code generated by HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  1. Hardware and software status of QCDOC

    International Nuclear Information System (INIS)

    Boyle, P.A.; Chen, D.; Christ, N.H.; Clark, M.; Cohen, S.D.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Mawhinney, R.D.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2004-01-01

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% on machines with several tens of thousands of nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enables QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained in real hardware as well as in simulation

  2. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning has boosted the field of artificial intelligence towards unprecedented achievements and applications in several fields. These prominent results were achieved in parallel with the first successful demonstrations of fault-tolerant hardware for quantum information processing. To what extent deep learning can take advantage of hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards the implementation of advanced quantum algorithms, including quantum deep learning.

  3. Introduction to co-simulation of software and hardware in embedded processor systems

    Energy Technology Data Exchange (ETDEWEB)

    Dreike, P.L.; McCoy, J.A.

    1996-09-01

    From the dawn of the first use of microprocessors and microcontrollers in embedded systems, software has been blamed for products being late to market. This is because software is traditionally developed after the hardware is fabricated. During the past few years, the use of Hardware Description (or Design) Languages (HDLs) and digital simulation has advanced to a point where the concurrent development of software and hardware can be contemplated using simulation environments. This offers the potential of 50% or greater reductions in time-to-market for embedded systems. This paper is a tutorial on the technical issues that underlie software-hardware (sw-hw) co-simulation and the current state of the art. We review the traditional sequential hardware-software design paradigm and suggest a paradigm for concurrent design, which is supported by co-simulation of software and hardware. This is followed by sections on HDL modeling and simulation; hardware-assisted approaches to simulation; microprocessor modeling methods; brief descriptions of four commercial products for sw-hw co-simulation; and a description of our own experiments to develop a co-simulation environment.

  4. RRFC hardware operation manual

    International Nuclear Information System (INIS)

    Abhold, M.E.; Hsue, S.T.; Menlove, H.O.; Walton, G.

    1996-05-01

    The Research Reactor Fuel Counter (RRFC) system was developed to assay the 235U content in spent Material Test Reactor (MTR) type fuel elements underwater in a spent fuel pool. RRFC assays the 235U content using active neutron coincidence counting and also incorporates an ion chamber for gross gamma-ray measurements. This manual describes RRFC hardware, including detectors, electronics, and performance characteristics

  5. Removal of symptomatic craniofacial titanium hardware following craniotomy: Case series and review

    Directory of Open Access Journals (Sweden)

    Sheri K. Palejwala

    2015-06-01

    Titanium craniofacial hardware has become commonplace for reconstruction and bone flap fixation following craniotomy. Complications of titanium hardware include palpability, visibility, infection, exposure, pain, and hardware malfunction, any of which can necessitate hardware removal. We describe three patients who underwent craniofacial reconstruction following craniotomies for trauma, with post-operative courses complicated by medically intractable facial pain. All three patients subsequently underwent removal of the symptomatic craniofacial titanium hardware and experienced rapid resolution of their painful paresthesias. Symptomatic plates were found in the region of the frontozygomatic suture or MacCarty keyhole, or in close proximity to the supraorbital nerve. Titanium plates, though relatively safe and low profile, can cause local nerve irritation or neuropathy. Surgeons should be cognizant of the potential complications of titanium craniofacial hardware and of the locations that are at higher risk of becoming symptomatic, necessitating a second surgery for removal.

  6. Evaluation of in vivo labelled dendritic cell migration in cancer patients.

    Science.gov (United States)

    Ridolfi, Ruggero; Riccobon, Angela; Galassi, Riccardo; Giorgetti, Gianluigi; Petrini, Massimiliano; Fiammenghi, Laura; Stefanelli, Monica; Ridolfi, Laura; Moretti, Andrea; Migliori, Giuseppe; Fiorentini, Giuseppe

    2004-07-30

    BACKGROUND: Dendritic Cell (DC) vaccination is a very promising therapeutic strategy in cancer patients. The immunizing ability of DC is critically influenced by their migration activity to lymphatic tissues, where they have the task of priming naïve T-cells. In the present study, in vivo DC migration was investigated within the context of a clinical trial of antitumor vaccination. In particular, we compared the migration activity of mature Dendritic Cells (mDC) with that of immature Dendritic Cells (iDC) and also assessed intradermal versus subcutaneous administration. METHODS: DC were labelled with 99mTc-HMPAO or 111In-Oxine, and the presence of labelled DC in regional lymph nodes was evaluated at pre-set times up to a maximum of 72 h after inoculation. Determinations were carried out in 8 patients (7 melanoma and 1 renal cell carcinoma). RESULTS: It was verified that intradermal administration resulted in about a threefold higher migration to lymph nodes than subcutaneous administration, while mDC showed, on average, a six- to eightfold higher migration than iDC. The first DC were detected in lymph nodes 20-60 min after inoculation and the maximum concentration was reached after 48-72 h. CONCLUSIONS: These data obtained in vivo provide preliminary basic information on DC with respect to their antitumor immunization activity. Further research is needed to optimize the therapeutic potential of vaccination with DC.

  7. Compact vertical take-off and landing aerial vehicle for monitoring tasks in dense urban areas

    Directory of Open Access Journals (Sweden)

    Sergii FIRSOV

    2015-09-01

    The use of aerial vehicles with onboard sensing and broadcasting equipment enables the monitoring of a variety of objects and processes in inaccessible parts of the city. A hardware and software package for solving this task is proposed in the article. The presented vehicle is a vertical take-off and landing airplane of the tail-sitter type.

  8. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.
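
The mechanism in this record — counters, an address register, a trigger register, and a state machine that copies counters to memory — can be modeled minimally in software. The class and field names below are assumptions for illustration, not the patented circuit:

```python
# Minimal software model (an assumed sketch, not the patented hardware) of
# counters copied directly to memory when a trigger register is set.

class CounterCopyUnit:
    def __init__(self, n_counters, memory_size):
        self.counters = [0] * n_counters  # performance counters
        self.addr_reg = 0                 # first storage element: target memory address
        self.start_reg = 0                # second storage element: "begin copying" flag
        self.memory = [0] * memory_size

    def count(self, idx, amount=1):
        self.counters[idx] += amount      # a selected activity was observed

    def tick(self):
        # state machine: each cycle, detect the trigger and copy if it is set
        if self.start_reg:
            base = self.addr_reg
            for i, v in enumerate(self.counters):
                self.memory[base + i] = v
            self.start_reg = 0            # copy complete, clear the trigger

unit = CounterCopyUnit(n_counters=4, memory_size=16)
unit.count(0, 7)
unit.count(2, 3)
unit.addr_reg = 8
unit.start_reg = 1
unit.tick()
print(unit.memory[8:12])  # [7, 0, 3, 0]
```

The point of the hardware approach is that this copy happens without software involvement; the model only makes the dataflow explicit.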

  9. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts, there is interest, and open source hardware has gained visible momentum recently, with several well-known universities including UC Berkeley, Cambridge and ETH Zürich actively working on large projects involving open source hardware, attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  10. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique for accelerating the radial point interpolation method (RPIM), a meshfree method, using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure for the analysis targets. In the proposed method, the computation process is divided into small processes suitable for the parallel architecture of graphics hardware, executed in a single-instruction multiple-data manner.

  11. No-hardware-signature cybersecurity-crypto-module: a resilient cyber defense agent

    Science.gov (United States)

    Zaghloul, A. R. M.; Zaghloul, Y. A.

    2014-06-01

    We present an optical cybersecurity-crypto-module as a resilient cyber defense agent. It has no hardware signature since it is bitstream reconfigurable, where a single hardware architecture functions as any selected device of all possible ones with the same number of inputs. For a two-input digital device, a 4-digit bitstream of 0s and 1s determines which device, of a total of 16 devices, the hardware performs as. Accordingly, the hardware itself is not physically reconfigured, but its performance is. Such a defense agent allows the attack to take place, rendering it harmless. On the other hand, if the system is already infected with malware sending out information, the defense agent allows the information to go out, rendering it meaningless. The hardware architecture is immune to side attacks since such an attack would reveal information on the attack itself and not on the hardware. This cyber defense agent can be used to secure a point-to-point, point-to-multipoint, a whole network, and/or a single entity in cyberspace, thereby ensuring trust between cyber resources. It can provide secure communication in an insecure network. We provide the hardware design and explain how it works. Scalability of the design is briefly discussed. (Protected by United States Patents No.: US 8,004,734; US 8,325,404; and other national patents worldwide.)
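
The bitstream-reconfigurable two-input device can be illustrated as a 4-bit lookup table: the bitstream is the truth table, so one fixed "hardware" function behaves as any of the 16 possible two-input gates. The encoding below (bit i as the output for inputs a, b with i = 2a + b) is an assumed convention for demonstration, not necessarily the patented one:

```python
# Sketch of a bitstream-configured two-input device: the 4-digit bitstream is
# the truth table, selecting one of the 16 possible two-input boolean functions.
# The index convention i = 2*a + b is an assumption for this illustration.

def make_device(bitstream):
    """bitstream[i] is the output for inputs (a, b) with i = 2*a + b."""
    table = [int(c) for c in bitstream]
    return lambda a, b: table[2 * a + b]

and_gate = make_device("0001")  # outputs 1 only for a=1, b=1
xor_gate = make_device("0110")
print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([xor_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Changing the bitstream changes the function without changing the structure, which is exactly why the device leaves no fixed hardware signature to attack.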

  12. NNDC database migration project

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, Thomas W; Dunford, Charles L [U.S. Department of Energy, Brookhaven Science Associates (United States)

    2004-03-01

    NNDC database migration was necessary to replace obsolete hardware and software, to be compatible with the industry standard in relational databases (mature software, a large base of supporting software for administration and dissemination, and replication and synchronization tools), and to improve user access in terms of interface and speed. The Relational Database Management System (RDBMS) consists of a Sybase Adaptive Server Enterprise (ASE), which is relatively easy to move between different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), the Structured Query Language (SQL), and administrative tools written in Java. Linux or UNIX platforms can be used. The existing ENSDF datasets are often very large and will need to be reworked. Both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative Iγ) in the RI field, so it is not immediately obvious which of the old values has been changed. But primary and secondary intensities are now available on the same scale, and the intensity normalization has been done for us. We will gain access to a large volume of data from Budapest, and some of those gamma-ray intensity and energy data will be superior to what we already have.

  13. Family migration and employment: the importance of migration history and gender.

    Science.gov (United States)

    Bailey, A J; Cooke, T J

    1998-01-01

    "This article uses event history data to specify a model of employment returns to initial migration, onward migration, and return migration among newly married persons in the U.S. Husbands are more likely to be full-time employed than wives, and being a parent reduces the employment odds among married women. Employment returns to repeated migration differ by gender, with more husbands full-time employed after onward migration and more wives full-time employed after return migration events. We interpret these empirical findings in the context of family migration theory, segmented labor market theory, and gender-based responsibilities." Data are from the National Longitudinal Survey of Youth from 1979 to 1991. excerpt

  14. A preference for migration

    OpenAIRE

    Stark, Oded

    2007-01-01

    At least to some extent migration behavior is the outcome of a preference for migration. The pattern of migration as an outcome of a preference for migration depends on two key factors: imitation technology and migration feasibility. We show that these factors jointly determine the outcome of a preference for migration and we provide examples that illustrate how the prevalence and transmission of a migration-forming preference yield distinct migration patterns. In particular, the imitation of...

  15. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken, and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
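
One reason binary descriptors such as TreeBASIS suit hardware is that matching reduces to XOR and popcount, with no floating point. A sketch of that building block, using made-up 16-bit descriptor values rather than actual TreeBASIS output:

```python
# Hamming-distance matching of binary descriptors: XOR then popcount.
# The 16-bit descriptor values here are invented for illustration.

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def best_match(query: int, candidates: list) -> int:
    """Index of the candidate descriptor nearest to the query."""
    return min(range(len(candidates)), key=lambda i: hamming(query, candidates[i]))

descriptors = [0b1010101010101010, 0b1111000011110000, 0b1010101010101110]
print(hamming(descriptors[0], descriptors[2]))           # 1
print(best_match(0b1010101010101011, descriptors))       # 0
```

In hardware the same operation is a wide XOR followed by a popcount tree, one of the cheapest comparisons available, which is what makes floating-point-free matching attractive.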

  16. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation. PMID:24189331
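
The fuzzy C-means step that the architecture above merges into one hardware updating process can be sketched numerically. The pure-Python version below uses 1-D data, fuzzifier m = 2, and invented points and initial centers, so it illustrates only the mathematics, not the circuit:

```python
# One FCM iteration: membership update followed by centroid update.
# Data, initial centers, and fuzzifier m=2 are invented for illustration.

def fcm_step(points, centers, m=2.0):
    # membership u[i][j]: degree to which point i belongs to cluster j
    u = []
    for x in points:
        dists = [abs(x - c) + 1e-12 for c in centers]  # avoid divide-by-zero
        row = []
        for dj in dists:
            denom = sum((dj / dk) ** (2 / (m - 1)) for dk in dists)
            row.append(1.0 / denom)
        u.append(row)
    # centroid update: mean weighted by u**m
    new_centers = []
    for j in range(len(centers)):
        num = sum((u[i][j] ** m) * points[i] for i in range(len(points)))
        den = sum(u[i][j] ** m for i in range(len(points)))
        new_centers.append(num / den)
    return u, new_centers

points = [0.0, 0.1, 0.9, 1.0]   # two obvious clusters
centers = [0.2, 0.8]
for _ in range(20):
    u, centers = fcm_step(points, centers)
print([round(c, 2) for c in centers])  # converges near [0.05, 0.95]
```

The hardware version avoids storing the full membership matrix by folding the two loops above into a single pass; the arithmetic per point is the same.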

  17. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation.

  18. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  19. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where, if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed in that the system transmits information to the cells that the first cell has...

  20. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code used at CERN for beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely, rings of charges and point-to-point. This report presents studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with the point-to-point simulations of space-charge effects can be sped up at least 72 times using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance at least 4 times without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeated-computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...

  1. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  2. Common and Specific Features of Migration Flows In Russia, CIS and Far Abroad

    Directory of Open Access Journals (Sweden)

    Tamara Vladimirovna Kuprina

    2015-06-01

    The article discusses the features of migration flows in Russia, the CIS and countries farther abroad in terms of transnationality and translocality. Special attention is paid to migration flows from Asia, depending on gender perception. The analysis determines the preferences of migrants of different age groups, showing their desire for a more independent financial and professional life, its relationship with family responsibilities, and their plans for the future. The data, obtained by the method of continuous sampling, lead to the following conclusions: migration is a multidimensional phenomenon that requires multidisciplinary research; the heterogeneity of migration flows with respect to socio-cultural and gender features must be taken into account; the problem of illegal migration requires special solutions; and, in connection with the emigration of highly skilled professionals and the immigration of low-skilled personnel, indicators of migratory replacement and of attracting compatriots from abroad are important. The data obtained can be used for ethnic and cultural planning aimed at improving production activities, which affects the quality of solving production tasks and the programming of migrant adaptation, as the significant impact of intangible factors on the development of society has long been noted by economists. Thus, economic sustainability requires new thinking based on the socio-cultural sustainability determined by such global phenomena as international migration flows.

  3. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and software for acquisition. The hardware consists of an analog-to-digital conversion card, developed in wire-wrap. Its function is to digitize the analog signals provided by the gamma camera. Acquisitions are made in list or frame mode. (C.G.C.)

  4. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-05-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally implementing chaotic systems. Subsequently, some techniques to eliminate such defects, including the ultimately adopted scheme, are listed and explained in detail. Moreover, the thesis describes original work on the design of an encryption system to encrypt MPEG-2 video streams. Information about the MPEG-2 standard that fits this design context is presented. Then, the security of the proposed system is exhaustively analyzed and the performance is compared with other reported systems, showing superiority in performance and security. The thesis focuses more on the hardware and the circuit aspect of the system's design. The system is realized on a Xilinx Virtex-4 FPGA with hardware parameters and throughput performance surpassing conventional encryption systems.
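
The keystream idea behind chaos-based symmetric encryption can be illustrated with a toy discrete map. The logistic map, seed, and byte quantization below are assumptions for demonstration only, not the thesis design (which uses continuous chaotic systems realized in hardware):

```python
# Toy chaos-based stream cipher: a chaotic map generates a keystream that is
# XORed with the data. Map choice, parameters, and quantization are invented
# for illustration; a real design needs careful cryptographic analysis.

def logistic_keystream(seed: float, n: int, r: float = 3.99):
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1 - x)              # logistic map iteration
        out.append(int(x * 256) & 0xFF)  # quantize the state to one byte
    return out

def xor_cipher(data: bytes, seed: float) -> bytes:
    ks = logistic_keystream(seed, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"MPEG-2 frame bytes"
enc = xor_cipher(msg, seed=0.123456)  # encrypt
dec = xor_cipher(enc, seed=0.123456)  # same seed decrypts (XOR is symmetric)
print(dec == msg)  # True
```

The thesis's point about digital degradation shows up even here: in finite precision the map's orbit eventually becomes periodic, which is one of the defects the adopted scheme has to mitigate.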

  5. Two-dimensional systolic-array architecture for pixel-level vision tasks

    Science.gov (United States)

    Vijverberg, Julien A.; de With, Peter H. N.

    2010-05-01

    This paper presents ongoing work on the design of a two-dimensional (2D) systolic array for image processing. This component is designed to operate on a multi-processor system-on-chip. In contrast with other 2D systolic-array architectures and many other hardware accelerators, we investigate the applicability of executing multiple tasks in a time-interleaved fashion on the Systolic Array (SA). This leads to a lower external memory bandwidth and better load balancing of the tasks on the different processing tiles. To enable the interleaving of tasks, we add a shadow-state register for fast task switching. To reduce the number of accesses to the external memory, we propose to share the communication assist between consecutive tasks. A preliminary, non-functional version of the SA has been synthesized for an XV4S25 FPGA device and yields a maximum clock frequency of 150 MHz, requiring 1,447 slices and 5 memory blocks. Mapping tasks from video content-analysis applications from the literature onto the SA yields reductions in execution time of 1-2 orders of magnitude compared to the software implementation. We conclude that the choice for an SA architecture is useful, but that a scaled-down version of the SA, featuring less logic and fewer processing and pipeline stages at a lower clock frequency, would be sufficient for a video-analysis system-on-chip.
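
The shadow-state register for fast task switching can be sketched as a pair of contexts that swap in a single step, so a switch costs one swap instead of a full save/restore through memory. The class below is a hypothetical software model, not the synthesized design:

```python
# Hypothetical model of a shadow-state register: the next task's context is
# preloaded into the shadow while the active task runs, so switching is a swap.

class ProcessingElement:
    def __init__(self):
        self.active = {"task": None, "acc": 0}
        self.shadow = {"task": None, "acc": 0}

    def load_shadow(self, task):
        # preload the next task's context while the active task still runs
        self.shadow = {"task": task, "acc": 0}

    def switch(self):
        # the fast task switch: one swap, no memory traffic
        self.active, self.shadow = self.shadow, self.active

    def step(self, value):
        self.active["acc"] += value
        return self.active["acc"]

pe = ProcessingElement()
pe.load_shadow("taskA")
pe.switch()
pe.step(3); pe.step(4)   # taskA accumulates 7
pe.load_shadow("taskB")
pe.switch()
pe.step(10)              # taskB accumulates 10
pe.switch()              # switch back: taskA's context is intact
print(pe.active["task"], pe.active["acc"])  # taskA 7
```

This is what makes time-interleaving tasks on the tiles cheap enough to improve load balancing without inflating external memory bandwidth.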

  6. Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.

    Science.gov (United States)

    Fukurai, H

    1991-01-01

    This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.

  7. Systematic development of industrial control systems using Software/Hardware Engineering

    NARCIS (Netherlands)

    Voeten, J.P.M.; van der Putten, P.H.A.; Stevens, M.P.J.; Milligan, P.; Corr, P.

    1997-01-01

    SHE (Software/Hardware Engineering) is a new object-oriented analysis, specification and design method for complex reactive hardware/software systems. SHE is based on the formal specification language POOSL and a design framework guiding analysis and design activities. This paper reports on the

  8. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    Full text: The timing factor is very important for medical imaging systems, which can nowadays be synchronized by vital human signals such as the heartbeat or breathing. The use of hardware-implemented devices in such a system has advantages, given the high speed of information processing combined with arbitrarily low cost on the market. This article describes a hardware system based on the FPGA family of electronic programmable logic, model Cyclone II from the ALTERA Corporation. The hardware was implemented on the UP3 ALTERA Kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points each, of the Shepp and Logan phantom created in MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0-65535. Also, the normalization factor must be observed so as not to saturate the image during the reconstruction and filtering process. The test demonstrates in principle the possibility of building CT image reconstruction systems for any reasonable amount of input data by arranging the hardware units to work in parallel, as we have tested. However, further studies are necessary for a better understanding of the error propagation from tomographic projections to the reconstructed image within the implemented method. (author)

  9. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    Full Text Available The performance of computer graphics systems is increasing faster than that of any other computing application. Algorithms for clipping lines against convex polygons have been studied for a long time and many research papers have been published. In spite of the latest graphics-hardware developments and significant increases in performance, clipping is still a bottleneck of any graphical system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed, and a hardware implementation of the line clipping algorithm is presented and finally formulated and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is a positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
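    The "positional code generator" described above resembles the outcode stage of the classic Cohen-Sutherland algorithm. As an illustration only (the paper's FPGA unit is not available; the window bounds and function names below are invented), a minimal software model of outcode-based line clipping:

    ```python
    # Cohen-Sutherland-style clipping: a 4-bit "positional code" locates
    # each endpoint relative to the clip window, enabling trivial
    # accept/reject tests before any intersection is computed.
    LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

    def outcode(x, y, xmin, ymin, xmax, ymax):
        code = 0
        if x < xmin: code |= LEFT
        elif x > xmax: code |= RIGHT
        if y < ymin: code |= BOTTOM
        elif y > ymax: code |= TOP
        return code

    def clip_line(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
        """Return the clipped segment, or None if it lies fully outside."""
        c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
        c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
        while True:
            if not (c0 | c1):      # both endpoints inside: trivially accept
                return (x0, y0, x1, y1)
            if c0 & c1:            # both share an outside zone: reject
                return None
            c = c0 or c1           # pick an endpoint that is outside
            if c & TOP:
                x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax
            elif c & BOTTOM:
                x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin
            elif c & RIGHT:
                y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax
            else:                  # LEFT
                y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin
            if c == c0:
                x0, y0 = x, y
                c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
            else:
                x1, y1 = x, y
                c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    ```

    Because the outcode test is a few comparisons and bitwise operations per endpoint, it maps naturally onto the combinational-logic stage that the paper's positional code generator implements.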

  10. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations of technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. The report focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts: iSCSI and other technologies, and hardware and software RAID solutions. The first part studies different technologies that clients can use to access disk servers over a gigabit ethernet network, covering block-access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using Linux software RAID and IDE cards, and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  11. Visual Cluster Analysis for Computing Tasks at Workflow Management System of the ATLAS Experiment

    CERN Document Server

    Grigoryeva, Maria; The ATLAS collaboration

    2018-01-01

    Hundreds of petabytes of experimental data in high energy and nuclear physics (HENP) have already been obtained by unique scientific facilities such as the LHC, RHIC, and KEK. As the accelerators are modernized (with energy and luminosity increased), data volumes are rapidly growing and have reached the exabyte scale, which also increases the number of analysis and data-processing tasks competing continuously for computational resources. This growth in processing tasks is met by increasing the performance of the computing environment through the involvement of high-performance computing resources, forming a heterogeneous distributed computing environment (hundreds of distributed computing centers). In addition, errors occur while executing data analysis and processing tasks, caused by software and hardware failures. With a distributed model of data processing and analysis, the optimization of data management and workload systems becomes a fundamental task, and the ...

  12. The Globalisation of migration

    OpenAIRE

    Milan Mesić

    2002-01-01

    The paper demonstrates that contemporary international migration is a constitutive part of the globalisation process. After defining the concepts of globalisation and the globalisation of migration, the author discusses six key themes, linking globalisation and international migration (“global cities”, the scale of migration; diversification of migration flows; globalisation of science and education; international migration and citizenship; emigrant communities and new identities). First, in ...

  13. Hardware based redundant multi-threading inside a GPU for improved reliability

    Science.gov (United States)

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.

  14. Basics of spectroscopic instruments. Hardware of NMR spectrometer

    International Nuclear Information System (INIS)

    Sato, Hajime

    2009-01-01

    NMR is a powerful tool for the structure analysis of small molecules, natural products, biological macromolecules, synthesized polymers, samples from materials science, and so on. Magnetic Resonance Imaging (MRI) is applicable to plants and animals. Because most NMR experiments can be run in an automation mode, one can forget the hardware of NMR spectrometers; it is nevertheless good to understand the features and performance of NMR spectrometers. Here I present the hardware of a modern NMR spectrometer which is fully equipped with digital technology. (author)

  15. Integrated circuit authentication hardware Trojans and counterfeit detection

    CERN Document Server

    Tehranipoor, Mohammad; Zhang, Xuehui

    2013-01-01

    This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions. 

  16. Migration and Remittances : Recent Developments and Outlook - Transit Migration

    OpenAIRE

    World Bank Group

    2018-01-01

    This Migration and Development Brief reports global trends in migration and remittance flows, as well as developments related to the Global Compact on Migration (GCM), and the Sustainable Development Goal (SDG) indicators for volume of remittances as percentage of gross domestic product (GDP) (SDG indicator 17.3.2), reducing remittance costs (SDG indicator 10.c.1) and recruitment costs (SD...

  17. Evaluation of in vivo labelled dendritic cell migration in cancer patients

    Directory of Open Access Journals (Sweden)

    Ridolfi Laura

    2004-07-01

    Full Text Available Abstract Background Dendritic Cell (DC) vaccination is a very promising therapeutic strategy in cancer patients. The immunizing ability of DC is critically influenced by their migration activity to lymphatic tissues, where they have the task of priming naïve T-cells. In the present study in vivo DC migration was investigated within the context of a clinical trial of antitumor vaccination. In particular, we compared the migration activity of mature Dendritic Cells (mDC) with that of immature Dendritic Cells (iDC) and also assessed intradermal versus subcutaneous administration. Methods DC were labelled with 99mTc-HMPAO or 111In-Oxine, and the presence of labelled DC in regional lymph nodes was evaluated at pre-set times up to a maximum of 72 h after inoculation. Determinations were carried out in 8 patients (7 melanoma and 1 renal cell carcinoma). Results It was verified that intradermal administration resulted in about a threefold higher migration to lymph nodes than subcutaneous administration, while mDC showed, on average, a six- to eightfold higher migration than iDC. The first DC were detected in lymph nodes 20–60 min after inoculation and the maximum concentration was reached after 48–72 h. Conclusions These data obtained in vivo provide preliminary basic information on DC with respect to their antitumor immunization activity. Further research is needed to optimize the therapeutic potential of vaccination with DC.

  18. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Full Text Available Scientific technical courses are an important component of any student's education. These courses are usually characterised by the fact that the students carry out experiments in special laboratories, which leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it doesn't seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab, which makes student participation possible at any time and from any place, while nevertheless conveying a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components, corresponding to a fully equipped laboratory workstation, which are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically. Judging and marking are also both performed electronically. Since 2003 the Mobile Hardware Lab has been offered in a completely web-based form.

  19. Hardware controls for the STAR experiment at RHIC

    International Nuclear Information System (INIS)

    Reichhold, D.; Bieser, F.; Bordua, M.; Cherney, M.; Chrin, J.; Dunlop, J.C.; Ferguson, M.I.; Ghazikhanian, V.; Gross, J.; Harper, G.; Howe, M.; Jacobson, S.; Klein, S.R.; Kravtsov, P.; Lewis, S.; Lin, J.; Lionberger, C.; LoCurto, G.; McParland, C.; McShane, T.; Meier, J.; Sakrejda, I.; Sandler, Z.; Schambach, J.; Shi, Y.; Willson, R.; Yamamoto, E.; Zhang, W.

    2003-01-01

    The STAR detector sits in a high radiation area when operating normally; therefore it was necessary to develop a robust system to remotely control all hardware. The STAR hardware controls system monitors and controls approximately 14,000 parameters in the STAR detector. Voltages, currents, temperatures, and other parameters are monitored. Effort has been minimized by the adoption of experiment-wide standards and the use of pre-packaged software tools. The system is based on the Experimental Physics and Industrial Control System (EPICS). VME processors communicate with subsystem-based sensors over a variety of field busses, with High-level Data Link Control (HDLC) being the most prevalent. Other features of the system include interfaces to accelerator and magnet control systems, a web-based archiver, and C++-based communication between STAR online, run control and hardware controls and their associated databases. The system has been designed for easy expansion as new detector elements are installed in STAR

  20. Motion compensation in digital subtraction angiography using graphics hardware.

    Science.gov (United States)

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts often reduce the diagnostic value of the technique, so automated, fast and accurate motion compensation is required. To cope with this requirement, we first examine a method explicitly designed to detect local motion in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to have mapped an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, and the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that computation with integer precision could already be sufficient.
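    The paper's histogram-based similarity measure and its GPU mapping are not reproduced here; as a rough software illustration of the underlying exhaustive block-matching search (with sum of absolute differences standing in for the histogram-based score, and all names invented), the per-block displacement estimate looks like:

    ```python
    import numpy as np

    def best_displacement(ref, live, top, left, size=8, radius=4):
        """Exhaustive block matching: find the (dy, dx) within +/-radius
        that best aligns a size x size block of `live` against `ref`.
        SAD is used here as a stand-in for the paper's histogram measure."""
        block = live[top:top + size, left:left + size].astype(np.int32)
        best, best_dydx = None, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = top + dy, left + dx
                # skip candidate positions that fall outside the reference
                if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                    continue
                cand = ref[y:y + size, x:x + size].astype(np.int32)
                sad = int(np.abs(block - cand).sum())
                if best is None or sad < best:
                    best, best_dydx = sad, (dy, dx)
        return best_dydx
    ```

    On the GPU, each candidate displacement (and each block) can be evaluated in parallel, which is what makes the hardware implementation so much faster than this sequential sketch.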

  1. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
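    As a toy software analogue of this idea (the patented mechanism is implemented in DMA hardware; the class name and limits below are invented for illustration), pacing can be modeled as a byte counter that blocks further injection once too many bytes are outstanding on the network:

    ```python
    # Toy model of token-counter packet pacing: track bytes in flight
    # and admit a new packet only while the counter stays under a cap.
    class Pacer:
        def __init__(self, max_outstanding_bytes):
            self.limit = max_outstanding_bytes
            self.in_flight = 0           # software analogue of the token counter

        def try_send(self, nbytes):
            """Inject a packet only if it keeps outstanding bytes under the cap."""
            if self.in_flight + nbytes > self.limit:
                return False             # pace: hold the packet back
            self.in_flight += nbytes
            return True

        def on_ack(self, nbytes):
            self.in_flight -= nbytes     # credit returns as packets drain

    p = Pacer(4096)
    assert p.try_send(2048) and p.try_send(2048)
    assert not p.try_send(1)             # counter saturated: packet is paced
    p.on_ack(2048)
    assert p.try_send(1024)              # credit freed, injection resumes
    ```

    The hardware version performs the same bookkeeping per remote get operation, without software in the critical path.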

  2. The priority queue as an example of hardware/software codesign

    DEFF Research Database (Denmark)

    Høeg, Flemming; Mellergaard, Niels; Staunstrup, Jørgen

    1994-01-01

    The paper identifies a number of issues that are believed to be important for hardware/software codesign. The issues are illustrated by a small comprehensible example: a priority queue. Based on simulations of a real application, we suggest a combined hardware/software realization of the priority...

  3. All tied up: Tied staying and tied migration within the United States, 1997 to 2007

    Directory of Open Access Journals (Sweden)

    Thomas J. Cooke

    2013-10-01

    Full Text Available Background: The family migration literature presumes that women are cast into the role of the tied migrant. However, clearly identifying tied migrants is a difficult empirical task, since it requires the identification of a counterfactual: who moved but did not want to? Objective: This research develops a unique methodology to directly identify both tied migrants and tied stayers in order to investigate their frequency and determinants. Methods: Using data from the 1997 through 2009 U.S. Panel Study of Income Dynamics (PSID), propensity score matching is used to match married individuals with comparable single individuals to create counterfactual migration behaviors: who moved but would not have moved had they been single (tied migrants), and who did not move but would have moved had they been single (tied stayers). Results: Tied migration is relatively rare and not limited just to women: rates of tied migration are similar for men and women. However, tied staying is both more common than tied migration and equally experienced by men and women. Consistent with the body of empirical evidence, an analysis of the determinants of tied migration and tied staying demonstrates that family migration decisions are imbued with gender. Conclusions: Additional research is warranted to validate the unique methodology developed in this paper and to confirm its results. One line of future research should be to examine the effects of tied staying, along with tied migration, on well-being, union stability, employment, and earnings.

  4. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    This book covers the system board (memory, performance, the system timer, system clock and specifications); the coprocessor, including its programming interface and hardware interface; the power supply (input and output, protection for DC outputs, the Power Good signal); the 84-key and 101/102-key keyboards; the BIOS system; the 80286 instruction set and 80287 coprocessor; characters, keystrokes and colors; and the communication and compatibility of the IBM personal computer with respect to application direction, multitasking, and code for distinguishing between systems.

  5. Migrating to Cloud-Native Architectures Using Microservices: An Experience Report

    OpenAIRE

    Balalaie, Armin; Heydarnoori, Abbas; Jamshidi, Pooyan

    2015-01-01

    Migration to the cloud has been a popular topic in industry and academia in recent years. Despite many benefits that the cloud presents, such as high availability and scalability, most of the on-premise application architectures are not ready to fully exploit the benefits of this environment, and adapting them to this environment is a non-trivial task. Microservices have appeared recently as novel architectural styles that are native to the cloud. These cloud-native architectures can facilita...

  6. Hardware architecture design of image restoration based on time-frequency domain computation

    Science.gov (United States)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results prove that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm generality, hardware realizability, and high efficiency.
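    As a sketch of the kind of time-frequency-domain restoration such an architecture accelerates (a generic Wiener-style deconvolution, not the paper's design; the regularization constant `k` is an assumption), the core computation is one forward/inverse FFT pair plus elementwise complex arithmetic:

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, k=0.01):
        """Frequency-domain Wiener-style deconvolution: the 2-D FFT/IFFT
        and the per-pixel complex arithmetic are exactly the operations a
        TFDC hardware pipeline would accelerate (illustrative sketch)."""
        H = np.fft.fft2(psf, s=blurred.shape)   # zero-padded PSF spectrum
        G = np.fft.fft2(blurred)
        F = G * np.conj(H) / (np.abs(H) ** 2 + k)  # regularized inverse filter
        return np.real(np.fft.ifft2(F))
    ```

    An iterative method (e.g. Richardson-Lucy) would wrap a loop around a similar FFT core, which is what the proposed iteration control module is for.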

  7. Automating an EXAFS facility: hardware and software considerations

    International Nuclear Information System (INIS)

    Georgopoulos, P.; Sayers, D.E.; Bunker, B.; Elam, T.; Grote, W.A.

    1981-01-01

    The basic design considerations for computer hardware and software, applicable not only to laboratory EXAFS facilities, but also to synchrotron installations, are reviewed. Uniformity and standardization of both hardware configurations and program packages for data collection and analysis are heavily emphasized. Specific recommendations are made with respect to choice of computers, peripherals, and interfaces, and guidelines for the development of software packages are set forth. A description of two working computer-interfaced EXAFS facilities is presented which can serve as prototypes for future developments. 3 figures

  8. Population, migration and urbanization.

    Science.gov (United States)

    1982-06-01

    Despite recent estimates that natural increase is becoming a more important component of urban growth than rural-urban transfer (the excess of in-migrants over out-migrants), the share of migration in total population growth has been consistently increasing in both developed and developing countries. From a demographic perspective, the migration process involves 3 elements: an area of origin which the mover leaves and where he or she is considered an out-migrant; the destination or place of in-migration; and the period over which migration is measured. The 2 basic types of migration are internal and international. Internal migration consists of rural to urban migration, urban to urban migration, rural to rural migration, and urban to rural migration. Among these 4 types of migration various patterns or processes are followed. Migration may be direct, when the migrant moves directly from the village to the city and stays there permanently. It can be circular migration, meaning that the migrant moves to the city outside the planting season and returns to the village when he is needed on the farm. In stage migration the migrant makes a series of moves, each to a city closer to the largest or fastest growing city. Temporary migration may be one-time or cyclical. The most dominant pattern of internal migration is rural-urban. The contribution of migration to urbanization is evident. For example, the rapid urbanization and increase in urban growth from 1960-70 in the Republic of Korea can be attributed to net migration. In Asia the largest component of population movement consists of individuals and groups moving from 1 rural location to another. Recently, because urban centers could no longer absorb the growing number of migrants from other places, there has been increased interest in urban to rural population redistribution. This reverse migration has also come about due to slower rates of employment growth in the urban centers and improved economic opportunities

  9. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. The trends leading to the consideration of PCs for HEP are examined, and the status of the work being done at various HEP labs and universities is given

  10. Radon depth migration

    International Nuclear Information System (INIS)

    Hildebrand, S.T.; Carroll, R.J.

    1993-01-01

    A depth migration method is presented that uses Radon-transformed common-source seismograms as input. It is shown that the Radon depth migration method can be extended to spatially varying velocity-depth models by using asymptotic ray theory (ART) to construct wavefield continuation operators. These operators downward-continue an incident receiver-array plane wave and an assumed point-source wavefield into the subsurface. The migration velocity model is constrained to have longer characteristic wavelengths than the dominant source wavelength, so that the ART approximations for the continuation operators are valid. The method is used successfully to migrate two synthetic data examples: (1) a point diffractor, and (2) a dipping-layer and syncline interface model. It is shown that the Radon migration method has a computational advantage over the standard Kirchhoff migration method in that fewer rays are computed in a main-memory implementation

  11. Theoretical foundations of international migration process studies: analysis of key migration theories development

    Directory of Open Access Journals (Sweden)

    Shymanska K.V.

    2017-03-01

    Full Text Available The need to transform Ukraine's migration policy in light of globalized world development trends, and in response to the challenges of European integration, calls for research into the theoretical and methodological basis of migration studies and the propositions of existing theories of international migration. A bibliometric analysis of scientific publications on international migration in citation indexes found that recent research on these problems has acquired an interdisciplinary character, necessitating a transformation of migration-study approaches based on a synthesis of economic, social, and institutional theories and concepts. The article studies the theoretical propositions of existing international migration theories in the context of the evolution of scholars' views on this phenomenon. The author found that existing theories of international migration can be divided into three categories (microeconomic, macroeconomic, globalizational), which aids their understanding with a view to implementation in migration-related public administration. This makes it possible to determine which theories should be used in constructing Ukraine's state migration policy and in eliminating or reducing the negative effects of external migration.

  12. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  13. Computer organization and design the hardware/software interface

    CERN Document Server

    Hennessy, John L

    1994-01-01

    Computer Organization and Design: The Hardware/Software Interface presents the interaction between hardware and software at a variety of levels, which offers a framework for understanding the fundamentals of computing. This book focuses on the concepts that are the basis for computers.Organized into nine chapters, this book begins with an overview of the computer revolution. This text then explains the concepts and algorithms used in modern computer arithmetic. Other chapters consider the abstractions and concepts in memory hierarchies by starting with the simplest possible cache. This book di

  14. Migration and risk: net migration in marginal ecosystems and hazardous areas

    International Nuclear Information System (INIS)

    De Sherbinin, Alex; Levy, Marc; Adamo, Susana; MacManus, Kytt; Yetman, Greg; Mara, Valentina; Razafindrazay, Liana; Aichele, Cody; Pistolesi, Linda; Goodrich, Benjamin; Srebotnjak, Tanja

    2012-01-01

    The potential for altered ecosystems and extreme weather events in the context of climate change has raised questions concerning the role that migration plays in either increasing or reducing risks to society. Using modeled data on net migration over three decades from 1970 to 2000, we identify sensitive ecosystems and regions at high risk of climate hazards that have seen high levels of net in-migration and out-migration over the time period. This paper provides a literature review on migration related to ecosystems, briefly describes the methodology used to develop the estimates of net migration, then uses those data to describe the patterns of net migration for various ecosystems and high risk regions. The study finds that negative net migration generally occurs over large areas, reflecting its largely rural character, whereas areas of positive net migration are typically smaller, reflecting its largely urban character. The countries with largest population such as China and India tend to drive global results for all the ecosystems found in those countries. Results suggest that from 1970 to 2000, migrants in developing countries have tended to move out of marginal dryland and mountain ecosystems and out of drought-prone areas, and have moved towards coastal ecosystems and areas that are prone to floods and cyclones. For North America results are reversed for dryland and mountain ecosystems, which saw large net influxes of population in the period of record. Uncertainties and potential sources of error in these estimates are addressed. (letter)

  15. Neuronal Migration and Neuronal Migration Disorder in Cerebral Cortex

    OpenAIRE

    SUN, Xue-Zhi; TAKAHASHI, Sentaro; GUI, Chun; ZHANG, Rui; KOGA, Kazuo; NOUYE, Minoru; MURATA, Yoshiharu

    2002-01-01

    Neuronal cell migration is one of the most significant features of cortical development. After their final mitosis, neurons migrate from the ventricular zone into the cortical plate, then establish the neuronal laminae and settle onto the outermost layer, forming an "inside-out" gradient of maturation. Neuronal migration is guided by radial glial fibers, needs proper receptors, ligands, and other as-yet-unknown extracellular factors, and requires local signaling (e.g. some emitted by the Cajal-Retz...

  16. Japanese Migration and the Americas: An Introduction to the Study of Migration.

    Science.gov (United States)

    Mukai, Gary; Brunette, Rachel

    This curriculum module introduces students to the study of migration, including a brief overview of some categories of migration and reasons why people migrate. As a case study, the module uses the Japanese migration experience in the United States, Peru, Brazil, Canada, Mexico, Argentina, Bolivia, and Paraguay. The module introduces students to…

  17. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  18. OER Approach for Specific Student Groups in Hardware-Based Courses

    Science.gov (United States)

    Ackovska, Nevena; Ristov, Sasko

    2014-01-01

    Hardware-based courses in computer science studies require much effort from both students and teachers. The most important part of students' learning is attending in person and actively working on laboratory exercises on hardware equipment. This paper deals with a specific group of students, those who are marginalized by not being able to…

  19. 49 CFR 238.105 - Train electronic hardware and software safety.

    Science.gov (United States)

    2010-10-01

    ... and software system safety as part of the pre-revenue service testing of the equipment. (d)(1... safely by initiating a full service brake application in the event of a hardware or software failure that... 49 Transportation 4 2010-10-01 2010-10-01 false Train electronic hardware and software safety. 238...

  20. [Internal migration studies].

    Science.gov (United States)

    Stpiczynski, T

    1986-10-01

    Recent research on internal migration in Poland is reviewed. The basic sources of data, consisting of censuses or surveys, are first described. The author discusses the relationship between migration studies and other sectors of the national economy, and particularly the relationship between migration and income.

  1. The apparent and effective chloride migration coefficients obtained in migration tests

    NARCIS (Netherlands)

    Spiesz, P.R.; Brouwers, H.J.H.

    2013-01-01

    The apparent (Dapp) and effective (Deff) migration coefficients obtained in chloride migration tests are investigated in this study. The presented Dapp profiles in concrete show that the apparent migration coefficient is strongly concentration-dependent. As demonstrated, the binding of chlorides is

  2. Adaptive Incremental Genetic Algorithm for Task Scheduling in Cloud Environments

    Directory of Open Access Journals (Sweden)

    Kairong Duan

    2018-05-01

    Full Text Available Cloud computing is a new commercial model that enables customers to acquire large amounts of virtual resources on demand. Resources including hardware and software can be delivered as services and measured by specific usage of storage, processing, bandwidth, etc. In Cloud computing, task scheduling is a process of mapping cloud tasks to Virtual Machines (VMs). When binding the tasks to VMs, the scheduling strategy has an important influence on the efficiency of the datacenter and the related energy consumption. Although many traditional scheduling algorithms have been applied on various platforms, they may not work efficiently due to the large number of user requests, the variety of computation resources, and the complexity of the Cloud environment. In this paper, we tackle the task scheduling problem, which aims to minimize makespan, with a Genetic Algorithm (GA). We propose an incremental GA with adaptive probabilities of crossover and mutation. The mutation and crossover rates change with the generations and also vary between individuals. Large numbers of tasks are randomly generated to simulate various scales of the task scheduling problem in a Cloud environment. Based on the instance types of Amazon EC2, we implemented virtual machines with different computing capacities on CloudSim. We compared the performance of the adaptive incremental GA with that of Standard GA, Min-Min, Max-Min, Simulated Annealing and the Artificial Bee Colony Algorithm in finding the optimal scheme. Experimental results show that the proposed algorithm can achieve feasible solutions with an acceptable makespan in less computation time.
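    As a rough illustration of the kind of adaptive GA the abstract describes (not the authors' algorithm: the rate schedules, operators, and parameter values below are invented for this sketch), a task-to-VM mapping that minimizes makespan could look like:

```python
import random

def makespan(assignment, task_lens, vm_speeds):
    # Completion time of the busiest VM under a task -> VM mapping
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        loads[vm] += task_lens[task] / vm_speeds[vm]
    return max(loads)

def adaptive_ga(task_lens, vm_speeds, pop_size=40, gens=200, seed=1):
    rng = random.Random(seed)
    n, m = len(task_lens), len(vm_speeds)
    fit = lambda ind: makespan(ind, task_lens, vm_speeds)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=fit)
    for g in range(gens):
        # Rates decay with generation: explore early, exploit late
        pc = 0.9 - 0.5 * g / gens       # crossover probability
        pm = 0.2 - 0.15 * g / gens      # per-gene mutation probability
        nxt = [best[:]]                 # elitism: keep the best individual
        while len(nxt) < pop_size:
            # Binary tournament selection for both parents
            p1 = min(rng.sample(pop, 2), key=fit)
            p2 = min(rng.sample(pop, 2), key=fit)
            child = p1[:]
            if rng.random() < pc:       # one-point crossover
                cut = rng.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            # Mutation: reassign a gene (task) to a random VM
            child = [rng.randrange(m) if rng.random() < pm else gene
                     for gene in child]
            nxt.append(child)
        pop = nxt
        best = min([best] + pop, key=fit)
    return best, fit(best)
```

    For example, with `task_lens = [4, 8, 2, 6]` and `vm_speeds = [1, 2]`, the returned makespan is bounded below by total work over combined speed (20/3) and the GA quickly approaches it.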

  3. Optimized hardware design for the divertor remote handling control system

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, Hannu [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland)], E-mail: hannu.saarinen@tut.fi; Tiitinen, Juha; Aha, Liisa; Muhammad, Ali; Mattila, Jouni; Siuko, Mikko; Vilenius, Matti [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland); Jaervenpaeae, Jorma [VTT Systems Engineering, Tekniikankatu 1, 33720 Tampere (Finland); Irving, Mike; Damiani, Carlo; Semeraro, Luigi [Fusion for Energy, Josep Pla 2, Torres Diagonal Litoral B3, 08019 Barcelona (Spain)

    2009-06-15

    A key ITER maintenance activity is the exchange of the divertor cassettes. One of the major focuses of the EU Remote Handling (RH) programme has been the study and development of the remote handling equipment necessary for divertor exchange. The current major step in this programme involves the construction of a full scale physical test facility, namely DTP2 (Divertor Test Platform 2), in which to demonstrate and refine the RH equipment designs for ITER using prototypes. The major objective of the DTP2 project is proof-of-concept studies of various RH devices, but it is also important to define principles for standardizing control hardware and methods across the ITER maintenance equipment. This paper focuses on describing the control system hardware design optimization that is taking place at DTP2. Here there will be two RH movers, namely the Cassette Multifunctional Mover (CMM) and the Cassette Toroidal Mover (CTM), with assisting water hydraulic force feedback manipulators (WHMAN) located aboard each mover. The idea here is to use common Real Time Operating Systems (RTOS), measurement and control IO-cards etc. for all maintenance devices and to standardize sensors and control components as much as possible. In this paper, the new optimized DTP2 control system hardware design and some initial experimentation with the new DTP2 RH control system platform are presented. The proposed new approach is able to fulfil the functional requirements for both Mover and Manipulator control systems. Since the new control system hardware design has a reduced architecture, there are a number of benefits compared to the old approach. The simplified hardware solution enables the use of a single software development environment and a single communication protocol. This will result in easier maintainability of the software and hardware, less dependence on trained personnel, easier training of operators, and hence reduced development costs for ITER RH.

  4. Security challenges and opportunities in adaptive and reconfigurable hardware

    OpenAIRE

    Costan, Victor Marius; Devadas, Srinivas

    2011-01-01

    We present a novel approach to building hardware support for providing strong security guarantees for computations running in the cloud (shared hardware in massive data centers), while maintaining the high performance and low cost that make cloud computing attractive in the first place. We propose augmenting regular cloud servers with a Trusted Computation Base (TCB) that can securely perform high-performance computations. Our TCB achieves cost savings by spreading functionality across two pa...

  5. Review of Maxillofacial Hardware Complications and Indications for Salvage

    OpenAIRE

    Hernandez Rosa, Jonatan; Villanueva, Nathaniel L.; Sanati-Mehrizy, Paymon; Factor, Stephanie H.; Taub, Peter J.

    2015-01-01

    From 2002 to 2006, more than 117,000 facial fractures were recorded in the U.S. National Trauma Database. These fractures are commonly treated with open reduction and internal fixation. While in place, the hardware facilitates successful bony union. However, when postoperative complications occur, the plates may require removal before bony union. Indications for salvage versus removal of the maxillofacial hardware are not well defined. A literature review was performed to identify instances w...

  6. System-level protection and hardware Trojan detection using weighted voting.

    Science.gov (United States)

    Amin, Hany A M; Alkabani, Yousra; Selim, Gamal M I

    2014-07-01

    The problem of hardware Trojans is becoming more serious, especially with the spread of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research detects Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially as there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the outputs of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead than a simple voting technique achieving the same degree of security.
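    A minimal sketch of the trust-building idea, assuming hypothetical implementations and an invented weight-update rule (the paper's actual weighting scheme may differ): outputs from several untrusted implementations of the same core are compared by weighted vote, and implementations that disagree with the consensus lose trust.

```python
from collections import defaultdict

def weighted_vote(outputs, weights):
    # Pick the output value with the largest total trust weight behind it
    score = defaultdict(float)
    for out, w in zip(outputs, weights):
        score[out] += w
    return max(score, key=score.get)

def run_trust_building(impls, test_vectors, gain=1.1, penalty=0.5):
    # impls: callables standing in for untrusted implementations of one IP core
    weights = [1.0] * len(impls)
    for vec in test_vectors:
        outputs = [f(vec) for f in impls]
        consensus = weighted_vote(outputs, weights)
        # Reward implementations agreeing with the consensus, penalize the rest
        weights = [w * (gain if out == consensus else penalty)
                   for w, out in zip(weights, outputs)]
    return weights

# Three "implementations" of a 4-bit adder; the third hides a Trojan
good1 = lambda ab: (ab[0] + ab[1]) & 0xF
good2 = lambda ab: (ab[1] + ab[0]) & 0xF
trojaned = lambda ab: ((ab[0] + ab[1]) & 0xF) if ab != (9, 9) else 0  # trigger

vectors = [(a, b) for a in range(16) for b in range(16)]
w = run_trust_building([good1, good2, trojaned], vectors)
# After the trigger vector (9, 9), the trojaned core's weight falls behind
```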

  7. System-level protection and hardware Trojan detection using weighted voting

    Directory of Open Access Journals (Sweden)

    Hany A.M. Amin

    2014-07-01

    Full Text Available The problem of hardware Trojans is becoming more serious, especially with the spread of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research detects Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially as there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the outputs of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead than a simple voting technique achieving the same degree of security.

  8. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    Science.gov (United States)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2015-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a SimulinkR library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
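    To illustrate the quantization effect in the feedback measurements mentioned above, here is a generic n-bit uniform quantizer of the kind a smart transducer's ADC applies to a sensor reading before it goes on the network (a hedged sketch; the C-MAPSS40k models' actual code is not shown in the abstract):

```python
def quantize(value, lo, hi, bits):
    # Map a continuous reading onto a (2**bits)-level uniform grid,
    # clamping values outside the transducer's measurement range
    levels = 2 ** bits - 1
    clamped = min(max(value, lo), hi)
    code = round((clamped - lo) / (hi - lo) * levels)  # integer ADC code
    return lo + code / levels * (hi - lo)              # reconstructed value
```

    The worst-case error is half a least-significant bit, i.e. (hi - lo) / (2 * (2**bits - 1)); this small, systematic difference is the kind of effect the paper reports in the distributed controller's tracking.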

  9. Katrina in Historical Context: Environment and Migration in the U.S.

    Science.gov (United States)

    Gutmann, Myron P; Field, Vincenzo

    2010-01-01

    The massive publicity surrounding the exodus of residents from New Orleans spurred by Hurricane Katrina has encouraged interest in the ways that past migration in the U.S. has been shaped by environmental factors. So has Timothy Egan's exciting book, The Worst Hard Time: The Untold Story of those who survived the Great American Dust Bowl. This paper places those dramatic stories into a much less exciting context, demonstrating that the kinds of environmental factors exemplified by Katrina and the Dust Bowl are dwarfed in importance and frequency by the other ways that environment has both impeded and assisted the forces of migration. We accomplish this goal by enumerating four types of environmental influence on migration in the U.S.: 1) environmental calamities, including floods, hurricanes, earthquakes, and tornadoes, 2) environmental hardships and their obverse, short-term environmental benefits, including both drought and short periods of favorable weather, 3) environmental amenities, including warmth, sun, and proximity to water or mountains, and 4) environmental barriers and their management, including heat, air conditioning, flood control, drainage, and irrigation. In U.S. history, all four of these have driven migration flows in one direction or another. Placing Katrina into this historical context is an important task, both because the environmental calamities of which Katrina is an example are relatively rare and have not had a wide impact, and because focusing on them diverts attention from the other kinds of environmental impacts, whose effect on migration may have been stronger and more persistent, though less dramatic.

  10. Migration and Singapore: implications for the Asia Pacific.

    Science.gov (United States)

    Low, L

    1994-01-01

    Regarding immigration and emigration issues, there is a need for greater and more detailed data collection, an emphasis on data on illegal as well as legal migration, an examination of the impact of direct foreign investment on job creation and new labor market entrants, and a strengthening of international conventions for protection of foreign workers. The Pacific Economic Cooperation Council (PECC), Human Resource Development Task Force, is the source for projections of labor demand and supply for 18 PECC countries in 1993 and 1994. These projections indicate labor shortages in 1993 in Canada, Taiwan, Hong Kong, Malaysia, Singapore, and Thailand. The greatest labor supplier will be China. Japan and Korea are expected to have economic downturns, which will increase excess labor. The extent of excess labor is estimated to be 8.01 million in 1993 and 12.43 in 1994. The nature of the calculations could potentially exaggerate existing demand. A variety of theories are used to determine the direction and flow of migration, capital, goods and services, and technology. Estimates of migration flows indicate an increase to 100 million migrants in 1992, or 1.8% of world population (35 million in Sub-Saharan Africa, 15 million in Asia and the Middle East, and almost 13 million in Western Europe and North America). The value of remittances is estimated at $66 million (US dollars), which is slightly less than the value of oil trade and exceeds the $46 million in foreign aid. It is hypothesized that wider spatial and income inequalities with expanding globalization will increase migration flows. The case of Singapore illustrates how manipulation of the labor market reduces potential problems. Immigration policy historically encouraged migration of skilled and professional workers. In 1990 foreign workers in Singapore constituted 12% of the labor force. Since 1982 a monthly foreign worker levy has been imposed. The levy is increased when needed in order to slow demand. In 1992…

  11. Reinventing US Internal Migration Studies in the Age of International Migration.

    Science.gov (United States)

    Ellis, Mark

    2012-03-01

    I argue that researchers have sidelined attention to issues raised by US internal migration as they shifted focus to the questions posed by the post-1960s rise in US immigration. In this paper, I offer some reasons about why immigration has garnered more attention and why there needs to be greater consideration of US internal migration and its significant and myriad social, economic, political, and cultural impacts. I offer three ideas for motivating more research into US internal geographic mobility that would foreground its empirical and conceptual connections to international migration. First, there should be more work on linked migration systems investigating the connections between internal and international flows. Second, the questions asked about immigrant social, cultural, and economic impacts and adaptations in host societies should also be asked about internal migrants. Third, and more generally, migration researchers should jettison the assumption that the national scale is the pre-eminent delimiter of migration types and processes. Some groups can move easily across borders; others are constrained in their moves within countries. These subnational scales and constraints will become more visible if migration research decentres the national from its theory and empirics.

  12. MANAGING MIGRATION: TURKISH PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    İhsan GÜLAY

    2018-04-01

    Full Text Available Conducting migration studies is of vital importance to Turkey, a country which has been experiencing migration throughout history due to its “open doors policy”. The objective of this study is to evaluate the strategic management of migration in Turkey in order to deal with the issue of migration. The main focus of the study is Syrian migrants who sought refuge in Turkey due to the civil war that broke out in their country in April 2011. This study demonstrates the policies and processes followed by Turkey for Syrian migration flow in terms of the social acceptance and harmonisation of the migrants within a democratic environment. The study addresses some statistical facts and issues related to Syrian migration as it has become an integral part of daily life in Turkey. The study also reviews how human rights are protected in the migration process. The study will provide insights for developing sound strategic management policies for the migration issue.

  13. Test Program for Stirling Radioisotope Generator Hardware at NASA Glenn Research Center

    Science.gov (United States)

    Lewandowski, Edward J.; Bolotin, Gary S.; Oriti, Salvatore M.

    2015-01-01

    Stirling-based energy conversion technology has demonstrated the potential of high efficiency and low mass power systems for future space missions. This capability is beneficial, if not essential, to making certain deep space missions possible. Significant progress was made developing the Advanced Stirling Radioisotope Generator (ASRG), a 140-W radioisotope power system. A variety of flight-like hardware, including Stirling convertors, controllers, and housings, was designed and built under the ASRG flight development project. To support future Stirling-based power system development, NASA has proposals that, if funded, will allow this hardware to go on test at the NASA Glenn Research Center. While future flight hardware may not be identical to the hardware developed under the ASRG flight development project, many components will likely be similar, and system architectures may have heritage to ASRG. Thus, the importance of testing the ASRG hardware to the development of future Stirling-based power systems cannot be overstated. This proposed testing will include performance testing, extended operation to establish an extensive reliability database, and characterization testing to quantify subsystem and system performance and better understand system interfaces. This paper details this proposed test program for Stirling radioisotope generator hardware at NASA Glenn. It explains the rationale behind the proposed tests and how these tests will meet the stated objectives.

  14. Hardware and layout aspects affecting maintainability

    International Nuclear Information System (INIS)

    Jayaraman, V.N.; Surendar, Ch.

    1977-01-01

    It has been found from maintenance experience at the Rajasthan Atomic Power Station that proper hardware and instrumentation layout can reduce maintenance and down-time on the related equipment. The problems faced in this connection, and how they were solved, are narrated. (M.G.B.)

  15. Building Correlators with Many-Core Hardware

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.

    2010-01-01

    Radio telescopes typically consist of multiple receivers whose signals are cross-correlated to filter out noise. A recent trend is to correlate in software instead of custom-built hardware, taking advantage of the flexibility that software solutions offer. Examples include e-VLBI and LOFAR. However,

  16. Migration chemistry

    International Nuclear Information System (INIS)

    Carlsen, L.

    1992-05-01

    Migration chemistry, the influence of chemical, biochemical and physico-chemical reactions on the migration behaviour of pollutants in the environment, is an interplay between the actual nature of the pollutant and the characteristics of the environment, such as pH, redox conditions and organic matter content. The wide selection of possible pollutants in combination with varying geological media, as well as the operation of different chemical, biochemical and physico-chemical reactions, complicates the prediction of the influence of these processes on the mobility of pollutants. The report summarizes a wide range of potential pollutants in the terrestrial environment as well as a variety of chemical, biochemical and physico-chemical reactions which can be expected to influence migration behaviour, comprising diffusion, dispersion, convection, sorption/desorption, precipitation/dissolution, transformations/degradations, biochemical reactions and complex formation. The latter comprises the complexation of metal ions, as well as of non-polar organics, to naturally occurring organic macromolecules. The influence of each type of process on migration is elucidated through theoretical studies. The influence of chemical, biochemical and physico-chemical reactions on migration behaviour is unambiguous, as these processes apparently control the transport of pollutants in the terrestrial environment. As the simple, conventional K_D concept breaks down, it is suggested that the migration process be described in terms of the alternative concepts of chemical dispersion, average elution time and effective retention. (AB) (134 refs.)
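    For context, the "simple, conventional K_D concept" the report says breaks down is the linear-sorption model, in which a constant distribution coefficient K_D gives a constant retardation factor R = 1 + (rho_b / theta) * K_D. A minimal sketch in Python (the function names and example values are illustrative, not from the report):

```python
def retardation_factor(kd_L_per_kg, bulk_density_kg_per_L, porosity):
    # Classic linear-sorption model: R = 1 + (rho_b / theta) * Kd
    return 1.0 + (bulk_density_kg_per_L / porosity) * kd_L_per_kg

def pollutant_velocity(water_velocity, retardation):
    # Under this model a sorbing pollutant moves R times slower than the water
    return water_velocity / retardation

# Example: Kd = 2 L/kg, bulk density 1.5 kg/L, porosity 0.3  ->  R = 11
R = retardation_factor(2.0, 1.5, 0.3)
```

    The concentration dependence and kinetic effects discussed in the report are precisely what this constant-R picture cannot capture.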

  17. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  18. Development of Soft-Hardware Platform for Training System Design of Electrotechnical Complexes and Electric Drives

    Directory of Open Access Journals (Sweden)

    Koltunova Ekaterina A.

    2017-01-01

    Full Text Available The article presents the results of the development of a software and hardware platform as equipment for teaching children and young people practical robotics skills, allowing them in the future to apply this knowledge in practice by implementing home automation systems. We consider the shortcomings of existing solutions. The main difference of the proposed platform is the integration of its boards and extension modules into a single set via connectors, together with the ability to attach third-party components from different manufacturers, without limiting users. In addition, a simplified approach based on visual object-oriented programming allows users to engage in the work immediately. Prepared lessons and tasks in a game style simplify the material and show how one or another technical solution can be applied.

  19. Open source hardware and software platform for robotics and artificial intelligence applications

    Science.gov (United States)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of the open source hardware and software systems created a flexible and expandable platform for further developments in the future, both in software and hardware, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware, such as legged or humanoid robots.

  20. Open source hardware and software platform for robotics and artificial intelligence applications

    International Nuclear Information System (INIS)

    Liang, S Ng; Tan, K O; Clement, T H Lai; Ng, S K; Mohammed, A H Ali; Mailah, Musa; Yussof, Wan Azhar; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-01-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of the open source hardware and software systems created a flexible and expandable platform for further developments in the future, both in software and hardware, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware, such as legged or humanoid robots. (paper)

  1. X-Window for process control in a mixed hardware environment

    International Nuclear Information System (INIS)

    Clausen, M.; Rehlich, K.

    1992-01-01

    X-Window is a common standard for display purposes on current workstations. The possibility of creating more than one window on a single screen enables operators to gain more information about the process. Multiple windows from different control systems using mixed hardware is one of the problems this paper describes. Experience shows that X-Window is a standard by definition, but not in every case. It is, however, an excellent tool for separating data acquisition and display from each other over long distances, using different types of hardware and software for communication and display. Our experience with X-Window displays for the cryogenic control system and the vacuum control system at HERA on DEC and SUN hardware is described. (author)

  2. Plutonium Protection System (PPS). Volume 2. Hardware description. Final report

    International Nuclear Information System (INIS)

    Miyoshi, D.S.

    1979-05-01

    The Plutonium Protection System (PPS) is an integrated safeguards system developed by Sandia Laboratories for the Department of Energy, Office of Safeguards and Security. The system is designed to demonstrate and test concepts for the improved safeguarding of plutonium. Volume 2 of the PPS final report describes the hardware elements of the system. The major areas containing hardware elements are the vault, where plutonium is stored, the packaging room, where plutonium is packaged into Container Modules, the Security Operations Center, which controls movement of personnel, the Material Accountability Center, which maintains the system data base, and the Material Operations Center, which monitors the operating procedures in the system. References are made to documents in which details of the hardware items can be found

  3. Current trends in hardware and software for brain-computer interfaces (BCIs).

    Science.gov (United States)

    Brunner, P; Bianchi, L; Guger, C; Cincotti, F; Schalk, G

    2011-04-01

    A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.

  4. Current trends in hardware and software for brain-computer interfaces (BCIs)

    Science.gov (United States)

    Brunner, P.; Bianchi, L.; Guger, C.; Cincotti, F.; Schalk, G.

    2011-04-01

    A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.

  5. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially-designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around a 600-fold speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalty.
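The warping-with-summing formulation can be sketched in a few lines of NumPy. The function below is an illustrative unfiltered backprojector (FBP's ramp-filtering step is omitted) rather than the paper's texture-mapping implementation; the name and parameters are illustrative:

```python
import numpy as np

def backproject(sinogram, angles_deg):
    """Unfiltered backprojection: smear each 1-D projection back
    across the image plane at its acquisition angle and accumulate.
    This per-view lookup-and-sum is what texture-mapping hardware
    performs as an image warp."""
    n = sinogram.shape[1]
    # pixel coordinates centred on the image
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # detector coordinate of every pixel for this view
        t = xs * np.cos(theta) + ys * np.sin(theta) + (n - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += proj[idx]          # "texture lookup" + accumulate
    return recon / len(angles_deg)
```

Backprojecting a sinogram whose every view peaks at the central detector bin reconstructs a blob centred in the image, which is a quick sanity check for the geometry.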

  6. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

The paper proposes a technique for the hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented towards implementation in a single field programmable gate array (FPGA). In MC the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. The traditional computation algorithms usually involve digital signal processors (DSPs), which entail a large number of power transistors (18 transistors and 18 independent PWM outputs) and "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular, since operations may be executed much faster and more efficiently due to the nature of digital devices (especially their concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, adequate arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or proper sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described in the Verilog hardware description language. The preliminary results of logic implementation oriented towards a Xilinx FPGA (in particular, a low-cost device from the Xilinx Artix-7 family) are also presented.
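The CORDIC idea (trigonometric functions computed from shift-and-add micro-rotations) can be sketched in floating point. A real FPGA design would use fixed-point arithmetic and a precomputed gain constant; the function below is illustrative only:

```python
import math

def cordic_sin_cos(angle, iterations=24):
    """Rotation-mode CORDIC: rotate the vector (1, 0) towards
    `angle` (radians, |angle| <= pi/2) using only the elementary
    micro-rotations atan(2^-i); in hardware the multiplies by
    2^-i are plain bit shifts."""
    # precomputed arctangent table and the accumulated CORDIC gain
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0   # steer the residual angle to 0
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * atans[i])
    return y * k, x * k               # (sin, cos)
```

With 24 iterations the result agrees with `math.sin`/`math.cos` to roughly single-precision accuracy, which is why the method suits FPGA datapaths that lack hardware multipliers for trigonometry.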

  7. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
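The edge-restriction idea can be sketched in NumPy: compute SAD-based disparities only where a simple gradient test marks an edge, skipping the rest of the search space. The function name, the one-line edge detector and all parameters are illustrative, not the paper's architecture:

```python
import numpy as np

def edge_sad_disparity(left, right, max_disp=16, win=3, edge_thresh=20):
    """SAD stereo matching restricted to edge pixels of the left
    image: non-edge pixels are skipped entirely, shrinking the
    search space exactly as the edge-directed approach intends."""
    h, w = left.shape
    # crude edge mask: horizontal gradient magnitude above a threshold
    grad = np.abs(np.diff(left.astype(float), axis=1, prepend=left[:, :1]))
    edges = grad > edge_thresh
    half = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            if not edges[y, x]:
                continue                      # skip non-edge pixels
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1,
                             x-d-half:x-d+half+1].astype(float)
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left shifted by a known amount, the recovered disparity at edge pixels equals that shift, which makes the sketch easy to validate.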

  8. Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.

    Science.gov (United States)

    Smyth, Rachael E; Oram Cardy, Janis; Purcell, David

    2017-06-01

This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer-grade digital camera. A visual inspection time task was recorded using short high-speed video clips, and the timing as reported by the task's program was compared to the timing recorded in the video clips. Discrepancies between these two timing reports were investigated further and, based on display refresh rate, a decision was made as to whether the discrepancy was large enough to affect the results reported by the task. In this particular study, the timing errors were not large enough to impact the results. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing of any software program on any operating system and display.

  9. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a ‘kill switch’ to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  10. Countering inbreeding with migration 1. Migration from unrelated ...

    African Journals Online (AJOL)

Received 6 October 1991; accepted 18 May 1995. The effect of migration on inbreeding is modelled for small populations with immigrants from a large unrelated population. Different migration rates and numbers for the two sexes are assumed, and a general recursion equation for inbreeding progress derived, which can ...

  11. Utilizing IXP1200 hardware and software for packet filtering

    OpenAIRE

    Lindholm, Jeffery L.

    2004-01-01

    As network processors have advanced in speed and efficiency they have become more and more complex in both hardware and software configurations. Intel's IXP1200 is one of these new network processors that has been given to different universities worldwide to conduct research on. The goal of this thesis is to take the first step in starting that research by providing a stable system that can provide a reliable platform for further research. This thesis introduces the fundamental hardware of In...

  12. The hardware control system for WEAVE at the William Herschel telescope

    NARCIS (Netherlands)

    Delgado Hernandez, Jose M.; Rodríguez-Ramos, Luis F.; Cano Infantes, Diego; Martin, Carlos; Bevil, Craige; Picó, Sergio; Dee, Kevin M.; Abrams, Don Carlos; Lewis, Ian J.; Pragt, Johan; Stuik, Remko; Tromp, Niels; Dalton, Gavin; L. Aguerri, J. Alfonso; Bonifacio, Piercarlo; Middleton, Kevin F.; Trager, Scott C.

    This work describes the hardware control system of the Prime Focus Corrector (PFC) and the Spectrograph, two of the main parts of WEAVE, a multi-object fiber spectrograph for the WHT Telescope. The PFC and Spectrograph control system hardware is based on the Allen Bradley's Programmable Automation

  13. The Globalisation of migration

    Directory of Open Access Journals (Sweden)

    Milan Mesić

    2002-04-01

Full Text Available The paper demonstrates that contemporary international migration is a constitutive part of the globalisation process. After defining the concepts of globalisation and the globalisation of migration, the author discusses six key themes linking globalisation and international migration (“global cities”; the scale of migration; diversification of migration flows; globalisation of science and education; international migration and citizenship; emigrant communities and new identities). First, in accordance with Saskia Sassen’s analysis, the author rejects the wide-spread notion that unqualified migrants have lost an (important) role in »global cities«, i.e. in the centres of the new (global) economy. Namely, the post-modern service sector cannot function without the support of a wide range of auxiliary unqualified workers. Second, a critical comparison with traditional overseas mass migration to the USA at the turn of the 19th and 20th centuries indicates that present international migration is, perhaps, less extensive; however, it is important to take into consideration various limitations that previously did not exist, and thus the present migration potential is in reality greater. Third, globalisation is more evident in a diversification of the forms of migration: the source area of migrants to the New World and Europe has expanded to include new regions of the world; new immigration areas have arisen (the Middle East, new industrial countries of the Far East, South Europe); intra-regional migration has intensified. Fourth, globalisation is linked to an increased migration of experts, and the pessimistic notion of a brain drain has been replaced by the optimistic idea of a brain gain. Fifth, contemporary international migration has been associated with a crisis of the national model of citizenship. Sixth, the interlinking of (migrant) cultural communities regardless of distance and the physical proximity of cultural centres (the

  14. A Message-Passing Hardware/Software Cosimulation Environment for Reconfigurable Computing Systems

    Directory of Open Access Journals (Sweden)

    Manuel Saldaña

    2009-01-01

    Full Text Available High-performance reconfigurable computers (HPRCs provide a mix of standard processors and FPGAs to collectively accelerate applications. This introduces new design challenges, such as the need for portable programming models across HPRCs and system-level verification tools. To address the need for cosimulating a complete heterogeneous application using both software and hardware in an HPRC, we have created a tool called the Message-passing Simulation Framework (MSF. We have used it to simulate and develop an interface enabling an MPI-based approach to exchange data between X86 processors and hardware engines inside FPGAs. The MSF can also be used as an application development tool that enables multiple FPGAs in simulation to exchange messages amongst themselves and with X86 processors. As an example, we simulate a LINPACK benchmark hardware core using an Intel-FSB-Xilinx-FPGA platform to quickly prototype the hardware, to test the communications. and to verify the benchmark results.
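The exchange pattern the MSF enables can be caricatured in a few lines; here plain threads and queues stand in for the real MPI transport and the simulated FPGA engine, and all names are illustrative rather than the framework's actual API:

```python
import queue
import threading

def mpi_pair():
    """Toy sketch of an X86 rank exchanging MPI-style messages with
    a hardware-engine rank; queues stand in for the MPI/FPGA link."""
    to_engine, to_host = queue.Queue(), queue.Queue()

    def x86_rank():
        to_engine.put(("compute", [1.0, 2.0, 3.0]))   # MPI_Send
        return to_host.get(timeout=5)                  # MPI_Recv

    def fpga_engine_rank():
        tag, data = to_engine.get(timeout=5)
        # the "hardware engine" doubles each element and replies
        to_host.put((tag, [2 * v for v in data]))

    t = threading.Thread(target=fpga_engine_rank)
    t.start()
    result = x86_rank()
    t.join()
    return result
```

The point of the pattern is that neither side knows whether its peer is software or an FPGA engine: only the message interface is shared, which is what makes cosimulation and later hardware substitution possible.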

  15. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.

    Science.gov (United States)

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2009-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.
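The simulator-independence idea can be sketched abstractly: one experiment description that runs unchanged on interchangeable backends. The class and method names below are illustrative, not the actual interface described in the paper:

```python
class Backend:
    """Common interface shared by software simulators and the
    neuromorphic hardware system."""
    def run(self, populations, duration_ms):
        raise NotImplementedError

class SoftwareSim(Backend):
    def run(self, populations, duration_ms):
        return {name: f"simulated {n} neurons for {duration_ms} ms"
                for name, n in populations.items()}

class HardwareSystem(Backend):
    def run(self, populations, duration_ms):
        # accelerated hardware: same interface, different engine
        return {name: f"emulated {n} neurons for {duration_ms} ms"
                for name, n in populations.items()}

def experiment(backend: Backend):
    """One experiment description, portable across backends."""
    populations = {"excitatory": 80, "inhibitory": 20}
    return backend.run(populations, duration_ms=1000)
```

Because `experiment` never mentions a concrete backend, the same description can be executed on a simulator and on the hardware and the results compared quantitatively, which is the portability argument the abstract makes.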

  16. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with a flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W.
The reconfigurability (by means of reprogramming) of
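As a rough illustration of the wavelet-transform stage that such FPGA modules pipeline, a single level of a 1-D Haar transform can be sketched (ICER-3D's actual filters differ; this is illustrative only):

```python
import numpy as np

def haar_1d(signal):
    """One level of a 1-D Haar wavelet transform: split the signal
    into low-pass averages and high-pass differences, the basic
    decorrelating step a hardware wavelet module parallelizes."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / 2.0   # low-pass averages
    detail = (s[0::2] - s[1::2]) / 2.0   # high-pass differences
    return approx, detail
```

Because each output pair depends only on two adjacent inputs, many such butterflies can run concurrently in hardware, which is one reason wavelet transforms map well onto FPGAs.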

  17. Skylab task and work performance /Experiment M-151 - Time and motion study/

    Science.gov (United States)

    Kubis, J. F.; Mclaughlin, E. J.

    1975-01-01

    The primary objective of Experiment M151 was to study the inflight adaptation of Skylab crewmen to a variety of task situations involving different types of activity. A parallel objective was to examine astronaut inflight performance for any behavioral stress effects associated with the working and living conditions of the Skylab environment. Training data provided the basis for comparison of preflight and inflight performance. Efficiency was evaluated through the adaptation function, namely, the relation of performance time over task trials. The results indicate that the initial changeover from preflight to inflight was accompanied by a substantial increase in performance time for most work and task activities. Equally important was the finding that crewmen adjusted rapidly to the weightless environment and became proficient in developing techniques with which to optimize task performance. By the end of the second inflight trial, most of the activities were performed almost as efficiently as on the last preflight trial. The analysis demonstrated the sensitivity of the adaptation function to differences in task and hardware configurations. The function was found to be more regular and less variable inflight than preflight. Translation and control of masses were accomplished easily and efficiently through the rapid development of the arms and legs as subtle guidance and restraint systems.

  18. Migration and AIDS.

    Science.gov (United States)

    1998-01-01

This article presents the perspectives of UNAIDS and the International Organization for Migration (IOM) on migration and HIV/AIDS. It identifies research and action priorities and policy issues, and describes the current situation in major regions of the world. Migration is a process. Movement is enhanced by air transport, rising international trade, deregulation of trade practices, and opening of borders. Movements are restricted by laws and statutes. Denial of the right to circulate freely and to obtain asylum is associated with vulnerability to HIV infection. A UNAIDS policy paper in 1997 and IOM policy guidelines in 1988 affirm that refugees and asylum seekers should not be targeted for special measures due to HIV/AIDS. There is an urgent need to provide primary health services for migrants, voluntary counseling and testing, and more favorable conditions. Research is needed on the role of migration in the spread of HIV, the extent of migration, availability of health services, and options for HIV prevention. Research must be action-oriented and focused on vulnerability to HIV and risk-taking behavior. There is substantial mobility in West and Central Africa, economic migration in South Africa, and nonvoluntary migration in Angola. Sex workers in southeast Asia contribute to the spread. The breakup of the USSR led to population shifts. Migrants in Central America and Mexico move north to the US, where HIV prevalence is higher.

  19. The role of migration-specific and migration-relevant policies in migrant decision-making in transit

    NARCIS (Netherlands)

    Kuschminder - de Guerre, Katie; Koser, Khalid

    2017-01-01

    This paper examines the role of migration-specific and migration-relevant policies in migrant decision-making factors for onwards migration or stay in Greece and Turkey. In this paper we distinguish migration-specific policies from migration-relevant policies in transit and destination countries,

  20. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    Science.gov (United States)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  1. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system

    Directory of Open Access Journals (Sweden)

    Daniel Brüderle

    2009-06-01

    Full Text Available Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.

  2. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on github, you do not ask yourself if somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  3. Integrated conception of hardware/software mixed systems used in nuclear instrumentation

    International Nuclear Information System (INIS)

    Dias, Ailton F.; Sorel, Yves; Akil, Mohamed

    2002-01-01

Hardware/software codesign addresses the design of systems composed of a hardware portion, with specific components, and a software portion, with a microprocessor-based architecture. This paper describes the Algorithm Architecture Adequation (AAA) design methodology, originally oriented to programmable multicomponent architectures, its extension to reconfigurable circuits, and its application to the design and development of nuclear instrumentation systems composed of programmable and configurable circuits. The AAA methodology uses a unified model, based on graph theory, to describe the algorithm, architecture and implementation. The great advantage of the AAA methodology is the use of the same model from specification to implementation of hardware/software systems, reducing complexity and design time. (author)

  4. Partial diel migration: A facultative migration underpinned by long-term inter-individual variation.

    Science.gov (United States)

    Harrison, Philip M; Gutowsky, Lee F G; Martins, Eduardo G; Patterson, David A; Cooke, Steven J; Power, Michael

    2017-09-01

    The variations in migration that comprise partial diel migrations, putatively occur entirely as a consequence of behavioural flexibility. However, seasonal partial migrations are increasingly recognised to be mediated by a combination of reversible plasticity in response to environmental variation and individual variation due to genetic and environmental effects. Here, we test the hypothesis that while partial diel migration heterogeneity occurs primarily due to short-term within-individual flexibility in behaviour, long-term individual differences in migratory behaviour also underpin this migration variation. Specifically, we use a hierarchical behavioural reaction norm approach to partition within- and among-individual variation in depth use and diel plasticity in depth use, across short- and long-term time-scales, in a group of 47 burbot (Lota lota) tagged with depth-sensing acoustic telemetry transmitters. We found that within-individual variation at the among-dates-within-seasons and among-seasons scale, explained the dominant proportion of phenotypic variation. However, individuals also repeatedly differed in their expression of migration behaviour over the 2 year study duration. These results reveal that diel migration variation occurs primarily due to short-term within-individual flexibility in depth use and diel migration behaviour. However, repeatable individual differences also played a key role in mediating partial diel migration. These findings represent a significant advancement of our understanding of the mechanisms generating the important, yet poorly understood phenomena of partial diel migration. Moreover, given the pervasive occurrence of diel migrations across aquatic taxa, these findings indicate that individual differences have an important, yet previously unacknowledged role in structuring the temporal and vertical dynamics of aquatic ecosystems. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.

  5. Combining environment-driven adaptation and task-driven optimisation in evolutionary robotics.

    Science.gov (United States)

    Haasdijk, Evert; Bredeche, Nicolas; Eiben, A E

    2014-01-01

    Embodied evolutionary robotics is a sub-field of evolutionary robotics that employs evolutionary algorithms on the robotic hardware itself, during the operational period, i.e., in an on-line fashion. This enables robotic systems that continuously adapt, and are therefore capable of (re-)adjusting themselves to previously unknown or dynamically changing conditions autonomously, without human oversight. This paper addresses one of the major challenges that such systems face, viz. that the robots must satisfy two sets of requirements. Firstly, they must continue to operate reliably in their environment (viability), and secondly they must competently perform user-specified tasks (usefulness). The solution we propose exploits the fact that evolutionary methods have two basic selection mechanisms-survivor selection and parent selection. This allows evolution to tackle the two sets of requirements separately: survivor selection is driven by the environment and parent selection is based on task-performance. This idea is elaborated in the Multi-Objective aNd open-Ended Evolution (monee) framework, which we experimentally validate. Experiments with robotic swarms of 100 simulated e-pucks show that monee does indeed promote task-driven behaviour without compromising environmental adaptation. We also investigate an extension of the parent selection process with a 'market mechanism' that can ensure equitable distribution of effort over multiple tasks, a particularly pressing issue if the environment promotes specialisation in single tasks.
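The split between environment-driven survivor selection and task-driven parent selection can be sketched with a toy evolutionary loop. The genome, fitness functions and parameters below are illustrative inventions, not monee itself:

```python
import random

def evolve(pop_size=30, generations=40, seed=1):
    """Toy sketch of split selection: the environment decides who
    survives (viability), task performance decides who reproduces
    (usefulness). Genome is a single real number."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    viability = lambda g: abs(g) < 4.0          # environment: stay in bounds
    task_score = lambda g: -(g - 2.0) ** 2      # task: get close to 2.0
    for _ in range(generations):
        # survivor selection: the environment removes non-viable individuals
        pop = [g for g in pop if viability(g)]
        # parent selection: task performance decides who reproduces
        pop.sort(key=task_score, reverse=True)
        parents = pop[:max(2, len(pop) // 2)]
        pop = [rng.choice(parents) + rng.gauss(0, 0.1)
               for _ in range(pop_size)]
    return sum(pop) / len(pop)
```

Because the two requirements act through different selection mechanisms, the population converges on the task optimum while never leaving the viable region, mirroring the separation the abstract argues for.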

  6. Combining environment-driven adaptation and task-driven optimisation in evolutionary robotics.

    Directory of Open Access Journals (Sweden)

    Evert Haasdijk

Full Text Available Embodied evolutionary robotics is a sub-field of evolutionary robotics that employs evolutionary algorithms on the robotic hardware itself, during the operational period, i.e., in an on-line fashion. This enables robotic systems that continuously adapt, and are therefore capable of (re-)adjusting themselves to previously unknown or dynamically changing conditions autonomously, without human oversight. This paper addresses one of the major challenges that such systems face, viz. that the robots must satisfy two sets of requirements. Firstly, they must continue to operate reliably in their environment (viability), and secondly they must competently perform user-specified tasks (usefulness). The solution we propose exploits the fact that evolutionary methods have two basic selection mechanisms-survivor selection and parent selection. This allows evolution to tackle the two sets of requirements separately: survivor selection is driven by the environment and parent selection is based on task-performance. This idea is elaborated in the Multi-Objective aNd open-Ended Evolution (monee) framework, which we experimentally validate. Experiments with robotic swarms of 100 simulated e-pucks show that monee does indeed promote task-driven behaviour without compromising environmental adaptation. We also investigate an extension of the parent selection process with a 'market mechanism' that can ensure equitable distribution of effort over multiple tasks, a particularly pressing issue if the environment promotes specialisation in single tasks.

  7. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

The ground segment is one of the key components of any science space mission. Its functionality largely determines the scientific effectiveness of the experiment as a whole. Its distinctive feature (in contrast to the other information systems of scientific space projects) is the interaction between the researcher and the project information system needed to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground-segment software, and the use of modern technological solutions and approaches in this area will increase science return in general and provide a framework for the creation of new experiments. Data visualization mostly relies on 2D and 3D graphics, a practice shaped by the capabilities of traditional visualization tools. Stereo visualization methods are also actively used for some tasks, but their use is usually limited to areas such as virtual and augmented reality, remote-sensing data processing and the like. The low prevalence of stereo visualization methods in ground-segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly the stereo visualization of complex physical processes as well as mathematical abstractions and models. The article is concerned with an attempt to use this approach. It describes the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) to display datasets of onboard magnetospheric satellite measurements and to develop software for manual stereo matching.

  8. Influences of environmental cues, migration history and habitat familiarity on partial migration

    DEFF Research Database (Denmark)

    Skov, Christian; Aarestrup, Kim; Baktoft, Henrik

    2010-01-01

    The factors that drive partial migration in organisms are not fully understood. Roach (Rutilus rutilus), a freshwater fish, engage in partial migration where parts of populations switch between summer habitats in lakes and winter habitats in connected streams. To test if the partial migration trait...

  9. Calculator: A Hardware Design, Math and Software Programming Project Base Learning

    Directory of Open Access Journals (Sweden)

    F. Criado

    2015-03-01

    Full Text Available This paper presents the implementation by students of a complex calculator in hardware. The project meets hardware-design learning goals and also strongly motivates students to apply competences learned in other subjects. The learning process associated with system design is hard enough in itself, because the students have to deal with parallel execution, signal delay and synchronization. To strengthen their knowledge of hardware design, a project-based learning (PBL) methodology is therefore proposed; it is also used to reinforce cross-disciplinary subjects such as mathematics and software programming. This methodology creates a course dynamic closer to a professional environment, in which students work with software and mathematics to solve hardware-design problems. The students design the functionality of the calculator from scratch: they decide which mathematical operations it can perform, the operand format, and how a complex equation is entered into the calculator, which increases their intrinsic motivation. In addition, since these choices affect the reliability of the calculator, students are encouraged to program in software their decisions about how to implement the selected mathematical algorithms. Although mathematics and hardware design are two tough subjects for students, their perception at the end of the course is quite positive.
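
One software-side decision the students face, namely how an entered expression with operator precedence is turned into a result, can be prototyped before committing a design to hardware. As a hedged illustration (a generic shunting-yard evaluator, not the course's actual design), for the four basic operations:

```python
import re

def evaluate(expr):
    """Evaluate +, -, *, / with precedence and parentheses (shunting-yard).
    Unary minus and error handling are omitted for brevity."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    ops, out = [], []

    def apply(op):
        b, a = out.pop(), out.pop()
        out.append({'+': a + b, '-': a - b, '*': a * b, '/': a / b}[op])

    for tok in re.findall(r'\d+\.?\d*|[+\-*/()]', expr):
        if tok[0].isdigit():
            out.append(float(tok))
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':
                apply(ops.pop())
            ops.pop()  # discard '('
        else:
            # pop operators of equal or higher precedence first
            while ops and ops[-1] != '(' and prec[ops[-1]] >= prec[tok]:
                apply(ops.pop())
            ops.append(tok)
    while ops:
        apply(ops.pop())
    return out[0]
```

For example, `evaluate("2+3*4")` respects precedence and yields 14, while `evaluate("(2+3)*4")` yields 20.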

  10. A scheme to promote the world's economic development with migration.

    Science.gov (United States)

    Simon, J L

    1982-01-01

    Migration's social value is generally assessed from the perspective of the receiving country. At this time the policies of rich countries allow fewer immigrants to enter than would enter under a laissez faire policy, but this may be a suboptimal policy for the world population as a whole. Those who now migrate from poor to rich countries and those who are prevented from doing so by restrictive laws certainly believe that their economic situation would be improved by such migration. A scheme for improving the worldwide welfare while making appropriate allowance for the preferences of those in the rich countries is discussed. The scheme deals with the migration of poorly schooled and semiskilled people and not the well educated. As the rich countries will not voluntarily open their borders to such immigration, a change in the international system is suggested, giving some power of taxation to an international body. This body would then pay the rich countries to take in immigrants by holding an auction among the rich countries for the immigration contracts. The scheme is analyzed, and it is argued that in the long run this approach is not as politically impossible as it initially seems. The discussion reviews the problem, the system, and the possibilities (the power of migration, the mechanisms in migration's power to raise productivity, earning patterns of immigrant cohorts, and additional possible tests) and the relative effectiveness of migration versus education in the less developed countries. It seems reasonable to send the migrants to countries that want them or can be made to want them, and an auction is a device to determine who wants something relative to someone else. A Supranational Planning Authority (SPA) could hold an auction at which countries would submit the price (per migrant or per migratory family) at which they will undertake the task of relocating migrants. The conditions of the contract would be specified, including the kind and location of

  11. Conservation physiology of animal migration

    Science.gov (United States)

    Lennox, Robert J.; Chapman, Jacqueline M.; Souliere, Christopher M.; Tudorache, Christian; Wikelski, Martin; Metcalfe, Julian D.; Cooke, Steven J.

    2016-01-01

    Migration is a widespread phenomenon among many taxa. This complex behaviour enables animals to exploit many temporally productive and spatially discrete habitats to accrue various fitness benefits (e.g. growth, reproduction, predator avoidance). Human activities and global environmental change represent potential threats to migrating animals (from individuals to species), and research is underway to understand mechanisms that control migration and how migration responds to modern challenges. Focusing on behavioural and physiological aspects of migration can help to provide better understanding, management and conservation of migratory populations. Here, we highlight different physiological, behavioural and biomechanical aspects of animal migration that will help us to understand how migratory animals interact with current and future anthropogenic threats. We are in the early stages of a changing planet, and our understanding of how physiology is linked to the persistence of migratory animals is still developing; therefore, we regard the following questions as being central to the conservation physiology of animal migrations. Will climate change influence the energetic costs of migration? Will shifting temperatures change the annual clocks of migrating animals? Will anthropogenic influences have an effect on orientation during migration? Will increased anthropogenic alteration of migration stopover sites/migration corridors affect the stress physiology of migrating animals? Can physiological knowledge be used to identify strategies for facilitating the movement of animals? Our synthesis reveals that given the inherent challenges of migration, additional stressors derived from altered environments (e.g. climate change, physical habitat alteration, light pollution) or interaction with human infrastructure (e.g. wind or hydrokinetic turbines, dams) or activities (e.g. fisheries) could lead to long-term changes to migratory phenotypes. However, uncertainty remains

  12. Recovery Migration After Hurricanes Katrina and Rita: Spatial Concentration and Intensification in the Migration System.

    Science.gov (United States)

    Curtis, Katherine J; Fussell, Elizabeth; DeWaard, Jack

    2015-08-01

    Changes in the human migration systems of the Gulf of Mexico coastline counties affected by Hurricanes Katrina and Rita provide an example of how climate change may affect coastal populations. Crude climate change models predict a mass migration of "climate refugees," but an emerging literature on environmental migration suggests that most migration will be short-distance and short-duration within existing migration systems, with implications for the population recovery of disaster-stricken places. In this research, we derive a series of hypotheses on recovery migration predicting how the migration system of hurricane-affected coastline counties in the Gulf of Mexico was likely to have changed between the pre-disaster and the recovery periods. We test these hypotheses using data from the Internal Revenue Service on annual county-level migration flows, comparing the recovery period migration system (2007-2009) with the pre-disaster period (1999-2004). By observing county-to-county ties and flows, we find that recovery migration was strong: the migration system of the disaster-affected coastline counties became more spatially concentrated, while flows within it intensified and became more urbanized. Our analysis demonstrates how migration systems are likely to be affected by the more intense and frequent storms anticipated by climate change scenarios, with implications for the population recovery of disaster-affected places.

  13. Migration of cesium-137 through sandy soil layer effect of fine silt on migration

    International Nuclear Information System (INIS)

    Ohnuki, Toshihiko; Wadachi, Yoshiki

    1983-01-01

    The migration of 137Cs through a sandy soil layer was studied by the column method, taking into account the migration of fine silt. It was found that a portion of the fine silt migrated through the soil layer together with the 137Cs. A mathematical model of 137Cs migration through such a soil layer, incorporating the migration of fine silt, is presented. The model gave good agreement between the calculated and observed concentration distribution curves in the sandy soil layer and effluent curves. It therefore appears to be a useful model for evaluating the migration of 137Cs in sandy soil layers containing silt. (author)

  14. Migrating C/C++ Software to Mobile Platforms in the ADM Context

    Directory of Open Access Journals (Sweden)

    Liliana Martinez

    2017-03-01

    Full Text Available Software technology is constantly evolving, and the development of applications therefore requires adapting software components and applications to align with new paradigms such as Pervasive Computing, Cloud Computing and the Internet of Things. In particular, many desktop software components need to be migrated to mobile technologies. This migration faces many challenges due to the proliferation of different mobile platforms. Developers usually build applications tailored to each type of device, spending time and effort. As a result, new programming languages are emerging to integrate the native behaviors of the different platforms targeted in development projects. In this direction, the Haxe language allows writing mobile applications that target all major mobile platforms. Novel technical frameworks for information integration and tool interoperability, such as Architecture-Driven Modernization (ADM) proposed by the Object Management Group (OMG), can help to manage a huge diversity of mobile technologies. The Architecture-Driven Modernization Task Force (ADMTF) was formed to create specifications and promote industry consensus on the modernization of existing applications. In this work, we propose a migration process from C/C++ software to different mobile platforms that integrates ADM standards with Haxe. We illustrate the different steps of the process with a simple case study, the migration of “the Set of Mandelbrot” C++ application. The proposal was validated in the Eclipse Modeling Framework, considering that some of its tools and run-time environments are aligned with ADM standards.
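
The computational core of the case study, membership in the Mandelbrot set, is compact enough to sketch. The escape-time test below is a generic illustration of what such a migrated application computes, not the authors' C++ or Haxe code:

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: c belongs to the Mandelbrot set if the orbit
    of z -> z*z + c, starting at z = 0, stays bounded (|z| <= 2)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False  # orbit escaped: c is outside the set
    return True  # still bounded after max_iter steps: assume inside
```

Iterating this test over a grid of complex points produces the familiar fractal image, which is exactly the kind of pixel-per-point loop that must run natively on each target platform after migration.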

  15. MORPION: a fast hardware processor for straight line finding in MWPC

    International Nuclear Information System (INIS)

    Mur, M.

    1980-02-01

    A fast hardware processor for straight-line finding in MWPCs has been built at Saclay and successfully operated in the NA3 experiment at CERN. We present the motivation for building this processor and describe the hardware implementation of the line-finding algorithm. Finally, its use and performance in NA3 are described.
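
The abstract does not spell out MORPION's algorithm, but the task it solves, deciding whether a set of wire hits is consistent with a straight track, can be sketched in software. A hypothetical least-squares version (the real processor is fixed-function hardware and need not work this way):

```python
def find_straight_track(hits, tolerance=0.5):
    """Least-squares straight-line fit x = a*z + b to wire-chamber hits.

    hits: list of (z_plane, x_wire) pairs, one hit per chamber plane.
    Returns (a, b) if every hit lies within `tolerance` of the fitted
    line (a track candidate), else None.
    """
    n = len(hits)
    sz = sum(z for z, _ in hits)
    sx = sum(x for _, x in hits)
    szz = sum(z * z for z, _ in hits)
    szx = sum(z * x for z, x in hits)
    # Standard least-squares slope/intercept formulas.
    a = (n * szx - sz * sx) / (n * szz - sz * sz)
    b = (sx - a * sz) / n
    if all(abs(x - (a * z + b)) <= tolerance for z, x in hits):
        return a, b
    return None
```

Hits lying on a common line are accepted with the fitted slope and intercept; a single displaced hit pushes some residual past the tolerance and the candidate is rejected.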

  16. Hyper-V for VMware administrators migration, coexistence, and management

    CERN Document Server

    Posey, Brien

    2015-01-01

    Learn to deploy and support Hyper-V, building on what you know about VMware's vSphere. Whether you're looking to run both hypervisors in parallel or migrate completely, Hyper-V for VMware Administrators has everything you need to get started. The book begins with an overview of Hyper-V basics, including common management tasks such as creating a virtual machine and building a virtual network. You'll learn how to deploy a failover cluster to protect against the risk of Hyper-V becoming a single point of failure, and how to make virtual machines fault tolerant. System Center Virtual Machine Ma

  17. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs), fixed-length identifiers used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  18. South-South Migration and Remittances

    OpenAIRE

    Ratha, Dilip; Shaw, William

    2007-01-01

    South-South Migration and Remittances reports on preliminary results from an ongoing effort to improve data on bilateral migration stocks. It sets out some working hypotheses on the determinants and socioeconomic implications of South-South migration. Contrary to popular perception that migration is mostly a South-North phenomenon, South-South migration is large. Available data from nation...

  19. Total knee arthroplasty using patient-specific blocks after prior femoral fracture without hardware removal

    Directory of Open Access Journals (Sweden)

    Raju Vaishya

    2018-01-01

    Full Text Available Background: The main options for performing total knee arthroplasty (TKA) with retained hardware in the femur are removal of the hardware, use of an extramedullary guide, or computer-assisted surgery. Patient-specific blocks (PSBs) have been introduced with many potential advantages, but their use with retained hardware has not been adequately explored. The purpose of the present study was to outline and assess the usefulness of PSBs in performing TKA in patients with retained femoral hardware. Materials and Methods: Nine patients with retained femoral hardware underwent TKA using PSBs. All the surgeries were performed by the same surgeon using the same implants. Nine cases (7 males and 2 females) out of a total of 120 primary TKAs had retained hardware. The average age of the patients was 60.55 years. The retained hardware comprised nails in 6 patients, plates in 2, and screws in 1. Of the nine cases, only one patient needed removal of a screw, which was hindering placement of a pin for the PSB. Results: All the patients had significant improvement in their Knee Society Score (KSS), which improved from 47.0 preoperatively to 86.77 postoperatively (P < 0.00). The mechanical axis was also significantly improved after surgery (P < 0.03). No patient required blood transfusion, and the average tourniquet time was 41 min. Conclusion: TKA using PSBs is useful and can be performed in patients with retained hardware with good functional and radiological outcomes.

  20. Event management for large scale event-driven digital hardware spiking neural networks.

    Science.gov (United States)

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
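
The structured heap queue itself is pipelined digital hardware, but its external contract matches a software priority queue: spike events are inserted in any order and retrieved in timestamp order. A minimal functional analogue (illustrative only; class and method names are hypothetical, and the hardware's pipelined constant-time behaviour is not modelled):

```python
import heapq

class EventQueue:
    """Software analogue of a hardware event queue for event-driven SNNs:
    events pop in timestamp order regardless of insertion order."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-time events stay FIFO

    def push(self, time, neuron_id):
        heapq.heappush(self._heap, (time, self._seq, neuron_id))
        self._seq += 1

    def pop(self):
        time, _, neuron_id = heapq.heappop(self._heap)
        return time, neuron_id

    def __len__(self):
        return len(self._heap)
```

An event-driven simulation loop repeatedly pops the earliest event, updates the target neuron, and pushes any spikes it emits back into the queue with their future delivery times.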

  1. Prestack depth migration

    International Nuclear Information System (INIS)

    Postma, R.W.

    1991-01-01

    Two lines form the southern North Sea, with known velocity inhomogeneities in the overburden, have been pre-stack depth migrated. The pre-stack depth migrations are compared with conventional processing, one with severe distortions and one with subtle distortions on the conventionally processed sections. The line with subtle distortions is also compared with post-stack depth migration. The results on both lines were very successful. Both have already influenced drilling decisions, and have caused a modification of structural interpretation in the respective areas. Wells have been drilled on each of the lines, and well tops confirm the results. In fact, conventional processing led to incorrect locations for the wells, both of which were dry holes. The depth migrated sections indicate the incorrect placement, and on one line reveals a much better drilling location. This paper reports that even though processing costs are high for pre-stack depth migration, appropriate use can save millions of dollars in dry-hole expense

  2. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large, partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and mesh interconnection structures.
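
The paper's hardware-assisted algorithm is not reproduced in the abstract; a common building block of software clock synchronization, however, is fault-tolerant averaging of peer clock readings. A generic sketch of that idea (not the authors' scheme; function name and the trimming parameter are hypothetical):

```python
def fault_tolerant_average(readings, m=1):
    """Fault-tolerant averaging: drop the m largest and m smallest clock
    readings, which may come from faulty clocks, and average the rest.
    With at most m faulty clocks, extreme bad values cannot dominate."""
    if len(readings) <= 2 * m:
        raise ValueError("need more than 2*m readings")
    trimmed = sorted(readings)[m:len(readings) - m]
    return sum(trimmed) / len(trimmed)
```

Each node would periodically collect peer readings, compute such a trimmed mean, and slew its local clock toward it; the hardware assist in schemes like the one proposed typically timestamps messages to remove transit-delay uncertainty from the readings.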

  3. Indonesia's migration transition.

    Science.gov (United States)

    Hugo, G

    1995-01-01

    This article describes population movements in Indonesia in the context of rapid and marked social and economic change. Foreign investment in Indonesia is increasing, and global mass media is available to many households. Agriculture is being commercialized, and structural shifts are occurring in the economy. Educational levels are increasing, and women's role and status are shifting. Population migration has increased over the decades, both short and long distance, permanent and temporary, legal and illegal, and migration to and between urban areas. This article focuses specifically on rural-to-urban migration and international migration. Population settlements are dense in the agriculturally rich inner areas of Java, Bali, and Madura. Although the rate of growth of the gross domestic product was 6.8% annually during 1969-94, the World Bank ranked Indonesia as a low-income economy in 1992 because of the large population size. Income per capita is US $670. Indonesia is becoming a large exporter of labor to the Middle East, particularly women. The predominance of women as overseas contract workers is changing women's role and status in the family and is controversial due to the cases of mistreatment. Malaysia's high economic growth rate of over 8% per year means an additional 1.3 million foreign workers and technicians are needed. During the 1980s urban growth increased at a very rapid rate. Urban growth tended to occur along corridors and major transportation routes around urban areas. It is posited that most of the urban growth is due to rural-to-urban migration. Data limitations prevent an exact determination of the extent of rural-to-urban migration. More women are estimated to be involved in movements to cities during the 1980s compared to the 1970s. Recruiters and middlemen have played an important role in rural-to-urban migration and international migration.

  4. The human factors and job task analysis in nuclear power plant operation

    International Nuclear Information System (INIS)

    Stefanescu, Petre; Mihailescu, Nicolae; Dragusin, Octavian

    1999-01-01

    For a long period during the development of NPP technology, plant hardware was considered the main factor in safe, reliable and economic operation; the industry is now moving toward giving appropriate weight to both plant hardware and operation. Since human factors have not yet been treated methodically, there is still a lack of refined classification systems for human errors, as well as a lack of methods for the systematic design of the operator's working system, for instance by using job task analysis (J.T.A.). J.T.A. appears to be an adequate method for studying the human factor in nuclear power plant operation, enabling easy conversion into operational improvements. While the results of the analysis of human errors tell 'what' is to be improved, J.T.A. shows 'how' to improve, increasing both the quality of the work and the safety of the operator's working system. The paper analyses the issue of setting the task and presents four criteria used to select aspects of NPP operation that require special consideration, such as personnel training, control-room design, the content and layout of the procedure manual, and the organization of the operating personnel. The results are given in three tables: 1 - Evaluation of deficiencies in the Working System; 2 - Evaluation of the Deficiencies of Operator's Disposition; 3 - Evaluation of the Mental Structure of Operation

  5. Hardware availability calculations and results of the IFMIF accelerator facility

    International Nuclear Information System (INIS)

    Bargalló, Enric; Arroyo, Jose Manuel; Abal, Javier; Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne; Weber, Moisés; Podadera, Ivan; Grespan, Francesco; Fagotti, Enrico; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design
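
The availability figures in such analyses ultimately reduce to steady-state component availabilities combined through the fault-tree structure. As a hedged illustration of the arithmetic only (the function names, the series assumption, and all numbers below are hypothetical, not IFMIF data):

```python
def availability(mtbf, mttr):
    """Steady-state availability of a repairable component:
    fraction of time it is up, given mean time between failures
    and mean time to repair (same time unit for both)."""
    return mtbf / (mtbf + mttr)

def series_availability(components):
    """A series system (any single failure interrupts the beam) is
    available only when every component is, so availabilities multiply.
    components: iterable of (mtbf, mttr) pairs."""
    total = 1.0
    for mtbf, mttr in components:
        total *= availability(mtbf, mttr)
    return total
```

For example, two components each with 99 h MTBF and 1 h MTTR are each 99% available, but in series the system drops to about 98%, which is why hardware availability analyses focus on the worst contributors first.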

  6. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.

  7. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.
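
As a toy instance of the family of codes such books treat, the classic Hamming(7,4) code corrects any single bit error in a 7-bit word. The sketch below is the generic textbook construction (even parity, parity bits at positions 1, 2 and 4), not an example taken from the book:

```python
def hamming74_encode(d):
    """d: list of 4 data bits. Returns the 7-bit codeword, positions
    1..7, with even-parity bits at positions 1, 2 and 4."""
    d3, d5, d6, d7 = d
    p1 = d3 ^ d5 ^ d7  # covers positions 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7  # covers positions 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7  # covers positions 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome is the 1-based position
    of a single flipped bit, or 0 if there is none. Returns the
    corrected codeword."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s4 * 4
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return c
```

Hardware decoders for the advanced codes the book covers (Polar, LDPC, product codes) are far more elaborate, but the same encode/syndrome/correct pipeline is the structure their architectures optimize.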

  8. Recovery Migration after Hurricanes Katrina and Rita: Spatial Concentration and Intensification in the Migration System

    Science.gov (United States)

    Fussell, Elizabeth; DeWaard, Jack

    2015-01-01

    Changes in the human migration systems of Hurricane Katrina- and Rita-affected Gulf of Mexico coastline counties provide an example of how climate change may affect coastal populations. Crude climate change models predict a mass migration of “climate refugees,” but an emerging literature on environmental migration suggests most migration will be short-distance and short-duration within existing migration systems, with implications for the population recovery of disaster-struck places. In this research, we derive a series of hypotheses on recovery migration predicting how the migration system of hurricane-affected coastline counties in the Gulf of Mexico was likely to have changed between the pre-disaster and the recovery periods. We test these hypotheses using data from the Internal Revenue Service on annual county-level migration flows, comparing the recovery period migration system (2007–2009) to the pre-disaster period (1999–2004). By observing county-to-county ties and flows we find that recovery migration was strong, as the migration system of the disaster-affected coastline counties became more spatially concentrated while flows within it intensified and became more urbanized. Our analysis demonstrates how migration systems are likely to be affected by the more intense and frequent storms anticipated by climate change scenarios with implications for the population recovery of disaster-affected places. PMID:26084982

  9. [Migration and urbanization in the Sahel. Consequences of the Sahelian migrations].

    Science.gov (United States)

    Traore, S

    1997-10-01

    The consequences of Sahelian migration are multiple and diverse. In rural areas there may be a loss of income in the short run and a reduced possibility of development in the long run. Apart from its implications for urban growth, Sahelian migration may have four series of consequences in the places of origin. In detaching peasants from their lands, migration may contribute to loss of appreciation and reverence for the lands. Attachment to the lands of the ancestors loses its meaning as soon as questions of survival or economic rationality are raised. Migration contributes to the restructuring of the societies of origin. Increasing monetarization of market relations and introduction of new needs create new norms that favor stronger integration into the world economy. Migration may cause a decline in production because of the loss of the most active population, and it changes the age and sex distribution of households and usually increases their dependency burden. The effects on fertility and mortality are less clear. The effects of migration on the zones of arrival in the Sahel depend on the type of area. Conflicts between natives and in-migrants are common in rural-rural migration. Degradation of land may result from the increased demands placed upon it. Migrants to cities in Africa, and especially in the Sahel, appear to conserve their cultural values and to transplant and reinterpret their village rules of solidarity.

  10. Stereotactic core needle breast biopsy marker migration: An analysis of factors contributing to immediate marker migration.

    Science.gov (United States)

    Jain, Ashali; Khalid, Maria; Qureshi, Muhammad M; Georgian-Smith, Dianne; Kaplan, Jonah A; Buch, Karen; Grinstaff, Mark W; Hirsch, Ariel E; Hines, Neely L; Anderson, Stephan W; Gallagher, Katherine M; Bates, David D B; Bloch, B Nicolas

    2017-11-01

    To evaluate breast biopsy marker migration in stereotactic core needle biopsy procedures and identify contributing factors. This retrospective study analyzed 268 stereotactic biopsy markers placed in 263 consecutive patients undergoing stereotactic biopsies using 9G vacuum-assisted devices from August 2010 to July 2013. Mammograms were reviewed and factors contributing to marker migration were evaluated. Basic descriptive statistics were calculated and comparisons were performed based on radiographically confirmed marker migration. Of the 268 placed stereotactic biopsy markers, 35 (13.1%) migrated ≥1 cm from their biopsy cavity (range: 1-6 cm; mean ± SD: 2.35 ± 1.22 cm). Of the 35 migrated biopsy markers, 9 (25.7%) migrated ≥3.5 cm. Patient age, biopsy pathology, number of cores, and left versus right breast were not associated with migration status (P > 0.10). Globally fatty breast density (P = 0.025) and biopsy in the inner region of the breast (P = 0.031) were associated with marker migration. A superior biopsy approach (P = 0.025), locally heterogeneous breast density, and T-shaped biopsy markers (P = 0.035) were associated with the absence of marker migration. Multiple factors were found to influence marker migration. An overall migration rate of 13% supports the endeavors of research groups actively developing new biopsy marker designs for improved resistance to migration. • Breast biopsy marker migration is documented in 13% of 268 procedures. • Marker migration is affected by physical, biological, and pathological factors. • Breast density, marker shape, needle approach, etc., affect migration. • The study demonstrates marker migration prevalence; marker design improvements are needed.

  11. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever-increasing’ digital future.
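
    The capacity arithmetic described above is a simple product of the geometry factors. A minimal sketch; the floppy-disk numbers are a textbook example, not from the article:

```python
def hard_drive_capacity(sides, tracks_per_side, sectors_per_track, bytes_per_sector):
    """Total capacity in bytes, following the geometry factors listed above."""
    return sides * tracks_per_side * sectors_per_track * bytes_per_sector

# Classic 3.5-inch high-density floppy geometry as a worked example:
# 2 sides x 80 tracks x 18 sectors x 512 bytes = 1,474,560 bytes (~1.44 "MB").
print(hard_drive_capacity(2, 80, 18, 512))  # -> 1474560
```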

  12. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever-increasing' digital future.

  13. Hardwares e sistemas multiagente: um estudo sobre arquiteturas híbridas

    Directory of Open Access Journals (Sweden)

    Rafhael Rodrigues Cunha

    2015-05-01

    Full Text Available This article presents three case studies on the applicability of reconfigurable hardware architectures, such as FPGAs, to multi-agent systems. An analysis aimed at elucidating the results and contributions that the studies provided to the authors shows that the development of intelligent systems increasingly depends on programming that exploits the hardware to its fullest. This outcome makes reconfigurable hardware the most advisable choice when complex computational problems demand fast and efficient responses, as in the cases studied.

  14. Accelerator Technology: Injection and Extraction Related Hardware: Kickers and Septa

    CERN Document Server

    Barnes, M J; Mertens, V

    2013-01-01

    This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '8.7 Injection and Extraction Related Hardware: Kickers and Septa' of the Chapter '8 Accelerator Technology' with the content: 8.7 Injection and Extraction Related Hardware: Kickers and Septa 8.7.1 Fast Pulsed Systems (Kickers) 8.7.2 Electrostatic and Magnetic Septa

  15. Testing Microgravity Flight Hardware Concepts on the NASA KC-135

    Science.gov (United States)

    Motil, Susan M.; Harrivel, Angela R.; Zimmerli, Gregory A.

    2001-01-01

    This paper provides an overview of utilizing the NASA KC-135 Reduced Gravity Aircraft for the Foam Optics and Mechanics (FOAM) microgravity flight project. The FOAM science requirements are summarized, and the KC-135 test rig used to test hardware concepts designed to meet the requirements is described. Preliminary results regarding foam dispensing, foam/surface slip tests, and dynamic light scattering data are discussed in support of the flight hardware development for the FOAM experiment.

  16. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed. Included are the methods employed to implement this system while taking advantage of the Real-Time features of RSX-11D. Comparisons are made between this system and an earlier nonmodular system. The controlled hardware includes magnet power supplies, stepping motors, DVM's, and multiplexors, and is interfaced through CAMAC. 4 figures

  17. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM assembly language, the ARM Cortex architecture and its components, and hardware experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM microcontroller. · Introduces number systems and signal transmission methods · Reviews logic gates, registers, multiplexers, decoders and memory · Provides an overview and examples of the ARM instruction set · Uses Keil development tools for writing and debugging ARM assembly language programs · Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real-time clock configuration, binary input to 7-segment display, creating ...

  18. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware, which is heavily used in the stage of image formation, as in CT/MR image reconstruction or DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter, which can serve as an example for the development of such algorithms. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results on computation time and on the usefulness of median filtering in radiographic imaging are given.
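
    The window-sort view of the median filter described above can be sketched sequentially in a few lines. This is a naive reference implementation, not the parallel pipeline version the paper develops:

```python
from statistics import median

def median_filter_2d(img, radius=1):
    """Naive 2D median filter: each output pixel is the median of the
    (2*radius+1)^2 window, i.e. the middle rank of the sorted window.
    Replacing `median` with another order statistic yields the rank-order
    operators mentioned in the abstract."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels are left unchanged
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = [img[y + dy][x + dx]
                      for dy in range(-radius, radius + 1)
                      for dx in range(-radius, radius + 1)]
            out[y][x] = median(window)
    return out

noisy = [[0, 0, 0],
         [0, 9, 0],   # single-pixel impulse noise
         [0, 0, 0]]
print(median_filter_2d(noisy)[1][1])  # -> 0 (impulse removed)
```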

  19. Secure Hardware Performance Analysis in Virtualized Cloud Environment

    Directory of Open Access Journals (Sweden)

    Chee-Heng Tan

    2013-01-01

    Full Text Available The main obstacle to mass adoption of cloud computing for database operations is the data security issue. In this paper, it is shown that IT services, particularly hardware performance evaluation in a virtual machine, can be accomplished effectively without IT personnel gaining access to real data for diagnostic and remediation purposes. The proposed mechanisms utilize the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are supervised via a control system, which is constructed using a combination of TPC-H queries, linear regression, and machine learning techniques. Second, linear programming techniques are employed to provide input to the algorithms that construct stress-testing scenarios in the virtual machine, using the combination of TPC-H queries. These stress-testing scenarios serve two purposes. They provide boundary resource threshold verification to the first control system, so that periodic training of the synthetic data sets for performance evaluation is not constrained by hardware inadequacy, particularly when the resources in the virtual machine are scaled up or down, which results in a change of the utilization threshold. Second, they provide a platform for response-time verification on critical transactions, so that the expected Quality of Service (QoS) from these transactions is assured.
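
    The first control system described above, supervising hardware consistency by regressing benchmark timings, can be sketched as follows; the workload sizes, timings and tolerance are invented for illustration:

```python
# Minimal sketch of supervising hardware consistency with linear
# regression: fit expected query time vs. workload scale, then flag
# runs whose residual exceeds a tolerance. All numbers are synthetic.
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

sizes = [1, 2, 4, 8]           # scale factors of a TPC-H-style workload
times = [1.1, 2.0, 4.2, 7.9]   # observed run times in seconds (synthetic)
slope, intercept = fit_line(sizes, times)

def is_anomalous(size, seconds, tolerance=0.5):
    """True when a run deviates from the fitted baseline by > tolerance."""
    expected = slope * size + intercept
    return abs(seconds - expected) > tolerance

print(is_anomalous(4, 4.2))  # -> False (consistent with the fit)
print(is_anomalous(4, 9.0))  # -> True  (degraded hardware run)
```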

  20. 2D neural hardware versus 3D biological ones

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper will present important limitations of hardware neural nets as opposed to biological neural nets (i.e. the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural nets. Going further, the focus will be on hardware constraints. The author will present recent results for three different alternatives of implementing neural networks: digital, threshold gate, and analog, while the area and the delay will be related to neurons' fan-in and weights' precision. Based on all of these, it will be shown why hardware implementations cannot cope with their biological inspiration with respect to their power of computation: the mapping onto silicon lacking the third dimension of biological nets. This translates into reduced fan-in, and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon, by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow one to use the third dimension, e.g. using optical interconnections.

  1. Return migration as failure or success? : The determinants of return migration intentions among Moroccan migrants in Europe

    NARCIS (Netherlands)

    de Haas, H.; Fokkema, T.; Fihri, M.F.

    2015-01-01

    Different migration theories generate competing hypotheses with regard to determinants of return migration. While neoclassical migration theory associates migration to the failure to integrate at the destination, the new economics of labour migration sees return migration as the logical stage after

  2. Performance/price estimates for cortex-scale hardware: a design space exploration.

    Science.gov (United States)

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. A photovoltaic source I/U model suitable for hardware in the loop application

    Directory of Open Access Journals (Sweden)

    Stala Robert

    2017-12-01

    Full Text Available This paper presents a novel, low-complexity method of simulating PV source characteristics suitable for real-time modeling and hardware implementation. Applying a suitable model of the PV source, together with models of all the PV system components, in real-time hardware gives a safe, fast and low-cost method of testing PV systems. The paper demonstrates the concept of the PV array model and the hardware implementation in FPGAs of a system which combines two PV arrays. The obtained results confirm that the proposed model is of low complexity and is suitable for hardware-in-the-loop (HIL) tests of complex PV system control, with various arrays operating under different conditions.
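
    As a rough illustration of a low-complexity PV source I/U characteristic, the ideal single-diode equation can be evaluated directly. The parameter values are assumptions for this sketch, not the paper's model, and series/shunt resistance are neglected:

```python
import math

def pv_current(v, i_ph=5.0, i_0=1e-9, n_vt=0.026):
    """Ideal single-diode PV characteristic (series/shunt resistance
    neglected): I = Iph - I0 * (exp(V / (n*Vt)) - 1).
    Parameter values are illustrative, not taken from the paper."""
    return i_ph - i_0 * math.expm1(v / n_vt)

def open_circuit_voltage(i_ph=5.0, i_0=1e-9, n_vt=0.026):
    """Voc where I = 0: Voc = n*Vt * ln(Iph/I0 + 1)."""
    return n_vt * math.log(i_ph / i_0 + 1)

print(round(pv_current(0.0), 3))          # -> 5.0 (short-circuit current)
voc = open_circuit_voltage()
print(round(voc, 3))                       # open-circuit voltage, ~0.58 V here
```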

  4. Field programmable gate array based hardware implementation of a gradient filter for edge detection in colour images with subpixel precision

    International Nuclear Information System (INIS)

    Schellhorn, M; Rosenberger, M; Correns, M; Blau, M; Goepfert, A; Rueckwardt, M; Linss, G

    2010-01-01

    Within the field of industrial image processing, the use of colour cameras is becoming ever more common. Increasingly, the established black-and-white cameras are being replaced by economical single-chip colour cameras with a Bayer pattern. The additional colour information is particularly important for recognition and inspection tasks. It also becomes interesting for geometric metrology if measuring tasks can be solved more robustly or more accurately. However, only few suitable algorithms are available to detect edges with the necessary precision, and all of them require additional computational effort. On the basis of a new filter for edge detection in colour images with subpixel precision, the implementation on a pre-processing hardware platform is presented. Hardware-implemented filters offer the advantage that they can easily be used with existing measuring software, since after the filtering a single-channel image is present which unites the information of all colour channels. Advanced field programmable gate arrays represent an ideal platform for the parallel processing of multiple channels. An efficient implementation, however, demands considerable programming effort. Using the colour filter implementation as an example, the problems that arise are analyzed and the chosen solution is presented.
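
    The subpixel idea can be illustrated in one dimension: locate the gradient peak, then refine it with a parabolic fit through the three samples around the peak. This is a sketch of the general technique, not the paper's colour filter:

```python
def subpixel_edge(profile):
    """Locate an edge with subpixel precision along a 1-D intensity
    profile: compute a central-difference gradient, find its peak sample,
    then refine the position with a parabola through the three samples
    around the peak."""
    grad = [abs(profile[i + 1] - profile[i - 1]) / 2.0
            for i in range(1, len(profile) - 1)]
    i = max(range(1, len(grad) - 1), key=lambda k: grad[k])
    g0, g1, g2 = grad[i - 1], grad[i], grad[i + 1]
    # Vertex of the parabola through (-1, g0), (0, g1), (1, g2):
    offset = 0.5 * (g0 - g2) / (g0 - 2.0 * g1 + g2)
    return (i + 1) + offset  # +1 compensates for the gradient's crop

edge_profile = [0, 0, 2, 6, 9, 10, 10]  # smooth step around index ~3
print(subpixel_edge(edge_profile))  # -> 2.75 (between samples 2 and 3)
```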

  5. Migration and revolution

    Directory of Open Access Journals (Sweden)

    Nando Sigona

    2012-06-01

    Full Text Available The Arab Spring has not radically transformed migration patterns in the Mediterranean, and the label ‘migration crisis’ does not do justice to the composite and stratified reality.

  6. Migration = cloning; aliasiing

    DEFF Research Database (Denmark)

    Hüttel, Hans; Kleist, Josva; Nestmann, Uwe

    1999-01-01

    In Obliq, a lexically scoped, distributed, object-oriented programming language, object migration was suggested as the creation of a copy of an object’s state at the target site, followed by turning the object itself into an alias, also called surrogate, for the remote copy. We consider the creation of object surrogates as an abstraction of the abovementioned style of migration. We introduce Øjeblik, a distribution-free subset of Obliq, and provide three different configuration-style semantics, which only differ in the respective aliasing model. We show that two of the semantics, one of which matches Obliq’s implementation, render migration unsafe, while our new proposal for a third semantics is provably safe. Our work suggests a straightforward repair of Obliq’s aliasing model such that it allows programs to safely migrate objects.
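
    The migration-as-cloning-plus-aliasing style described above can be sketched as a toy single-process model; names and mechanics are illustrative, and Obliq itself is a distributed language:

```python
class Migratable:
    """Toy model of Obliq-style migration: copy the object's state to the
    "target site" (here just another object), then turn the original into
    a surrogate that forwards every access to the copy."""
    def __init__(self, **state):
        self.__dict__.update(state, _forward=None)

    def migrate(self):
        clone = Migratable(**{k: v for k, v in self.__dict__.items()
                              if k != "_forward"})
        self._forward = clone  # the original becomes a surrogate (alias)
        return clone

    def __getattribute__(self, name):
        fwd = object.__getattribute__(self, "__dict__").get("_forward")
        if fwd is not None and name not in ("migrate", "_forward"):
            return getattr(fwd, name)  # alias: delegate to the copy
        return object.__getattribute__(self, name)

obj = Migratable(counter=1)
copy = obj.migrate()
copy.counter = 42
print(obj.counter)  # -> 42 (reads via the surrogate see the migrated copy)
```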

  7. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to a hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then wrapping with conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach to meet EMC requirements would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage, or modify its chassis design, so that it would have a better chance of meeting the new project's radiated emissions requirements.

  8. Multiple-task real-time PDP-15 operating system for data acquisition and analysis

    International Nuclear Information System (INIS)

    Myers, W.R.

    1974-01-01

    The RAMOS operating system is capable of handling up to 72 simultaneous tasks in an interrupt-driven environment. The minimum viable hardware configuration includes a Digital Equipment Corporation PDP-15 computer with 16384 words of memory, extended arithmetic element, automatic priority interrupt, a 256K-word RS09 DECdisk, two DECtape transports, and an alphanumeric keyboard/typer. The monitor executes major tasks by loading disk-resident modules to memory for execution; modules are written in a format that allows page-relocation by the monitor, and can be loaded into any available page. All requests for monitor service by tasks, including input/output, floating point arithmetic, requests for additional memory, task initiation, etc., are implemented by privileged monitor calls (CAL). All IO device handlers are capable of queuing requests for service, allowing several tasks "simultaneous" use of all resources. All alphanumeric IO (including the PC05) is completely buffered and handled by a single multiplexing routine. The floating point arithmetic software is re-entrant to all operating modules and includes matrix arithmetic functions. One of the system tasks can be a "batch" job, controlled by simulating an alphanumeric command terminal through cooperative functions of the disk handler and alphanumeric device software. An alphanumeric control sequence may be executed, automatically accessing disk-resident tasks in any prescribed order; a library of control sequences is maintained on bulk storage for access by the monitor. (auth)
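
    The queued device handlers described above can be modelled in miniature: requests from several tasks accumulate in a queue and are served one at a time when the device is ready. A toy sketch, not the PDP-15 implementation:

```python
from collections import deque

class DeviceHandler:
    """Toy model of an I/O device handler that queues requests so several
    tasks can share one device, loosely mirroring RAMOS's queued handlers.
    Requests are served strictly in arrival order; a real handler would be
    interrupt-driven rather than polled in a loop."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()
        self.log = []

    def request(self, task_id, operation):
        """A task asks for service; it is queued, never refused."""
        self.queue.append((task_id, operation))

    def service_one(self):
        """Called when the device signals ready (an interrupt in RAMOS)."""
        if self.queue:
            task_id, operation = self.queue.popleft()
            self.log.append(f"task {task_id}: {operation}")

disk = DeviceHandler("RS09")
disk.request(1, "read block 7")
disk.request(2, "write block 3")  # task 2 queues behind task 1
while disk.queue:
    disk.service_one()
print(disk.log[0])  # -> task 1: read block 7
```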

  9. Hardware detection and parameter tuning method for speed control system of PMSM

    Science.gov (United States)

    Song, Zhengqiang; Yang, Huiling

    2018-03-01

    In this paper, the development of a permanent magnet synchronous motor (PMSM) AC speed control system is taken as an example to expound the hardware detection principle and the parameter tuning method of the system, and methods of using software or hardware to eliminate the detected problems are put forward.

  10. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, it is received by a MySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.
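
    As a small illustration of the SDI-12 data framing such loggers work with: a data response consists of the sensor address followed by sign-prefixed measurement values. The parser below is a simplified sketch; the Arduino-SDI-12 library handles timing, retries and the full command set:

```python
import re

def parse_sdi12_values(response):
    """Parse a simplified SDI-12 data response such as '0+25.3-0.8+1.20':
    the leading character is the sensor address, and the rest of the frame
    is a run of sign-prefixed measurement values."""
    address, body = response[0], response[1:]
    values = [float(v) for v in re.findall(r"[+-]\d+(?:\.\d+)?", body)]
    return address, values

addr, vals = parse_sdi12_values("0+25.3-0.8+1.20")
print(addr, vals)  # -> 0 [25.3, -0.8, 1.2]
```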

  11. Migration into art

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    This book addresses a topic of increasing importance to artists, art historians and scholars of cultural studies, migration studies and international relations: migration as a profoundly transforming force that has remodelled artistic and art institutional practices across the world. It explores...... contemporary art's critical engagement with migration and globalisation as a key source for improving our understanding of how these processes transform identities, cultures, institutions and geopolitics. The author explores three interwoven issues of enduring interest: identity and belonging, institutional...

  12. What's driving migration?

    Science.gov (United States)

    Kane, H

    1995-01-01

    During the 1990s investment in prevention of international or internal migration declined, and crisis intervention increased. The budgets of the UN High Commissioner for Refugees and the UN Development Program remained about the same. The operating assumption is that war, persecution, famine, and environmental and social disintegration are inevitable. Future efforts should be directed to stabilizing populations through investment in sanitation, public health, preventive medicine, land tenure, environmental protection, and literacy. Forces pushing migration are likely to increase in the future. Forces include depletion of natural resources, income disparities, population pressure, and political disruption. The causes of migration are not constant. In the past, migration occurred during conquests, settlement, intermarriage, or religious conversion and was a collective movement. Current migration involves mass movement of individuals and the struggle to survive. There is new pressure to leave poor squatter settlements and the scarcities in land, water, and food. The slave trade between the 1500s and the 1800s linked continents, and only 2-3 million voluntarily crossed national borders. Involuntary migration began in the early 1800s when European feudal systems were in a decline, and people sought freedom. Official refugees, who satisfy the strict 1951 UN definition, increased from 15 million in 1980 to 23 million in 1990 but remained a small proportion of international migrants. Much of the mass movement occurs between developing countries. Migration to developed countries is accompanied by growing intolerance, which is misinformed. China practices a form of "population transfer" in Tibet in order to dilute Tibetan nationalism. Colonization of countries is a new less expensive form of control over territory. Eviction of minorities is another popular strategy in Iraq. Public works projects supported by foreign aid displace millions annually. War and civil conflicts

  13. The LISA Pathfinder interferometry-hardware and system testing

    Energy Technology Data Exchange (ETDEWEB)

    Audley, H; Danzmann, K; MarIn, A Garcia; Heinzel, G; Monsky, A; Nofrarias, M; Steier, F; Bogenstahl, J [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik und Universitaet Hannover, 30167 Hannover (Germany); Gerardi, D; Gerndt, R; Hechenblaikner, G; Johann, U; Luetzow-Wentzky, P; Wand, V [EADS Astrium GmbH, Friedrichshafen (Germany); Antonucci, F [Dipartimento di Fisica, Universita di Trento and INFN, Gruppo Collegato di Trento, 38050 Povo, Trento (Italy); Armano, M [European Space Astronomy Centre, European Space Agency, Villanueva de la Canada, 28692 Madrid (Spain); Auger, G; Binetruy, P [APC UMR7164, Universite Paris Diderot, Paris (France); Benedetti, M [Dipartimento di Ingegneria dei Materiali e Tecnologie Industriali, Universita di Trento and INFN, Gruppo Collegato di Trento, Mesiano, Trento (Italy); Boatella, C, E-mail: antonio.garcia@aei.mpg.de [CNES, DCT/AQ/EC, 18 Avenue Edouard Belin, 31401 Toulouse, Cedex 9 (France)

    2011-05-07

    Preparations for the LISA Pathfinder mission have reached an exciting stage. Tests of the engineering model (EM) of the optical metrology system have recently been completed at the Albert Einstein Institute, Hannover, and flight model tests are now underway. Significantly, they represent the first complete integration and testing of the space-qualified hardware and are the first tests on an optical system level. The results and test procedures of these campaigns will be utilized directly in the ground-based flight hardware tests, and subsequently during in-flight operations. In addition, they allow valuable testing of the data analysis methods using the MATLAB-based LTP data analysis toolbox. This paper presents an overview of the results from the EM test campaign that was successfully completed in December 2009.

  14. Experiment Design Regularization-Based Hardware/Software Codesign for Real-Time Enhanced Imaging in Uncertain Remote Sensing Environment

    Directory of Open Access Journals (Sweden)

    Castillo Atoche A

    2010-01-01

    Full Text Available A new aggregated Hardware/Software (HW/SW) codesign approach to optimization of the digital signal processing techniques for enhanced imaging with real-world uncertain remote sensing (RS) data based on the concept of descriptive experiment design regularization (DEDR) is addressed. We consider the applications of the developed approach to typical single-look synthetic aperture radar (SAR) imaging systems operating in real-world uncertain RS scenarios. The software design is aimed at the algorithmic-level decrease of the computational load of the large-scale SAR image enhancement tasks. The innovative algorithmic idea is to incorporate into the DEDR-optimized fixed-point iterative reconstruction/enhancement procedure the convex convergence enforcement regularization via constructing the proper multilevel projections onto convex sets (POCS) in the solution domain. The hardware design is performed via systolic array computing based on a Xilinx Field Programmable Gate Array (FPGA) XC4VSX35-10ff668 and is aimed at implementing the unified DEDR-POCS image enhancement/reconstruction procedures in a computationally efficient multi-level parallel fashion that meets the (near) real-time image processing requirements. Finally, we comment on the simulation results, indicative of the significantly increased performance efficiency both in resolution enhancement and in computational complexity reduction metrics gained with the proposed aggregated HW/SW codesign approach.

  15. Biology task group

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    The accomplishments of the task group studies over the past year are reviewed. The purposes of biological investigations, in the context of subseabed disposal, are: an evaluation of the dose to man; an estimation of effects on the ecosystem; and an estimation of the influence of organisms on and as barriers to radionuclide migration. To accomplish these ends, the task group adopted the following research goals: (1) acquire more data on biological accumulation of specific radionuclides, such as those of Tc, Np, Ra, and Sr; (2) acquire more data on transfer coefficients from sediment to organism; (3) calculate mass transfer rates, construct simple models using them, and estimate collective dose commitment; (4) identify specific pathways or transfer routes, determine the rates of transfer, and make dose limit calculations with simple models; (5) calculate dose rates to and estimate irradiation effects on the biota as a result of waste emplacement, by reference to background irradiation calculations; (6) examine the effect of the biota on altering sediment/water radionuclide exchange; (7) consider the biological data required to address different accident scenarios; (8) continue to provide the basic biological information for all of the above, and ensure that the system analysis model is based on the most realistic and up-to-date concepts of marine biologists; and (9) ensure by way of free exchange of information that the data used in any model are the best currently available.

  16. Design of hardware accelerators for demanding applications.

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2010-01-01

    This paper focuses on mastering the architecture development of hardware accelerators. It presents the results of our analysis of the main issues that have to be addressed when designing accelerators for modern demanding applications, when using as an example the accelerator design for LDPC decoding

  17. Recovery Migration to the City of New Orleans after Hurricane Katrina: A Migration Systems Approach.

    Science.gov (United States)

    Fussell, Elizabeth; Curtis, Katherine J; Dewaard, Jack

    2014-03-01

    Hurricane Katrina's effect on the population of the City of New Orleans provides a model of how severe weather events, which are likely to increase in frequency and strength as the climate warms, might affect other large coastal cities. Our research focuses on changes in the migration system - defined as the system of ties between Orleans Parish and all other U.S. counties - between the pre-disaster (1999-2004) and recovery (2007-2009) periods. Using Internal Revenue Service county-to-county migration flow data, we find that in the recovery period Orleans Parish increased the number of migration ties with and received larger migration flows from nearby counties in the Gulf of Mexico coastal region, thereby spatially concentrating and intensifying the in-migration dimension of this predominantly urban system, while the out-migration dimension contracted and had smaller flows. We interpret these changes as the migration system relying on its strongest ties to nearby and less damaged counties to generate recovery in-migration.

  18. Introduction to 6800/6802 microprocessor systems hardware, software and experimentation

    CERN Document Server

    Simpson, Robert J

    1987-01-01

    Introduction to 6800/6802 Microprocessor Systems: Hardware, Software and Experimentation introduces the reader to the features, characteristics, operation, and applications of the 6800/6802 microprocessor and associated family of devices. Many worked examples are included to illustrate the theoretical and practical aspects of the 6800/6802 microprocessor.Comprised of six chapters, this book begins by presenting several aspects of digital systems before introducing the concepts of fetching and execution of a microprocessor instruction. Details and descriptions of hardware elements (MPU, RAM, RO

  19. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for high energy physics large spectrometers and control systems is reviewed as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of developed CAMAC systems are described. (author)

  20. Regional Redistribution and Migration

    DEFF Research Database (Denmark)

    Manasse, Paolo; Schultz, Christian

    We study a model with free migration between a rich and a poor region. Since there is congestion, the rich region has an incentive to give the poor region a transfer in order to reduce immigration. Faced with free migration, the rich region voluntarily chooses a transfer, which turns out to be equal to that which a social planner would choose. Provided migration occurs in equilibrium, this conclusion holds even in the presence of moderate mobility costs. However, large migration costs will lead to suboptimal transfers in the market solution.

  1. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-01-01

    This thesis presents a novel work on hardware realization of symmetric image encryption utilizing chaos based continuous systems as pseudo random number generators. Digital implementation of chaotic systems results in serious degradations

  2. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2017-01-01

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of an improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed, yielding closed-form expressions for both PGS- and IGS-based transmission schemes. HWD systems that employ IGS are shown to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Finally, the degradation caused by non-ideal hardware transceivers and the compensation achieved by the IGS scheme are quantified through suitable numerical results.
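
    The closed-form HWD/IGS expressions are specific to the paper, but the Monte-Carlo validation step it mentions can be illustrated on the simplest member of the family: ideal hardware, PGS, and a selection-combining receiver over i.i.d. Rayleigh branches, where the outage probability has the textbook closed form (1 - exp(-snr_th/snr_avg))^L. A sketch comparing simulation against that expression (all parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 2                # number of diversity branches
    snr_avg = 10.0       # average branch SNR (linear scale)
    snr_th = 4.0         # SNR threshold defining outage (linear scale)
    n = 200_000          # Monte-Carlo trials

    # i.i.d. Rayleigh fading -> exponentially distributed instantaneous SNR
    snr = rng.exponential(snr_avg, size=(n, L))

    # selection combining picks the strongest branch in each trial
    outage_mc = np.mean(snr.max(axis=1) < snr_th)

    # textbook closed form for SC outage over i.i.d. Rayleigh branches
    outage_cf = (1.0 - np.exp(-snr_th / snr_avg)) ** L
    ```

    The paper's contribution is the analogous agreement for the harder non-ideal-hardware and IGS cases, which this toy comparison does not cover.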

  3. Reconfigurable Signal Processing and Hardware Architecture for Broadband Wireless Communications

    Directory of Open Access Journals (Sweden)

    Liang Ying-Chang

    2005-01-01

    Full Text Available This paper proposes a broadband wireless transceiver which can be reconfigured to any type of cyclic-prefix (CP) based communication system, including orthogonal frequency-division multiplexing (OFDM), the single-carrier cyclic-prefix (SCCP) system, multicarrier (MC) code-division multiple access (MC-CDMA), MC direct-sequence CDMA (MC-DS-CDMA), CP-based CDMA (CP-CDMA), and CP-based direct-sequence CDMA (CP-DS-CDMA). A hardware platform is proposed and the reusable common blocks in such a transceiver are identified. The emphasis is on the equalizer design for mobile receivers. It is found that after the block despreading operation, MC-DS-CDMA and CP-DS-CDMA have the same equalization blocks as OFDM and SCCP systems, respectively; therefore hardware and software sharing is possible for these systems. An attempt has also been made to map the functional reconfigurable transceiver onto the proposed hardware platform. The different functional entities which will be required to perform the reconfiguration and realize the transceiver are explained.
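
    The hardware sharing claimed above rests on a common signal-processing core: every CP-based system inserts a cyclic prefix so that the multipath channel acts as a circular convolution, which a single FFT / one-tap-equalizer datapath can undo. A minimal NumPy sketch of that shared block for the OFDM case, assuming perfect channel knowledge, no noise, and QPSK symbols:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, cp = 64, 16                          # subcarriers, CP length
    h = np.array([1.0, 0.5, 0.25])          # channel taps (shorter than CP)

    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
    X = rng.choice(qpsk, N)                 # frequency-domain symbols

    x = np.fft.ifft(X)                      # OFDM modulation
    tx = np.concatenate([x[-cp:], x])       # cyclic-prefix insertion

    rx = np.convolve(tx, h)                 # multipath channel
    y = rx[cp:cp + N]                       # CP removal -> circular convolution

    Y = np.fft.fft(y)
    H = np.fft.fft(h, N)                    # channel frequency response
    X_hat = Y / H                           # shared one-tap equalizer block
    ```

    Because the CP is at least as long as the channel memory, `X_hat` recovers the transmitted symbols exactly; the CDMA variants in the list reuse the same equalizer after their despreading stages.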

  4. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah

    2017-02-22

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of an improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed, yielding closed-form expressions for both PGS- and IGS-based transmission schemes. HWD systems that employ IGS are shown to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Finally, the degradation caused by non-ideal hardware transceivers and the compensation achieved by the IGS scheme are quantified through suitable numerical results.

  5. Malaysia and forced migration

    Directory of Open Access Journals (Sweden)

    Arzura Idris

    2012-06-01

    Full Text Available This paper analyzes the phenomenon of “forced migration” in Malaysia. It examines the nature of forced migration, the challenges faced by Malaysia, the policy responses and their impact on the country and upon the forced migrants. It considers forced migration as an event hosting multifaceted issues related and relevant to forced migrants and suggests that Malaysia has been preoccupied with the issue of forced migration movements. This is largely seen in various responses invoked from Malaysia due to “south-south forced migration movements.” These responses are, however, inadequate in terms of commitment to the international refugee regime. While Malaysia did respond to economic and migration challenges, the paper asserts that such efforts are futile if she ignores issues critical to forced migrants.

  6. Qualification of software and hardware

    International Nuclear Information System (INIS)

    Gossner, S.; Schueller, H.; Gloee, G.

    1987-01-01

    The qualification of on-line process control equipment is subdivided into three areas: 1) materials and structural elements; 2) on-line process-control components and devices; 3) electrical systems (reactor protection and confinement system). Microprocessor-aided process-control equipment is difficult to verify for failure-free function owing to the complexity of the functional structures of the hardware and to the variety of software feasible for microprocessors. Hence, qualification will make great demands on the inspecting expert. (DG) [de

  7. Hardware dependencies of GPU-accelerated beamformer performances for microwave breast cancer detection

    Directory of Open Access Journals (Sweden)

    Salomon Christoph J.

    2016-09-01

    Full Text Available UWB microwave imaging has proven to be a promising technique for early-stage breast cancer detection. The extensive image reconstruction time can be accelerated by parallelizing the execution of the underlying beamforming algorithms. However, the efficiency of the parallelization will most likely depend on the grade of parallelism of the imaging algorithm and of the utilized hardware. This paper investigates the dependencies of two different beamforming algorithms on multiple hardware specifications of several graphics boards. The parallel implementation is realized by using NVIDIA’s CUDA. Three conclusions are drawn about the behavior of the parallel implementation and how to efficiently use the accessible hardware.
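
    The beamformers benchmarked in such imaging systems are typically delay-and-sum variants, and pixel-level independence is what makes them amenable to CUDA parallelization. A scalar NumPy sketch of the per-pixel work (each outer-loop iteration would map to one GPU thread; the signal values and delays below are illustrative, not from the record):

    ```python
    import numpy as np

    def delay_and_sum(signals, delays, fs):
        # signals: (n_channels, n_samples); delays: (n_pixels, n_channels) in s.
        # Each pixel sums channel samples picked at its round-trip delays;
        # pixels are independent, so each outer iteration maps to a GPU thread.
        n_px, n_ch = delays.shape
        idx = np.round(delays * fs).astype(int)
        image = np.zeros(n_px)
        for p in range(n_px):
            for c in range(n_ch):
                k = idx[p, c]
                if 0 <= k < signals.shape[1]:
                    image[p] += signals[c, k]
        return image

    signals = np.zeros((2, 100))
    signals[0, 10] = 1.0                    # echo on channel 0 at sample 10
    signals[1, 20] = 1.0                    # echo on channel 1 at sample 20
    delays = np.array([[10.0, 20.0],        # pixel 0: matches both echoes
                       [30.0, 40.0]])       # pixel 1: matches nothing
    image = delay_and_sum(signals, delays, fs=1.0)
    ```

    Pixels whose assumed delays line up with the recorded echoes accumulate energy; the GPU question studied in the record is how efficiently this per-pixel loop maps onto different graphics boards.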

  8. Synthetic seismic monitoring using reverse-time migration and Kirchhoff migration for CO2 sequestration in Korea

    Science.gov (United States)

    Kim, W.; Kim, Y.; Min, D.; Oh, J.; Huh, C.; Kang, S.

    2012-12-01

    During the last two decades, CO2 sequestration in the subsurface has been extensively studied and has progressed as a direct tool to reduce CO2 emissions. Commercial projects such as Sleipner, In Salah and Weyburn, which inject more than one million tons of CO2 per year, are actively operated, as are test projects such as Ketzin, to study the behavior of CO2 and the monitoring techniques. Korea has also begun a CCS (CO2 capture and storage) project. One of the prospects for CO2 sequestration in Korea is the southwestern continental margin of the Ulleung basin. To monitor the behavior of CO2 underground for the evaluation of stability and safety, several geophysical monitoring techniques should be applied. Among the various geophysical monitoring techniques, the seismic survey is considered the most effective tool. To verify CO2 migration in the subsurface more effectively, seismic numerical simulation is an essential process. Furthermore, the efficiency of seismic migration techniques should be investigated for various cases, because numerical seismic simulation and migration tests help us accurately interpret CO2 migration. In this study, we apply reverse-time migration and Kirchhoff migration to synthetic seismic monitoring data generated for a simplified model based on the geological structures of the Ulleung basin in Korea. Synthetic seismic monitoring data are generated for various cases of CO2 migration in the subsurface. From the seismic migration images, we can investigate CO2 diffusion patterns indirectly. From the seismic monitoring simulation, it is noted that while reverse-time migration generates clear subsurface images when subsurface structures are steeply dipping, Kirchhoff migration has an advantage in imaging horizontally layered structures such as depositional sediments appearing on the continental shelf. Reverse-time migration and Kirchhoff migration present reliable subsurface images for the potential site characterized by stratigraphical traps. In case of
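
    Kirchhoff migration, one of the two methods compared in the record, can be illustrated in its simplest zero-offset form: each image point collects amplitude along its diffraction hyperbola t = 2*sqrt(z^2 + (x - xs)^2)/v. A NumPy sketch with a synthetic point diffractor (constant velocity and all parameter values are illustrative, not the Ulleung basin model):

    ```python
    import numpy as np

    def kirchhoff_zero_offset(data, xs, dt, v, xi, zi):
        # data: (n_traces, n_t) zero-offset section recorded at positions xs.
        # Each image point (x, z) sums amplitude along its diffraction
        # hyperbola t = 2 * sqrt(z^2 + (x - xs)^2) / v.
        nx, nt = data.shape
        image = np.zeros((len(xi), len(zi)))
        for i, x in enumerate(xi):
            for j, z in enumerate(zi):
                t = 2.0 * np.hypot(z, x - xs) / v
                k = np.round(t / dt).astype(int)
                ok = k < nt
                image[i, j] = data[np.arange(nx)[ok], k[ok]].sum()
        return image

    # synthetic section over a single point diffractor at (0, 500) m
    v, dt = 2000.0, 0.004
    xs = np.linspace(-500.0, 500.0, 21)
    data = np.zeros((21, 400))
    for a, x in enumerate(xs):
        t = 2.0 * np.hypot(500.0, x) / v
        data[a, int(round(t / dt))] = 1.0

    xi = np.array([-100.0, 0.0, 100.0])
    zi = np.array([400.0, 500.0, 600.0])
    image = kirchhoff_zero_offset(data, xs, dt, v, xi, zi)
    ```

    The amplitude focuses at the true diffractor position, since only there do all 21 traces contribute coherently; mispositioned image points pick up at most a few stray samples.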

  9. Automation Hardware & Software for the STELLA Robotic Telescope

    Science.gov (United States)

    Weber, M.; Granzer, Th.; Strassmeier, K. G.

    The STELLA telescope (a joint project of the AIP, Hamburger Sternwarte and the IAC) is to operate in fully robotic mode, with no human interaction necessary for regular operation. Thus, the hardware must be kept as simple as possible to avoid unnecessary failures, and the environmental conditions must be monitored accurately to protect the telescope in case of bad weather. All computers are standard PCs running Linux, and communication with specialized hardware is done via a RS232/RS485 bus system. The high level (java based) control software consists of independent modules to ease bug-tracking and to allow the system to be extended without changing existing modules. Any command cycle consists of three messages, the actual command sent from the central node to the operating device, an immediate acknowledge, and a final done message, both sent back from the receiving device to the central node. This reply-splitting allows a direct distinction between communication problems (no acknowledge message) and hardware problems (no or a delayed done message). To avoid bug-prone packing of all the sensor-analyzing software into a single package, each sensor-reading and interaction with other sensors is done within a self-contained thread. Weather-decision making is therefore totally decoupled from the core control software to avoid dead-locks in the core module.
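
    The three-message cycle described above (command, immediate acknowledge, final done) can be sketched as a timeout-classified exchange: a missing acknowledge is reported as a communication problem, a missing done as a hardware problem. A Python sketch using a queue as a stand-in for the RS232/RS485 bus (timeout values and message tuples are assumptions, not STELLA's actual protocol):

    ```python
    import queue

    ACK_TIMEOUT, DONE_TIMEOUT = 0.5, 2.0   # illustrative values, not STELLA's

    def send_command(cmd, replies):
        """Issue one command cycle and classify the failure mode.

        Reply splitting: a missing 'ack' points at the communication link,
        a missing (or late) 'done' at the hardware behind it.
        """
        try:
            if replies.get(timeout=ACK_TIMEOUT) != ("ack", cmd):
                return "protocol error"
        except queue.Empty:
            return "communication problem"
        try:
            if replies.get(timeout=DONE_TIMEOUT) != ("done", cmd):
                return "protocol error"
        except queue.Empty:
            return "hardware problem"
        return "ok"

    # a healthy device answers immediately with both messages
    q = queue.Queue()
    q.put(("ack", "focus"))
    q.put(("done", "focus"))
    result = send_command("focus", q)
    ```

    Running the same call against an empty queue yields "communication problem", and against a queue holding only the acknowledge yields "hardware problem", which is exactly the diagnostic distinction the reply splitting buys.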

  10. Rupture hardware minimization in pressurized water reactor piping

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Ski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.F.; Quinones, D.F.; Server, W.L.

    1989-01-01

    For much of the high-energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but also improves the overall safety and integrity of the plant, since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied to Beaver Valley Power Station Unit 2 (BVPS-2); the application to two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in. (152-mm) nominal pipe size that have passed a screening criterion designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in. (76-mm) diameter (outside containment) can qualify for pipe rupture hardware elimination.

  11. Managing migration: the Brazilian case

    OpenAIRE

    Eduardo L. G. Rios-Neto

    2005-01-01

    The objective of this paper is to present the Brazilian migration experience and its relationship with migration management. The article is divided into three parts. First, it reviews some basic facts regarding Brazilian immigration and emigration processes. Second, it focuses on some policy and legal issues related to migration. Finally, it addresses five issues regarding migration management in Brazil.

  12. Migration og etnicitet

    DEFF Research Database (Denmark)

    Christiansen, Connie Carøe

    2004-01-01

    Migration and ethnicity are current and interconnected phenomena, since migration increases the points of contact between population groups. Ethnicities take shape as boundaries are drawn between these groups. Contrary to the expectations of modernization theories, ethnicity did not disappear as a traditional or primordial... way of creating belonging; globally, our time instead appears as an "age of migration", which is apparently also an age in which cultural traits, in the form of ethnicity, constitute important lines along which groups set themselves apart from one another. Both migration and ethnicity bring focus... it takes place in the receiving country, but newer perspectives on migration, exemplified by concepts such as citizenship, transnationalism and diaspora, look beyond the nation-state framework and take in the consequences of migration for the sending countries.

  13. Multi-loop PWR modeling and hardware-in-the-loop testing using ACSL

    International Nuclear Information System (INIS)

    Thomas, V.M.; Heibel, M.D.; Catullo, W.J.

    1989-01-01

    Westinghouse has developed an Advanced Digital Feedwater Control System (ADFCS) which is aimed at reducing feedwater related reactor trips through improved control performance for pressurized water reactor (PWR) power plants. To support control system setpoint studies and functional design efforts for the ADFCS, an ACSL based model of the nuclear steam supply system (NSSS) of a Westinghouse (PWR) was generated. Use of this plant model has been extended from system design to system testing through integration of the model into a Hardware-in-Loop test environment for the ADFCS. This integration includes appropriate interfacing between a Gould SEL 32/87 computer, upon which the plant model executes in real time, and the Westinghouse Distributed Processing family (WDPF) test hardware. A development program has been undertaken to expand the existing ACSL model to include capability to explicitly model multiple plant loops, steam generators, and corresponding feedwater systems. Furthermore, the program expands the ADFCS Hardware-in-Loop testing to include the multi-loop plant model. This paper provides an overview of the testing approach utilized for the ADFCS with focus on the role of Hardware-in-Loop testing. Background on the plant model, methodology and test environment is also provided. Finally, an overview is presented of the program to expand the model and associated Hardware-in-Loop test environment to handle multiple loops

  14. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
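
    The submodel decomposition and fixed-point precisions are specific to the proposed architecture, but the underlying RANSAC-plus-DLT computation can be sketched in floating point: repeatedly fit a projective model to four random correspondences, keep the hypothesis with the most inliers, then refit on all inliers. A NumPy sketch (point data, threshold, and iteration count are illustrative):

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        # direct linear transform: >= 4 correspondences -> 3x3 projective model
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def apply_h(H, pts):
        p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
        return p[:, :2] / p[:, 2:3]

    def ransac_homography(src, dst, n_iter=200, thresh=1.0, seed=0):
        # keep the 4-point hypothesis with the most inliers, refit on them
        rng = np.random.default_rng(seed)
        best = np.zeros(len(src), dtype=bool)
        for _ in range(n_iter):
            idx = rng.choice(len(src), 4, replace=False)
            H = homography_dlt(src[idx], dst[idx])
            err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
            inl = err < thresh
            if inl.sum() > best.sum():
                best = inl
        return homography_dlt(src[best], dst[best]), best

    rng = np.random.default_rng(1)
    src = rng.uniform(0.0, 100.0, (40, 2))
    H_true = np.array([[1.1, 0.05, 3.0],
                       [-0.02, 0.95, -2.0],
                       [1e-4, 2e-4, 1.0]])
    dst = apply_h(H_true, src)
    dst[32:] += rng.uniform(20.0, 50.0, (8, 2))   # gross false matches
    H_est, inliers = ransac_homography(src, dst)
    ```

    The hardware challenge the record addresses is exactly the part this sketch glosses over: carrying the SVD-style model fit and the per-point error test in fixed-point arithmetic with per-coefficient precisions.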

  15. A Cost-Effective Approach to Hardware-in-the-Loop Simulation

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Melters; Hansen, M. R.; Ballebye, M.

    2012-01-01

    This paper presents an approach for developing cost-effective hardware-in-the-loop (HIL) simulation platforms for use in controller software test and development. The approach is aimed at the many smaller manufacturers of e.g. mobile hydraulic machinery, which often do not have very advanced testing facilities at their disposal. A case study is presented where a HIL simulation platform is developed for the controller of a truck-mounted loader crane. The total expense in hardware and software is less than $10,000.

  16. Electrical, electronics, and digital hardware essentials for scientists and engineers

    CERN Document Server

    Lipiansky, Ed

    2012-01-01

    A practical guide for solving real-world circuit board problems Electrical, Electronics, and Digital Hardware Essentials for Scientists and Engineers arms engineers with the tools they need to test, evaluate, and solve circuit board problems. It explores a wide range of circuit analysis topics, supplementing the material with detailed circuit examples and extensive illustrations. The pros and cons of various methods of analysis, fundamental applications of electronic hardware, and issues in logic design are also thoroughly examined. The author draws on more than tw

  17. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-01-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  18. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-05-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  19. Questions of migration and belonging: understandings of migration under neoliberalism in Ecuador.

    Science.gov (United States)

    Lawson, V

    1999-01-01

    This paper explores alternative understandings and experiences of migration under neoliberalism in Ecuador. Through the case study, the study examines migrants' multiple motivations for mobility and their ambivalence toward the process. Insights from the transnational migration literature were drawn on in order to think through the implications of an increasingly contradictory context of economic modernization and its impact upon the sense of possibilities and belonging of migrants. In-depth interviews with urban-destined migrants in Ecuador were drawn on to argue that mobility produces ambivalent development subjects. This argument is developed in three sections. First, the paper centers on the epistemological and theoretical basis for the relevance of migrant narratives in extending theorizations of migration. Second, in-depth interviews with migrants to Quito are drawn on to explore migrants' sense of belonging and regional affiliation, identity formation through migration, and experiences of alienation and disruption in their lives. Lastly, the paper concludes with a retheorization of the role of migration places in migrant identity construction.

  20. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    Directory of Open Access Journals (Sweden)

    Wong Weng-Fai

    2011-01-01

    Full Text Available Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, OpenGL ES application programming interface (API), device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.

  1. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  2. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-01-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally

  3. Control/interlock/display system for EBT-P using commercially-available hardware and firmware

    International Nuclear Information System (INIS)

    Schmitt, R.J.

    1983-01-01

    For the EBT-P project, alternative commercially-available hardware, software and firmware have been employed for control, interlock and data display functions. This paper describes the criteria and rationale used to select that commercial equipment and discusses the important features of the equipment chosen, especially programmable controllers. Additional discussion is centered on interface problems which are encountered upon attempts to integrate equipment from several vendors. Some solutions to these problems are discussed. Details of software and hardware performance during tests are presented. The extent to which the EBT-P hardware and software configuration addresses and resolves various issues is discussed. Several areas have been uncovered in which relatively slight improvements/modifications of commercial programmable controller firmware would significantly improve the capability of this type of hardware in fusion control applications. These improvements are discussed in detail

  4. The European Crisis and Migration to Germany: Expectations and the Diversion of Migration Flows

    OpenAIRE

    Brücker, Herbert; Bertoli, Simone; Fernández-Huertas Moraga, Jesús

    2013-01-01

    The European crisis has diverted migration flows away from countries affected by the recession towards Germany. The diversion process creates a challenge for traditional discrete-choice models that assume that only bilateral factors account for dyadic migration rates. This paper shows how taking into account the sequential nature of migration decisions leads to write the bilateral migration rate as a function of expectations about the evolution of economic conditions in alternative destinatio...

  5. Income Inequality and Migration in Vietnam

    OpenAIRE

    NGUYEN, Tien Dung

    2012-01-01

    In this paper, we have analyzed the recent trends in income inequality, internal and international migrations and investigated the impact of migration on income distribution in Vietnam. Our analysis shows that the effects of migration on income inequality vary with different types of migration, depending on who migrate and where they migrate. Foreign remittances tend to flow toward more affluent households, and they increase income inequality. By contrast, domestic remittances accrue more to ...

  6. Digital Hardware Design Teaching: An Alternative Approach

    Science.gov (United States)

    Benkrid, Khaled; Clayton, Thomas

    2012-01-01

    This article presents the design and implementation of a complete review of undergraduate digital hardware design teaching in the School of Engineering at the University of Edinburgh. Four guiding principles have been used in this exercise: learning-outcome driven teaching, deep learning, affordability, and flexibility. This has identified…

  7. Process Control Migration of 50 LPH Helium Liquefier

    Science.gov (United States)

    Panda, U.; Mandal, A.; Das, A.; Behera, M.; Pal, Sandip

    2017-02-01

    Two helium liquefier/refrigerators are operational at VECC, one of which is dedicated to the Superconducting Cyclotron. The first helium liquefier, of 50 LPH capacity from Air Liquide, has already completed fifteen years of operation without any major trouble. This liquefier is controlled by a Eurotherm PC3000 PLC, which became obsolete some seven years ago. Though we can still run the PLC system with existing spares, there is an ever-present risk of interrupted operation due to the unavailability of spares. In order to eliminate this risk, an equivalent PLC control system based on the Siemens S7-300 was conceived. For smooth migration, all programming was done keeping the same field input and output interfaces, nomenclature and graphset. The new program is a mix of S7-300 Graph, STL and LAD languages. One-to-one program verification of the entire process graph was done manually, and the total program was run in simulation mode. A Matlab mathematical model was also used for plant control simulations, and an EPICS-based SCADA was used for process monitoring. The entire hardware and software setup is now ready for direct replacement with minimum required setup time.

  8. Development of a hardware-in-loop attitude control simulator for a CubeSat satellite

    Science.gov (United States)

    Tapsawat, Wittawat; Sangpet, Teerawat; Kuntanapreeda, Suwat

    2018-01-01

    Attitude control is an important part of satellite on-orbit operation and greatly affects satellite performance. Testing an attitude determination and control subsystem (ADCS) is very challenging, since it requires reproducing the attitude dynamics and space environment of the orbit. This paper develops a low-cost hardware-in-the-loop (HIL) simulator for testing the ADCS of a CubeSat. The simulator consists of a numerical simulation part, a hardware part, and a HIL interface hardware unit. The numerical simulation part includes orbital dynamics, attitude dynamics and the Earth's magnetic field. The hardware part is the real ADCS board of the satellite. The simulation part outputs the satellite's angular velocity and geomagnetic field information to the HIL interface hardware. Based on this information, the interface hardware generates I2C signals mimicking those of the on-board rate-gyros and magnetometers, and outputs them to the ADCS board. The ADCS board reads the rate-gyro and magnetometer signals, calculates control signals, and drives the attitude actuators, which are three magnetic torquers (MTQs). The responses of the MTQs, sensed by a separate magnetometer, are fed back to the numerical simulation part, completing the HIL simulation loop. Experimental studies demonstrate the feasibility and effectiveness of the simulator.
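The closed loop described above can be sketched in a few lines. The following is an illustrative structure only, not the paper's simulator: the plant model, controller law, gains and time step are all hypothetical stand-ins (a single-axis rate model and a simple rate-damping law in place of the real ADCS board).

```python
# Illustrative closed HIL loop: a numerical plant model produces a sensor
# reading, a controller stub (standing in for the real ADCS hardware)
# computes an actuator command, and the response is fed back into the
# model. All dynamics, gains and names here are hypothetical.

def plant_step(omega, torque, dt=0.1, inertia=0.01):
    """One Euler step of a single-axis rigid-body angular-rate model."""
    return omega + (torque / inertia) * dt

def controller(omega_measured, gain=-0.002):
    """Stub for the ADCS board: a simple rate-damping control law."""
    return gain * omega_measured

def run_hil_loop(omega0=1.0, steps=200):
    """Run the simulation/hardware loop and return the final rate."""
    omega = omega0
    for _ in range(steps):
        torque = controller(omega)          # "hardware" side (stubbed)
        omega = plant_step(omega, torque)   # numerical simulation side
    return omega

final_rate = run_hil_loop()  # rate is damped toward zero
```

In a real HIL setup the `controller` call would be replaced by writing the simulated sensor values over the interface (here, I2C) and reading back the actuator commands from the board.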

  9. Migration, Narration, Identity

    DEFF Research Database (Denmark)

    Leese, Peter

    (co-editor with Carly McLaughlin and Wladyslaw Witalisz) This book presents articles resulting from joint research on the representations of migration conducted in connection with the Erasmus Intensive Programme entitled «Migration and Narration» taught to groups of international students over...

  10. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    Science.gov (United States)

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs so that resources are shared effectively. The scheduling of nonpreemptive tasks in the cloud computing environment is an irreversible constraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, which may execute their independent tasks on multiple VMs or on multiple cores of the same VM. Moreover, jobs arrive during the run time of the server at varying random intervals and under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling; this makes cloud computing more efficient and thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load-balancing algorithm, which considers the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparison with existing methods.
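The ingredients the abstract lists (VM capability, task length, task interdependency) can be illustrated with a much simpler placement rule. The sketch below is not the authors' improved weighted round robin; it is a capacity-weighted greedy placement that keeps the dependent tasks of one job together on one VM, shown only to make the inputs concrete.

```python
# Simplified capacity-aware placement (an illustration, not the paper's
# algorithm): each VM carries load in proportion to its capacity weight,
# and all interdependent tasks of a job are placed on the same VM.

def weighted_assign(jobs, vm_weights):
    """jobs: list of jobs; each job is a list of dependent task lengths.
    vm_weights: relative capacity of each VM.
    Returns (vm index chosen per job, final load per VM)."""
    load = [0.0] * len(vm_weights)
    placement = []
    for job in jobs:
        # choose the VM with the smallest load-to-capacity ratio
        vm = min(range(len(vm_weights)),
                 key=lambda i: load[i] / vm_weights[i])
        load[vm] += sum(job)        # whole job stays on one VM
        placement.append(vm)
    return placement, load

jobs = [[4, 2], [3], [6, 1], [2, 2]]   # four jobs of dependent tasks
vms = [2.0, 1.0]                       # VM0 has twice the capacity
placement, load = weighted_assign(jobs, vms)
# placement == [0, 1, 0, 1]; the faster VM0 ends up with the larger load
```

Because nonpreemptive tasks cannot be migrated after they start, the quality of this initial choice is exactly what the paper's algorithm tries to optimize.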

  11. From migration to settlement: the pathways, migration modes and dynamics of neurons in the developing brain

    Science.gov (United States)

    HATANAKA, Yumiko; ZHU, Yan; TORIGOE, Makio; KITA, Yoshiaki; MURAKAMI, Fujio

    2016-01-01

    Neuronal migration is crucial for the construction of the nervous system. To reach their correct destination, migrating neurons choose pathways using physical substrates and chemical cues of either diffusible or non-diffusible nature. Migrating neurons extend a leading and a trailing process. The leading process, which extends in the direction of migration, determines navigation, in particular when a neuron changes its direction of migration. While most neurons simply migrate radially, certain neurons switch their mode of migration between radial and tangential, with the latter allowing migration to destinations far from the neurons’ site of generation. Consequently, neurons with distinct origins are intermingled, which results in intricate neuronal architectures and connectivities and provides an important basis for higher brain function. The trailing process, in contrast, contributes to the late stage of development by turning into the axon, thus contributing to the formation of neuronal circuits. PMID:26755396

  12. Evaluating the scalability of HEP software and multi-core hardware

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A

    2011-01-01

    As researchers have reached the practical limits of processor performance improvements by frequency scaling, it is clear that the future of computing lies in the effective utilization of parallel and multi-core architectures. Since this significant change in computing is well underway, it is vital for HEP programmers to understand the scalability of their software on modern hardware and the opportunities for potential improvements. This work aims to quantify the benefit of new mainstream architectures to the HEP community through practical benchmarking on recent hardware solutions, including the usage of parallelized HEP applications.

  13. Water system hardware and management rehabilitation: Qualitative evidence from Ghana, Kenya, and Zambia.

    Science.gov (United States)

    Klug, Tori; Shields, Katherine F; Cronk, Ryan; Kelly, Emma; Behnke, Nikki; Lee, Kristen; Bartram, Jamie

    2017-05-01

    Sufficient, safe, continuously available drinking water is important for human health and development, yet one in three handpumps in sub-Saharan Africa is non-functional at any given time. Community management, coupled with access to external technical expertise and spare parts, is a widely promoted model for rural water supply management. However, there is limited evidence describing how community management can address common hardware and management failures of rural water systems in sub-Saharan Africa. We identified hardware and management rehabilitation pathways using qualitative data from 267 interviews and 57 focus group discussions in Ghana, Kenya, and Zambia. Study participants were water committee members, community members, and local leaders in 18 communities (six in each study country) with water systems managed by a water committee and supported by World Vision (WV), an international non-governmental organization (NGO). Government, WV or private sector employees engaged in supporting the water systems were also interviewed. Inductive analysis was used to allow pathways to emerge from the data, based on the perspectives and experiences of study participants. Four hardware rehabilitation pathways were identified, based on the types of support used in rehabilitation. Types of support were differentiated as community or external; external support includes financial and/or technical support from government or WV employees. Community actors' understanding of whom to contact when a hardware breakdown occurs and easy access to technical experts were consistent reasons for rapid rehabilitation across all hardware rehabilitation pathways. Three management rehabilitation pathways were identified. All require the involvement of community leaders and were best carried out when the action was participatory. The rehabilitation pathways show how available resources can be leveraged to remedy hardware breakdowns and management failures for rural water systems in sub-Saharan Africa.

  14. Migrations in Slovenian geography textbooks

    Directory of Open Access Journals (Sweden)

    Jurij Senegačnik

    2013-12-01

    In Slovenia, migrations are treated in almost all geography textbooks at the different levels of education. In the textbooks for the sixth to ninth grades of elementary school, students acquire knowledge of migrations through an inductive approach; the difficulty of treatment and the quantity of information increase with age level. In the grammar school programme, knowledge about migration is gained deductively. Most attention is dedicated to migrations in general geography textbooks, while the textbooks for vocational and technical school programmes deal with migrations to a lesser extent and with different approaches.

  15. Palaearctic-African Bird Migration

    DEFF Research Database (Denmark)

    Iwajomo, Soladoye Babatola

    Bird migration has attracted a lot of interest over past centuries, and the methods used for studying this phenomenon have greatly improved in terms of availability, dimension, scale and precision. In spite of these advancements, relatively more is known about the spring migration of trans-Saharan migrants ... of birds from Europe to Africa, which opens up the possibility of studying intra-African migration. I have used long-term, standardized autumn ringing data from southeast Sweden to investigate patterns in biometrics, phenology and population trends as inferred from annual trapping totals. In addition, I ... in the population of the species. The papers show that adult and juvenile birds can use different migration strategies depending on the time of season and prevailing conditions. Also, the fuel loads of some individuals were theoretically sufficient for a direct flight to an important goal area, but whether they do so...

  16. Migration signatures across the decades: Net migration by age in U.S. counties, 1950-2010

    Directory of Open Access Journals (Sweden)

    Richelle L. Winkler

    2015-05-01

    Background: Migration is the primary population redistribution process in the United States. Selective migration by age, race/ethnic group, and spatial location governs population integration, affects community and economic development, contributes to land use change, and structures service needs. Objective: Delineate historical net migration patterns by age, race/ethnic, and rural-urban dimensions for United States counties. Methods: Net migration rates by age for all US counties are aggregated from 1950−2010, summarized by rural-urban location and compared to explore differential race/ethnic patterns of age-specific net migration over time. Results: We identify distinct age-specific net migration 'signatures' that are consistent over time within county types, but different by rural-urban location and race/ethnic group. There is evidence of moderate population deconcentration and diminished racial segregation between 1990 and 2010. This includes a net outflow of Blacks from large urban core counties to suburban and smaller metropolitan counties, continued Hispanic deconcentration, and a slowdown in White counterurbanization. Conclusions: This paper contributes to a fuller understanding of the complex patterns of migration that have redistributed the U.S. population over the past six decades. It documents the variability in county age-specific net migration patterns both temporally and spatially, as well as the longitudinal consistency in migration signatures among county types and race/ethnic groups.

  17. 48 CFR 1812.7000 - Prohibition on guaranteed customer bases for new commercial space hardware or services.

    Science.gov (United States)

    2010-10-01

    ... customer bases for new commercial space hardware or services. 1812.7000 Section 1812.7000 Federal... PLANNING ACQUISITION OF COMMERCIAL ITEMS Commercial Space Hardware or Services 1812.7000 Prohibition on guaranteed customer bases for new commercial space hardware or services. Public Law 102-139, title III...

  18. Growth in spaceflight hardware results in alterations to the transcriptome and proteome

    Science.gov (United States)

    Basu, Proma; Kruse, Colin P. S.; Luesse, Darron R.; Wyatt, Sarah E.

    2017-11-01

    The Biological Research in Canisters (BRIC) hardware has been used to house many biology experiments on both the Space Transport System (STS, commonly known as the space shuttle) and the International Space Station (ISS). However, microscopic examination of Arabidopsis seedlings by Johnson et al. (2015) indicated the hardware itself may affect cell morphology. The experiment herein was designed to assess the effects of the BRIC-Petri Dish Fixation Units (BRIC-PDFU) hardware on the transcriptome and proteome of Arabidopsis seedlings. To our knowledge, this is the first transcriptomic and proteomic comparison of Arabidopsis seedlings grown with and without hardware. Arabidopsis thaliana wild-type Columbia (Col-0) seeds were sterilized and bulk plated on forty-four 60 mm Petri plates, of which 22 were integrated into the BRIC-PDFU hardware and 22 were maintained in closed containers at Ohio University. Seedlings were grown for approximately 3 days, fixed with RNAlater® and stored at -80 °C prior to RNA and protein extraction, with proteins separated into membrane and soluble fractions prior to analysis. The RNAseq analysis identified 1651 differentially expressed genes; MS/MS analysis identified 598 soluble and 589 membrane proteins differentially abundant both at p < .05. Fold enrichment analysis of gene ontology terms related to differentially expressed transcripts and proteins highlighted a variety of stress responses. Some of these genes and proteins have been previously identified in spaceflight experiments, indicating that these genes and proteins may be perturbed by both conditions.

  19. The Great Migration.

    Science.gov (United States)

    Trotter, Joe William, Jr.

    2002-01-01

    Describes the migration of African Americans in the United States and the reasons why African Americans migrated from the south. Focuses on issues, such as the effect of World War I, the opportunities offered in the north, and the emergence of a black industrial working class. (CMK)

  20. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    CPUs and GPGPUs. About the speaker Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat...

  1. High exposure rate hardware ALARA plan

    International Nuclear Information System (INIS)

    Nellesen, A.L.

    1996-10-01

    This as-low-as-reasonably-achievable (ALARA) review describes the engineering and administrative controls used to manage personnel exposure and to control contamination levels and airborne radioactivity concentrations. High exposure rate hardware (HERH) waste comprises hardware found in the N-Fuel Storage Basin with a contact dose rate greater than 1 R/hr, as well as used filters. This waste will be collected in fuel baskets at various locations in the basins.

  2. Hardware-in-the-Loop Simulation for the Automatic Power Control System of Research Reactors

    International Nuclear Information System (INIS)

    Fikry, R.M.; Shehata, S.A.; Elaraby, S.M.; Mahmoud, M.I.; Elbardini, M.M.

    2009-01-01

    Designing and testing a digital control system for any nuclear research reactor can be costly and time consuming. In this paper, a rapid, low-cost prototyping and testing procedure for digital controller design is proposed using the concept of hardware-in-the-loop (HIL). Some of the control loop components are real hardware and the others are simulated. First, the whole system is modeled and tested by real-time simulation (RTS) using conventional simulation techniques such as MATLAB/SIMULINK. Second, the hardware-in-the-loop simulation is tested using Real-Time Windows Target in MATLAB and Visual C++. The control parts included as hardware components are the reactor control rod and its drivers. Two kinds of controllers are studied: proportional-derivative (PD) and fuzzy. An experimental setup for the hardware used in the HIL control of the nuclear research reactor has been realized. Experimental results were obtained and compared with the simulation results; they indicate the validity of the HIL method in this domain.
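Of the two controllers the abstract compares, the PD law is the simpler one. Below is a minimal discrete-time PD sketch driving a toy first-order plant; the gains, sample time, setpoint and plant response are all hypothetical illustrations, not the reactor model from the paper.

```python
# Discrete-time PD control law: u = Kp*e + Kd*de/dt per sample period.
# Gains and the simplified plant below are illustrative only.

def pd_step(error, prev_error, kp=0.8, kd=0.2, dt=0.1):
    """One sample of a PD controller; derivative by backward difference."""
    return kp * error + kd * (error - prev_error) / dt

# drive a toy first-order plant toward a power setpoint
setpoint, power, prev_e = 1.0, 0.0, 0.0
for _ in range(100):
    e = setpoint - power
    u = pd_step(e, prev_e)
    prev_e = e
    power += 0.1 * u     # hypothetical plant response per sample
# after 100 samples the plant output has settled near the setpoint
```

In the HIL configuration described above, the line updating `power` would be replaced by commands sent to the real control-rod drive, with the plant response measured rather than simulated.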

  3. Hardware accuracy counters for application precision and quality feedback

    Science.gov (United States)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani; Huang, Wei; Arora, Manish; Greathouse, Joseph L.

    2018-06-05

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
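The bit analysis the patent describes can be illustrated in software. The sketch below is a hypothetical stand-in for the hardware counter circuit: it finds the narrowest standard signed-integer width that holds a value exactly and bumps a per-width counter. The datatype set and counter layout are assumptions for illustration.

```python
# Software sketch of the accuracy-counter idea: classify each value by the
# minimum precision datatype that represents it, then count occurrences.
# Widths and counter layout are illustrative, not the patented circuit.

def min_int_width(value):
    """Smallest of 8/16/32/64 bits that holds a signed integer exactly."""
    for bits in (8, 16, 32, 64):
        lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
        if lo <= value <= hi:
            return bits
    raise OverflowError("value needs more than 64 bits")

counters = {8: 0, 16: 0, 32: 0, 64: 0}
for v in (100, -30000, 70000, 2**40):
    counters[min_int_width(v)] += 1
# counters now records how often each minimum width sufficed
```

A runtime inspecting such counters could, as the abstract suggests, narrow instruction precision (or gate datapath portions) when a smaller datatype consistently suffices.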

  4. Migrating and herniating hydatid cysts

    International Nuclear Information System (INIS)

    Koc, Zafer; Ezer, Ali

    2008-01-01

    Objective: To present the prevalence and imaging findings of patients with hydatid disease (HD) showing features of migration or herniation of the hydatid cysts (HCs) and underline the clinical significance of this condition. Materials and methods: Between May 2003 and June 2006, 212 patients with HD were diagnosed by abdomen and/or thorax CT, searched for migrating or herniating HC. Imaging findings of 7 patients (5 women, 2 men with an age range of 19-63 years; mean ± S.D., 44 ± 19 years) with HD showing transdiaphragmatic migration (6 subjects) or femoral herniation (1 subject) were evaluated. Diagnosis of all the patients were established by pathologic examination and migration or herniation was confirmed by surgery in all patients. Results: Liver HD were identified in 169 (79.7%) of 212 patients with HD. Transdiaphragmatic migration of HCs were identified in 6 (3.5%) of the 169 patients with liver HD. In one patient, femoral herniation of the retroperitoneal HC into the proximal anterior thigh was identified. All of these seven patients exhibiting migration or herniation of HCs had active HCs including 'daughter cysts'. Two patients had previous surgery because of liver HD and any supradiaphragmatic lesion was not noted before operation. Findings of migration or herniation were confirmed by surgery. Conclusion: Active HCs may show migration or herniation due to pressure difference between the anatomic cavities, and in some of the patients, by contribution of gravity. Previous surgery may be a complementary factor for migration as seen in two of our patients. The possibility of migration or herniation in patients with HD should be considered before surgery

  5. Fast and Reliable Mouse Picking Using Graphics Hardware

    Directory of Open Access Journals (Sweden)

    Hanli Zhao

    2009-01-01

    Mouse picking is the most commonly used intuitive operation for interacting with 3D scenes in a variety of 3D graphics applications, and high performance is necessary to provide users with fast responses. This paper proposes a fast and reliable mouse-picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates picking. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
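The object-space ray-triangle test at the core of such picking is commonly the Möller-Trumbore algorithm. The paper implements its test in a geometry shader; the CPU-side Python below is only an illustrative reference for the same mathematics, not the paper's GPU code.

```python
# Moller-Trumbore ray-triangle intersection, CPU reference sketch.
# Returns the distance t along the ray to the hit point, or None.

def ray_triangle(orig, dir, v0, v1, v2, eps=1e-9):
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    cross = lambda a, b: (a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0])
    dot = lambda a, b: sum(x*y for x, y in zip(a, b))

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(dir, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv         # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(dir, q) * inv           # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# a ray from (0,0,-1) along +z hits this z=0 triangle at t = 1
hit = ray_triangle((0, 0, -1), (0, 0, 1),
                   (-1, -1, 0), (1, -1, 0), (0, 1, 0))
```

On the GPU, one such test runs per triangle in parallel, and the occlusion queries described above cull objects that cannot contain the picked point.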

  6. Advances in neuromorphic hardware exploiting emerging nanoscale devices

    CERN Document Server

    2017-01-01

    This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-Nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as: spike-time-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM) and early adoptions of restricted Boltzmann machines (RBM) to name a few. The contributions discuss system-level power/energy/parasitic trade-offs, and complex real-world applications. The book is suited for both advanced researchers and students interested in the field.

  7. A Hardware Framework for on-Chip FPGA Acceleration

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2016-01-01

    In this work, we present a new framework to dynamically load hardware accelerators on reconfigurable platforms (FPGAs). Given a library of application-specific processors, we load the specific processor into the FPGA on the fly and transfer execution from the CPU to the FPGA-based accelerator. Results show that significant speed-up can be obtained by the proposed acceleration framework on systems-on-chip where reconfigurable fabric is placed next to the CPUs. The speed-up is due both to the intrinsic acceleration in the application-specific processors and to the increased parallelism.

  8. Migration in Asia-Europe Relations

    DEFF Research Database (Denmark)

    Juego, Bonn

    2010-01-01

    There is a remarkable difference between viewing migration as a 'social integration' issue, on the one hand, and as a 'social relation', on the other. The idea of 'social integration' rests on unrealistic assumptions: that migration is a one-way process, that societies and human relations are static, and that migrants are mechanical. Policies founded on unrealistic assumptions are most likely to generate tensions, conflicts, and contradictions. For a migration process to succeed in forging social harmony and development, it is therefore of decisive and crucial importance to regard migration as a 'social relation'. This is simply because successful migration has to be a harmonious synergy between the migrants (and the sending countries they come from) and the receiving society (and its people). As indicated, migrants enter the receiving society not merely as a passive commodity...

  9. Dynamic modelling and hardware-in-the-loop testing of PEMFC

    Energy Technology Data Exchange (ETDEWEB)

    Vath, Andreas; Soehn, Matthias; Nicoloso, Norbert; Hartkopf, Thomas [Technische Universitaet Darmstadt/Institut fuer Elektrische Energiewandlung, Landgraf-Georg-Str. 4, D-64283 Darmstadt (Germany); Lemes, Zijad; Maencher, Hubert [MAGNUM Automatisierungstechnik GmbH, Bunsenstr. 22, D-64293 Darmstadt (Germany)

    2006-07-03

    Modelling and hardware-in-the-loop (HIL) testing of fuel cell components and entire systems open new ways for the design and development of FCs. In this work, proton exchange membrane fuel cells (PEMFC) are dynamically modelled within MATLAB-Simulink at various operating conditions in order to establish a comprehensive description of their dynamic behaviour, as well as to explore the model's use as a diagnostic tool. The set-up of a hardware-in-the-loop (HIL) system enables real-time interaction between the selected hardware and the model. The transport of hydrogen, nitrogen, oxygen, water vapour and liquid water in the gas diffusion and catalyst layers of the stack is incorporated into the model according to its physical and electrochemical characteristics. Other processes investigated include the membrane resistance as a function of the water content during fast load changes. Cells are modelled three-dimensionally and dynamically; for system simulations, a one-dimensional model is preferred to reduce computation time. The model has been verified by experiments with a water-cooled stack. (author)

  10. Wireless Energy Harvesting Two-Way Relay Networks with Hardware Impairments.

    Science.gov (United States)

    Peng, Chunling; Li, Fangwei; Liu, Huaping

    2017-11-13

    This paper considers a wireless energy-harvesting two-way relay (TWR) network in which the relay has energy-harvesting capability and the effects of practical hardware impairments are taken into consideration. In particular, a power-splitting (PS) receiver is adopted at the relay to harvest, from the signals transmitted by the source nodes, the power it needs for relaying the information between them, and each node is assumed to suffer from hardware impairments. We analyze the effect of hardware impairments on both decode-and-forward (DF) and amplify-and-forward (AF) relaying networks. Utilizing newly obtained expressions for the signal-to-noise-plus-distortion ratios, exact analytical expressions for the achievable sum rate and ergodic capacities of both the DF and AF relaying protocols are derived. Additionally, the optimal power-splitting (OPS) ratio that maximizes the instantaneous achievable sum rate is formulated and solved for both protocols. The performance of the DF and AF protocols is evaluated via numerical results, which also show the effects of various network parameters on system performance and on the OPS ratio design.
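The key effect of hardware impairments on such rate expressions can be illustrated numerically. A model common in the hardware-impairment literature (an assumption here, not necessarily the paper's exact expressions) replaces the SNR with a signal-to-noise-plus-distortion ratio SNDR = snr / (snr·κ² + 1) for an aggregate impairment level κ, which saturates at 1/κ² no matter how large the transmit SNR grows.

```python
# Numeric illustration of the distortion ceiling imposed by transceiver
# hardware impairments. The model SNDR = snr/(snr*kappa^2 + 1) is a
# standard simplification assumed for this sketch.

import math

def sndr(snr_linear, kappa):
    """Effective signal-to-noise-plus-distortion ratio."""
    return snr_linear / (snr_linear * kappa**2 + 1.0)

def capacity_bits(snr_linear, kappa):
    """Shannon capacity (bits/s/Hz) at the effective SNDR."""
    return math.log2(1.0 + sndr(snr_linear, kappa))

ideal = capacity_bits(1000.0, 0.0)     # ideal hardware, ~10 bits/s/Hz
impaired = capacity_bits(1000.0, 0.1)  # capped well below the ideal rate
```

Increasing `snr_linear` further barely helps the impaired case: `sndr` can never exceed `1/kappa**2` (here 100), which is exactly the kind of ceiling the paper's exact analysis quantifies for the TWR protocols.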

  11. Hardware implementation of on-chip learning using reconfigurable FPGAs

    International Nuclear Information System (INIS)

    Kelash, H.M.; Sorour, H.S; Mahmoud, I.I.; Zaki, M; Haggag, S.S.

    2009-01-01

    The multilayer perceptron (MLP) is a neural network model that is widely applied to solve diverse problems. Supervised training is necessary before the network can be used. A highly popular learning algorithm called back-propagation is used to train this neural network model; once trained, the MLP can be used to solve classification problems. An interesting way to increase the performance of the model is to use hardware implementations, since hardware can perform the arithmetic operations much faster than software. In this paper, a design and implementation of the sequential (stochastic) mode of the back-propagation algorithm with on-chip learning using field programmable gate arrays (FPGAs) is presented, along with a pipelined adaptation of the on-line back-propagation (BP) algorithm. The hardware implementation of the forward stage, backward stage and weight-update stage of the back-propagation algorithm is also presented. The implementation is based on a SIMD parallel architecture of the forward propagation. The diagnosis of accidents of the multi-purpose research reactor of Egypt is used to test the proposed system.
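The three stages implemented in hardware (forward, backward, weight update) can be sketched in software for a single training pattern. The tiny network below is an illustration of sequential (per-pattern) back-propagation; its size, learning rate and data are hypothetical, not the reactor-diagnosis configuration from the paper.

```python
# Minimal sequential back-propagation: forward pass, backward pass and
# weight update for one pattern, for a 2-3-1 sigmoid network.
# All sizes, rates and data below are illustrative.

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, target, w_hid, w_out, lr=0.5):
    # forward stage
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hid]
    o = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)))
    # backward stage: output delta, then hidden deltas
    d_out = (target - o) * o * (1.0 - o)
    d_hid = [d_out * w_out[j] * h[j] * (1.0 - h[j]) for j in range(len(h))]
    # update-weight stage
    w_out = [w_out[j] + lr * d_out * h[j] for j in range(len(h))]
    w_hid = [[w_hid[j][i] + lr * d_hid[j] * x[i] for i in range(len(x))]
             for j in range(len(w_hid))]
    return w_hid, w_out, o

random.seed(0)
w_hid = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(500):
    w_hid, w_out, out = train_step([1.0, 0.0], 1.0, w_hid, w_out)
# repeated updates drive the output toward the target of 1.0
```

A pipelined FPGA version overlaps these three stages across successive patterns, which is where the hardware speed-up over a software loop like this one comes from.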

  12. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware

    International Nuclear Information System (INIS)

    Kole, J S; Beekman, F J

    2006-01-01

    Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases

  13. Migration Process Evolution in Europe

    Directory of Open Access Journals (Sweden)

    Carmen Tudorache

    2006-08-01

    The migration phenomenon has always existed, fluctuating with the historical context: the economic, political, social and demographic disparities between the Central and East European countries and the EU Member States, the interdependencies between origin and receiving countries, and the evolution of the European integration process. In the European Union, an integrated and inclusive approach to the migration issue is necessary, but a common policy on migration remains an ambitious objective. A common approach to the management of economic migration and the harmonization of the migration policies of the Member States have represented a challenge for the European Union and will become urgent in the future, especially due to demographic ageing.

  14. [International migration in the Middle East].

    Science.gov (United States)

    Beauge, G

    1985-01-01

    This special issue contains a selection of 12 papers by various authors on aspects of international migration to the Middle East. Papers are included on the impact of migration on socioeconomic development, income distribution, and rural capitalization in Egypt; migration from rural Lebanon; the effect of emigration on Pakistan; Indian workers in Oman; inter-Arab migration and development; the role of the state in migration in the Arab peninsula; the dynamics of manpower in Kuwait; the Iraqi model and Arab unity; and the impact of this migration on the concept of the New Economic Order.

  15. EMPIRICAL REFLECTIONS ON MIGRATION PHENOMENON. MAJOR EFFECTS OF MIGRATION ON THE HUMAN CAPITAL

    Directory of Open Access Journals (Sweden)

    Simona BUTA

    2013-12-01

    The paper Empirical reflections on migration phenomenon. Major effects of migration on the human capital analyzes the global and regional migration flows of the workforce (as part of human capital), especially the migration of the highly qualified workforce. The processes of attracting qualified manpower to the labour market have not always been well understood and, in some cases, have generated a series of difficulties. This is why we focus on the „waste of brains” phenomenon, which appears when highly qualified individuals are employed neither in the source country nor in the target country, or, if they are, their job is below their qualifications.

  16. International migration and the gender

    OpenAIRE

    Koropecká, Markéta

    2010-01-01

    My bachelor thesis explores the connection between international migration and gender. Gender, defined as a social, not a biological term, has a huge impact on the migration process. Statistics and expert studies, which have been gender-sensitive since the 1970s, demonstrate that women form half of all international migrants, depending on the world region, and represent a wide range of the kinds of international migration: family formation and reunification, labour migration, illegal ...

  17. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
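
    The RBM update that such hardware engines accelerate can be sketched in software. Below is a minimal NumPy sketch of one contrastive-divergence (CD-1) step, plus an illustrative 64×64 tiling of a 256×256 weight matrix; the tiling granularity and learning rate are assumptions for illustration, not the paper's exact partitioning scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(W, b_vis, b_hid, v0, lr=0.1):
        """One contrastive-divergence (CD-1) update for a binary RBM."""
        # Up-pass: hidden probabilities and a sampled hidden state.
        p_h0 = sigmoid(v0 @ W + b_hid)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Down-pass: reconstruct visibles, then re-infer hiddens.
        p_v1 = sigmoid(h0 @ W.T + b_vis)
        p_h1 = sigmoid(p_v1 @ W + b_hid)
        # Positive-phase minus negative-phase statistics, averaged over the batch.
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / v0.shape[0]
        return W

    # A 256x256 weight matrix tiled into congruent 64x64 blocks, mirroring the
    # idea of distributing one RBM across several FPGA resources.
    W = rng.normal(0.0, 0.01, (256, 256))
    blocks = [W[i:i + 64, j:j + 64] for i in range(0, 256, 64) for j in range(0, 256, 64)]
    ```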

  18. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

    This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO™ and SpaceWire protocols that was funded by Sandia National Laboratories' (SNL's) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. This effort has demonstrated the transport of application layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprised of commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully demonstrated the transfer and routing of application data packets between multiple nodes and was also able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.
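
    The idea of routing application packets from several nodes to one common downlink can be sketched with a static next-hop table; the node names and topology below are invented for illustration, not those of the SNL demonstration.

    ```python
    # Static routing table: each node forwards toward a single downlink node,
    # as in the demonstrated RapidIO/SpaceWire topologies (names are hypothetical).
    NEXT_HOP = {
        "sensor_a": "router_1",
        "sensor_b": "router_1",
        "router_1": "downlink",
    }

    def path_to_downlink(source):
        """Follow next-hop entries until the packet reaches the downlink node."""
        path = [source]
        while path[-1] != "downlink":
            path.append(NEXT_HOP[path[-1]])
        return path
    ```

    With this table, packets from either sensor node converge on `router_1` before reaching the downlink, which is the behaviour the demonstration verified at small scale.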

  19. Cash seeking behaviour and migration: a place-to-place migration function for Cote d'Ivoire.

    Science.gov (United States)

    Velenchik, A D

    1993-12-01

    "This paper presents estimates of an aggregate place-to-place migration function for Cote d'Ivoire based on the premise that migration is motivated by rural residents' desire for cash income. The results indicate that migration from a region responds differently to changes in cash and food income, which supports the idea that it is the composition of rural income, and not just its level, that determines migration flows." excerpt

  20. Bird Migration Under Climate Change - A Mechanistic Approach Using Remote Sensing

    Science.gov (United States)

    Smith, James A.; Blattner, Tim; Messmer, Peter

    2010-01-01

    migratory shorebirds in the central flyways of North America. We demonstrated the phenotypic plasticity of a migratory population of Pectoral sandpipers consisting of an ensemble of 10,000 individual birds in response to changes in stopover locations using an individual-based migration model driven by remotely sensed land surface data, climate data and biological field data. With the advent of new computing capabilities enabled by recent GPGPU computing paradigms and commodity hardware, it is now possible to simulate both larger ensemble populations and to incorporate more realistic mechanistic factors into migration models. Here, we take our first steps using these tools to study the impact of long-term drought variability on shorebird survival.
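
    A minimal sketch of such an individual-based stopover model, with invented energetics (the real model is driven by remotely sensed land surface, climate and field data):

    ```python
    import random

    random.seed(1)

    # Each simulated bird crosses a chain of stopover sites, burning fuel on
    # every leg and refuelling according to site quality. All rates here are
    # illustrative assumptions, not calibrated biology.
    def survival(qualities, n_birds=1000, fuel0=1.0, leg_cost=0.35):
        survivors = 0
        for _ in range(n_birds):
            fuel = fuel0
            for q in qualities:
                fuel -= leg_cost                 # energetic cost of one leg
                if fuel <= 0:                    # bird fails to reach the site
                    break
                fuel = min(1.0, fuel + random.uniform(0, q) * 0.5)
            else:
                survivors += 1                   # completed every leg
        return survivors / n_birds
    ```

    With these assumed rates, degrading the mid-route stopovers, as a long-term drought would, lowers simulated survival of the ensemble.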

  1. A Framework for Dynamically-Loaded Hardware Library (HLL) in FPGA Acceleration

    DEFF Research Database (Denmark)

    Cardarilli, Gian Carlo; Di Carlo, Leonardo; Nannarelli, Alberto

    2016-01-01

    Hardware acceleration is often used to address the need for speed and computing power in embedded systems. FPGAs always represented a good solution for HW acceleration and, recently, new SoC platforms extended the flexibility of the FPGAs by combining on a single chip both high-performance CPUs...... and FPGA fabric. The aim of this work is the implementation of hardware accelerators for these new SoCs. The innovative feature of these accelerators is the on-the-fly reconfiguration of the hardware to dynamically adapt the accelerator's functionalities to the current CPU workload. The realization...... of the accelerators also requires profiling of both the SW (ARM CPU + NEON Units) and HW (FPGA) performance, an evaluation of the partial reconfiguration times and the development of an application-specific IP-core library. This paper focuses on the profiling aspect of both the SW and HW...
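
    On-the-fly selection between software and hardware execution can be sketched as a cost comparison that charges the partial-reconfiguration time only when the accelerator is not already loaded. The profiling numbers and task names below are invented for the sketch, not measurements from the paper.

    ```python
    # Hypothetical profiling data: task -> (sw_time_ms, hw_time_ms, reconfig_ms)
    PROFILE = {
        "fir_filter": (12.0, 1.5, 4.0),
        "matrix_mul": (30.0, 3.0, 6.0),
        "checksum":   (2.0,  1.0, 8.0),   # too cheap in SW to justify reconfiguring
    }

    def choose_target(task, cpu_load, hw_loaded):
        """Pick 'fpga' or 'cpu' for a task given current CPU load (0..1)."""
        sw, hw, cfg = PROFILE[task]
        hw_cost = hw + (0.0 if hw_loaded else cfg)   # pay reconfig only if needed
        # Under high CPU load, the effective software service time grows.
        sw_cost = sw / max(1e-6, 1.0 - cpu_load)
        return "fpga" if hw_cost < sw_cost else "cpu"
    ```

    The interesting case is a cheap task like the hypothetical `checksum`: on an idle CPU the reconfiguration overhead makes software the better choice, but a busy CPU flips the decision toward the FPGA.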

  2. Efficient Hardware Implementation For Fingerprint Image Enhancement Using Anisotropic Gaussian Filter.

    Science.gov (United States)

    Khan, Tariq Mahmood; Bailey, Donald G; Khan, Mohammad A U; Kong, Yinan

    2017-05-01

    A real-time image filtering technique is proposed which could result in faster implementation for fingerprint image enhancement. One major hurdle associated with fingerprint filtering techniques is the expensive nature of their hardware implementations. To circumvent this, a modified anisotropic Gaussian filter is efficiently adopted in hardware by decomposing the filter into two orthogonal Gaussians and an oriented line Gaussian. An architecture is developed for dynamically controlling the orientation of the line Gaussian filter. To further improve the performance of the filter, the input image is homogenized by a local image normalization. In the proposed structure, for a middle-range reconfigurable FPGA, both parallel compute-intensive and real-time demands were achieved. We manage to efficiently speed up the image-processing time and improve the resource utilization of the FPGA. Test results show an improved speed for its hardware architecture while maintaining reasonable enhancement benchmarks.
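
    The orthogonal-decomposition idea can be illustrated in software: a 2-D isotropic Gaussian is exactly two 1-D passes along orthogonal axes (the paper's additional oriented line Gaussian is omitted here). A minimal NumPy sketch, with zero padding at the borders:

    ```python
    import numpy as np

    def gauss1d(sigma):
        """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
        radius = int(3 * sigma)
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        return k / k.sum()

    def conv1d_axis(img, kernel, axis):
        # Convolve every 1-D slice along `axis` with the kernel (zero padding).
        return np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, img)

    def separable_gauss(img, sigma):
        """Two orthogonal 1-D passes replace one dense 2-D convolution."""
        k = gauss1d(sigma)
        return conv1d_axis(conv1d_axis(img, k, 0), k, 1)
    ```

    The hardware benefit is the same as in software: for a kernel of width w, two 1-D passes cost O(2w) multiplies per pixel instead of O(w²) for the dense 2-D filter.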

  3. Samtidskunst og migration [Contemporary art and migration]

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    2010-01-01

    "Samtidskunst og migration. En oversigt over faglitteraturen" ("Contemporary Art and Migration: A Survey of the Literature") is a research overview taking stock of what has so far been written, within the field of art history, about contemporary visual art and migration as a political, social and cultural phenomenon, primarily in connection with immigration to...

  4. RSA migration of total knee replacements.

    Science.gov (United States)

    Pijls, Bart G; Plevier, José W M; Nelissen, Rob G H H

    2018-06-01

    Purpose - We performed a systematic review and meta-analyses to evaluate the early and long-term migration patterns of tibial components of TKR of all known RSA studies. Methods - Migration pattern was defined as at least 2 postoperative RSA follow-up moments. Maximal total point motion (MTPM) at 6 weeks, 3 months, 6 months, 1 year, 2 years, 5 years, and 10 years were considered. Results - The literature search yielded 1,167 hits of which 53 studies were included, comprising 111 study groups and 2,470 knees. The majority of the early migration occurred in the first 6 months postoperatively followed by a period of stability, i.e., no or very little migration. Cemented and uncemented tibial components had different migration patterns. For cemented tibial components there was no difference in migration between all-poly and metal-backed components, between mobile bearing and fixed bearing, between cruciate retaining and posterior stabilized. Furthermore, no difference existed between TKR measured with model-based RSA or marker-based RSA methods. For uncemented TKR there was some variation in migration with the highest migration for uncoated TKR. Interpretation - The results from this meta-analysis on RSA migration of TKR are in line with both the survival analyses results from joint registries of these TKRs as well as revision rates results from meta-analyses, thus providing further proof for the association between early migration and late revision for loosening. The pooled migration patterns can be used both as benchmarks and for defining migration thresholds for future evaluation of new TKR.

  5. Current Migration Movements in Europe

    Directory of Open Access Journals (Sweden)

    Jelena Zlatković Winter

    2004-09-01

    Full Text Available After a brief historical review of migrations in Europe, the paper focuses on current migration trends and their consequences. At the end of the 1950s, Western Europe began to recruit labour from several Mediterranean countries – Italy, Spain, Portugal and former Yugoslavia, and later from Morocco, Algeria, Tunisia and Turkey. Some countries, such as France, Great Britain and the Netherlands, recruited also workers from their former colonies. In 1970 Germany had the highest absolute number of foreigners, followed by France, and then Switzerland and Belgium. The total number of immigrants in Western Europe was twelve million. During the 1970s mass recruitment of foreign workers was abandoned, and only the arrival of their family members was permitted, which led to family reunification in the countries of employment. Europe closed its borders, with the result that clandestine migration increased. The year 1989 was a turning point in the history of international migrations. The political changes in Central and Eastern Europe brought about mass migration to the West, which culminated in the so-called “mass movement of 1989–1990”. The arrival of ethnic Germans in Germany, migration inside and outside of the territory of the former Soviet Union, an increase in the number of asylum seekers and displaced persons, due to armed conflicts, are – according to the author – the main traits of current migration. The main part of the paper discusses the causes and effects of this mass wave, as well as trends in labour migration, which is still present. The second part of the paper, after presenting a typology of migrations, deals with the complex processes that brought about the formation of new communities and led to the phenomenon of new ethnic minorities and to corresponding migration policies in Western European countries that had to address these issues.

  6. Investigation of force, contact area, and dwell time in finger-tapping tasks on membrane touch interface.

    Science.gov (United States)

    Liu, Na; Yu, Ruifeng

    2018-06-01

    This study aimed to determine the touch characteristics during tapping tasks on a membrane touch interface and investigate the effects of posture and gender on touch characteristics variables. One hundred participants tapped digits displayed on a membrane touch interface in sitting and standing positions using all fingers of the dominant hand. Touch characteristics measures included average force, contact area, and dwell time. Across fingers and postures, males exerted larger force and contact area than females, but similar dwell time. Across genders and postures, the thumb exerted the largest force, and the force of the other four fingers showed no significant difference. The contact area of the thumb was the largest, whereas that of the little finger was the smallest; the dwell time of the thumb was the longest, whereas that of the middle finger was the shortest. Relationships among finger sizes, gender, posture and touch characteristics were proposed. The findings help direct membrane touch interface design for digital and numerical control products from hardware and software perspectives. Practitioner Summary: This study measured force, contact area, and dwell time in tapping tasks on a membrane touch interface and examined the effects of gender and posture on force, contact area, and dwell time. The findings will direct membrane touch interface design for digital and numerical control products from hardware and software perspectives.

  7. The Transformative Forces of Migration: Refugees and the Re-Configuration of Migration Societies

    Directory of Open Access Journals (Sweden)

    Ulrike Hamann

    2018-03-01

    Full Text Available In this thematic issue, we attempt to show how migrations transform societies at the local and micro level by focusing on how migrants and refugees navigate within different migration regimes. We pay particular attention to the specific formation of the migration regimes that these countries adopt, which structure the conditions of the economic, racialised, gendered, and sexualized violence and exploitation during migration processes. This interactive process of social transformation shapes individual experiences while also being shaped by them. We aim to contribute to the most recent and challenging question of what kind of political and social changes can be observed, and how to frame these changes theoretically, if we look at local levels while focusing on struggles for recognition, rights, and urban space. We bring in a cross-country comparative perspective, spanning Canada, Chile, Spain, Sweden, Turkey and Germany, in order to lay out the similarities and differences in each case, within which our authors analyse these transformative forces of migration.

  8. Round Girls in Square Computers: Feminist Perspectives on the Aesthetics of Computer Hardware.

    Science.gov (United States)

    Carr-Chellman, Alison A.; Marra, Rose M.; Roberts, Shari L.

    2002-01-01

    Considers issues related to computer hardware, aesthetics, and gender. Explores how gender has influenced the design of computer hardware and how these gender-driven aesthetics may have worked to maintain, extend, or alter gender distinctions, roles, and stereotypes; discusses masculine media representations; and presents an alternative model.…

  9. Contamination Examples and Lessons from Low Earth Orbit Experiments and Operational Hardware

    Science.gov (United States)

    Pippin, Gary; Finckenor, Miria M.

    2009-01-01

    Flight experiments flown on the Space Shuttle, the International Space Station, Mir, Skylab, and free flyers such as the Long Duration Exposure Facility, the European Retrievable Carrier, and the EFFU provide multiple opportunities for the investigation of molecular contamination effects. Retrieved hardware from the Solar Maximum Mission satellite, Mir, and the Hubble Space Telescope has also provided the means of gaining insight into contamination processes. Images from the above-mentioned hardware show contamination effects due to materials processing, hardware storage, and pre-flight cleaning, as well as on-orbit events such as outgassing, mechanical failure of hardware in close proximity, impacts from man-made debris, and changes due to natural environment factors. Contamination effects include significant changes to the thermal and electrical properties of thermal control surfaces, optics, and power systems. Data from several flights have been used to develop a rudimentary estimate of asymptotic values for absorptance changes due to long-term solar exposure (4000-6000 Equivalent Sun Hours) of silicone-based molecular contamination deposits of varying thickness. Recommendations and suggestions for processing changes and constraints based on the on-orbit observed results will be presented.

  10. SEnviro: A Sensorized Platform Proposal Using Open Hardware and Open Standards

    Directory of Open Access Journals (Sweden)

    Sergio Trilles

    2015-03-01

    Full Text Available The need for constant monitoring of environmental conditions has produced an increase in the development of wireless sensor networks (WSN). The drive towards smart cities has produced the need for smart sensors to be able to monitor what is happening in our cities. This, combined with the decrease in hardware component prices and the increase in the popularity of open hardware, has favored the deployment of sensor networks based on open hardware. The new trends in Internet Protocol (IP) communication between sensor nodes allow sensor access via the Internet, turning them into smart objects (Internet of Things and Web of Things). Currently, WSNs provide data in different formats. There is a lack of communication protocol standardization, which turns into interoperability issues when connecting different sensor networks or even when connecting different sensor nodes within the same network. This work presents a sensorized platform proposal that adheres to the principles of the Internet of Things and the Web of Things. Wireless sensor nodes were built using open hardware solutions, and communications rely on the HTTP/IP Internet protocols. The Open Geospatial Consortium (OGC) SensorThings API candidate standard was used as a neutral format to avoid interoperability issues. An environmental WSN developed following the proposed architecture was built as a proof of concept. Details on how to build each node and a study regarding energy concerns are presented.
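
    As a sketch of the neutral format, a node might publish a reading as a SensorThings Observation linked to its Datastream via `@iot.id`; the id, value, timestamp, and endpoint path mentioned below are hypothetical.

    ```python
    import json

    # Shape of an OGC SensorThings API Observation as a sensor node might
    # publish it over HTTP. A real deployment would POST this JSON to its
    # SensorThings service (e.g. a path like /v1.0/Observations); all concrete
    # values here are invented for illustration.
    def make_observation(datastream_id, result, phenomenon_time):
        return {
            "phenomenonTime": phenomenon_time,
            "result": result,
            "Datastream": {"@iot.id": datastream_id},
        }

    payload = json.dumps(make_observation(42, 21.7, "2015-03-06T10:00:00Z"))
    ```

    Because every node emits the same entity shape, a client can consume readings from heterogeneous networks without per-network parsing, which is the interoperability point of the proposal.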

  11. SEnviro: a sensorized platform proposal using open hardware and open standards.

    Science.gov (United States)

    Trilles, Sergio; Luján, Alejandro; Belmonte, Óscar; Montoliu, Raúl; Torres-Sospedra, Joaquín; Huerta, Joaquín

    2015-03-06

    The need for constant monitoring of environmental conditions has produced an increase in the development of wireless sensor networks (WSN). The drive towards smart cities has produced the need for smart sensors to be able to monitor what is happening in our cities. This, combined with the decrease in hardware component prices and the increase in the popularity of open hardware, has favored the deployment of sensor networks based on open hardware. The new trends in Internet Protocol (IP) communication between sensor nodes allow sensor access via the Internet, turning them into smart objects (Internet of Things and Web of Things). Currently, WSNs provide data in different formats. There is a lack of communication protocol standardization, which turns into interoperability issues when connecting different sensor networks or even when connecting different sensor nodes within the same network. This work presents a sensorized platform proposal that adheres to the principles of the Internet of Things and the Web of Things. Wireless sensor nodes were built using open hardware solutions, and communications rely on the HTTP/IP Internet protocols. The Open Geospatial Consortium (OGC) SensorThings API candidate standard was used as a neutral format to avoid interoperability issues. An environmental WSN developed following the proposed architecture was built as a proof of concept. Details on how to build each node and a study regarding energy concerns are presented.

  12. Globalization, Migration and Development

    Directory of Open Access Journals (Sweden)

    George, Susan

    2002-01-01

    Full Text Available Migration may become the most important branch of demography in the early decades of the new millennium in a rapidly globalizing world. This paper discusses the causes, costs and benefits of international migration to countries of the South and North, and key issues of common concern. International migration is as old as national boundaries, though its nature, volume, direction, causes and consequences have changed. The causes of migration are rooted in the rate of population growth and the proportion of youth in the population, their education and training, employment opportunities, income differentials in society, communication and transportation facilities, political freedom and human rights, and the level of urbanization. Migration benefits the South through remittances of migrants, improves the economic welfare of the population (particularly women) of South countries generally, increases investment, and leads to structural changes in the economy. However, emigration from the South has costs too, be they social or caused by factors such as brain drain. The North also benefits from migration through enhancement of economic growth, development of natural resources, improved employment prospects, social development and exposure to immigrants' new cultures and lifestyles. Migration also has costs for the North, such as immigrant integration, a certain amount of destabilization of the economy, illegal immigration, and the social problems of discrimination and exploitation. Issues common to both North and South include the impact on private investment, trade, international cooperation, and sustainable development. Both North and South face a dilemma in seeking an appropriate balance between importing the South's labour or its products and exporting capital and technology from the North.

  13. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  14. Hardware Design of a Smart Meter

    OpenAIRE

    Ganiyu A. Ajenikoko; Anthony A. Olaomi

    2014-01-01

    Smart meters are electronic measurement devices used by utilities to communicate information for billing customers and operating their electric systems. This paper presents the hardware design of a smart meter. Sensing and circuit protection circuits are included in the design of the smart meter, in which resistors are naturally a fundamental part of the electronic design. Smart meters provide a route for energy savings, real-time pricing, automated data collection and elimina...

  15. Mechanical Design of a Performance Test Rig for the Turbine Air-Flow Task (TAFT)

    Science.gov (United States)

    Forbes, John C.; Xenofos, George D.; Farrow, John L.; Tyler, Tom; Williams, Robert; Sargent, Scott; Moharos, Jozsef

    2004-01-01

    To support development of the Boeing-Rocketdyne RS84 rocket engine, a full-flow, reaction turbine geometry was integrated into the NASA-MSFC turbine air-flow test facility. A mechanical design was generated which minimized the amount of new hardware while incorporating all test and instrumentation requirements. This paper provides details of the mechanical design for this Turbine Air-Flow Task (TAFT) test rig. The mechanical design process utilized for this task included the following basic stages: conceptual design; preliminary design; detailed design; design baseline (including configuration control and drawing revision); fabrication; and assembly. During the design process, many lessons were learned that should benefit future test rig design projects. Of primary importance are well-defined requirements early in the design process, a thorough detailed design package, and effective communication with both the customer and the fabrication contractors.

  16. How neurons migrate: a dynamic in-silico model of neuronal migration in the developing cortex

    LENUS (Irish Health Repository)

    Setty, Yaki

    2011-09-30

    Abstract Background Neuronal migration, the process by which neurons migrate from their place of origin to their final position in the brain, is a central process for normal brain development and function. Advances in experimental techniques have revealed much about many of the molecular components involved in this process. Notwithstanding these advances, how the molecular machinery works together to govern the migration process has yet to be fully understood. Here we present a computational model of neuronal migration, in which four key molecular entities, Lis1, DCX, Reelin and GABA, form a molecular program that mediates the migration process. Results The model simulated the dynamic migration process, consistent with in-vivo observations of morphological, cellular and population-level phenomena. Specifically, the model reproduced migration phases, cellular dynamics and population distributions that concur with experimental observations in normal neuronal development. We tested the model under reduced activity of Lis1 and DCX and found an aberrant development similar to observations in Lis1 and DCX silencing expression experiments. Analysis of the model gave rise to unforeseen insights that could guide future experimental study. Specifically: (1) the model revealed the possibility that under conditions of Lis1 reduced expression, neurons experience an oscillatory neuron-glial association prior to the multipolar stage; and (2) we hypothesized that observed morphology variations in rats and mice may be explained by a single difference in the way that Lis1 and DCX stimulate bipolar motility. From this we make the following predictions: (1) under reduced Lis1 and enhanced DCX expression, we predict a reduced bipolar migration in rats, and (2) under enhanced DCX expression in mice we predict a normal or a higher bipolar migration. Conclusions We present here a system-wide computational model of neuronal migration that integrates theory and data within a precise

  17. How neurons migrate: a dynamic in-silico model of neuronal migration in the developing cortex

    Directory of Open Access Journals (Sweden)

    Skoblov Nikita

    2011-09-01

    Full Text Available Abstract Background Neuronal migration, the process by which neurons migrate from their place of origin to their final position in the brain, is a central process for normal brain development and function. Advances in experimental techniques have revealed much about many of the molecular components involved in this process. Notwithstanding these advances, how the molecular machinery works together to govern the migration process has yet to be fully understood. Here we present a computational model of neuronal migration, in which four key molecular entities, Lis1, DCX, Reelin and GABA, form a molecular program that mediates the migration process. Results The model simulated the dynamic migration process, consistent with in-vivo observations of morphological, cellular and population-level phenomena. Specifically, the model reproduced migration phases, cellular dynamics and population distributions that concur with experimental observations in normal neuronal development. We tested the model under reduced activity of Lis1 and DCX and found an aberrant development similar to observations in Lis1 and DCX silencing expression experiments. Analysis of the model gave rise to unforeseen insights that could guide future experimental study. Specifically: (1) the model revealed the possibility that under conditions of Lis1 reduced expression, neurons experience an oscillatory neuron-glial association prior to the multipolar stage; and (2) we hypothesized that observed morphology variations in rats and mice may be explained by a single difference in the way that Lis1 and DCX stimulate bipolar motility. From this we make the following predictions: (1) under reduced Lis1 and enhanced DCX expression, we predict a reduced bipolar migration in rats, and (2) under enhanced DCX expression in mice we predict a normal or a higher bipolar migration. 
Conclusions We present here a system-wide computational model of neuronal migration that integrates theory and data within a
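
    A toy stochastic rule conveys the flavor of such a model: if Lis1 and DCX jointly raise the per-step probability of switching to bipolar, glia-guided motility, then reducing Lis1 reduces the bipolar fraction of the population. The multiplicative coupling and all rates below are arbitrary illustration, not the model's actual molecular program.

    ```python
    import random

    random.seed(7)

    # Each simulated neuron starts multipolar and may switch to the bipolar,
    # glia-guided state with a per-step probability scaled by Lis1 and DCX
    # activity levels (1.0 = normal). Purely illustrative rates.
    def fraction_bipolar(lis1=1.0, dcx=1.0, n=5000, steps=20):
        bipolar = 0
        for _ in range(n):
            state = "multipolar"
            for _ in range(steps):
                if state == "multipolar" and random.random() < 0.1 * lis1 * dcx:
                    state = "bipolar"
            bipolar += state == "bipolar"
        return bipolar / n
    ```

    Even this crude rule reproduces the qualitative silencing result: lowering `lis1` leaves more cells stuck in the multipolar stage at the end of the simulation.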

  18. MIGRATION IMPACT ON ECONOMICAL SITUATION

    Directory of Open Access Journals (Sweden)

    Virginia COJOCARU

    2016-01-01

    Full Text Available This paper presents recent trends and flows of labour migration and its impact on economic and social life. The main aim of this research is to establish the influence of migration on the European economy and its competitiveness. The methods of research are: comparison, analysis, deduction, statistics and modelling. The economic impact of migration has been intensively studied but is still often driven by ill-informed perceptions, which, in turn, can lead to public antagonism towards migration. These negative views risk jeopardising efforts to adapt migration policies to the new economic and demographic challenges facing many countries. Migration Policy looks at the evidence for how immigrants affect the economy in three main areas: the labour market, the public purse and economic growth. In Europe, the scope of labour mobility greatly increased within the EU/EFTA zones following the EU enlargements of 2004, 2007 and 2014-2015. This added to labour markets' adjustment capacity. Recent estimates suggest that as much as a quarter of the asymmetric labour market shock – that is, one occurring at different times and with different intensities across countries – may have been absorbed by migration within a year.

  19. An interactive audio-visual installation using ubiquitous hardware and web-based software deployment

    Directory of Open Access Journals (Sweden)

    Tiago Fernandes Tavares

    2015-05-01

    Full Text Available This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims at being deployed using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware pieces that are built into most modern personal computers. This scenario imposes specific technical restrictions, which lead to solutions combining both technical and artistic aspects of the installation. The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the provided experience is interesting and engaging, regardless of the use of minimal hardware.

  20. Ultrasound gel minimizes third body debris with partial hardware removal in joint arthroplasty

    Directory of Open Access Journals (Sweden)

    Aidan C. McGrory

    2017-03-01

    Full Text Available Hundreds of thousands of revision surgeries for hip, knee, and shoulder joint arthroplasties are now performed worldwide annually. Partial removal of hardware during some types of revision surgeries may create significant amounts of third body metal, polymer, or bone cement debris. Retained debris may lead to a variety of negative health effects including damage to the joint replacement. We describe a novel technique for the better containment and easier removal of third body debris during partial hardware removal. We demonstrate hardware removal on a hip joint model in the presence and absence of water-soluble gel to depict the reduction in metal debris volume and area of spread.

  1. Fracture of fusion mass after hardware removal in patients with high sagittal imbalance.

    Science.gov (United States)

    Sedney, Cara L; Daffner, Scott D; Stefanko, Jared J; Abdelfattah, Hesham; Emery, Sanford E; France, John C

    2016-04-01

As spinal fusions become more common and more complex, so do the sequelae of these procedures, some of which remain poorly understood. The authors report on a series of patients who underwent removal of hardware after CT-proven solid fusion, confirmed by intraoperative findings. These patients later developed a spontaneous fracture of the fusion mass that was not associated with trauma. A series of such patients has not previously been described in the literature. An unfunded, retrospective review of the surgical logs of 3 fellowship-trained spine surgeons yielded 7 patients who suffered a fracture of a fusion mass after hardware removal. Adult patients from the West Virginia University Department of Orthopaedics who underwent hardware removal in the setting of adjacent-segment disease (ASD), and subsequently experienced fracture of the fusion mass through the uninstrumented segment, were studied. The medical records and radiological studies of these patients were examined for patient demographics and comorbidities, initial indication for surgery, total number of surgeries, timeline of fracture occurrence, and risk factors for fracture, as well as sagittal imbalance. All 7 patients underwent hardware removal in conjunction with an extension of fusion for ASD. All had CT-proven solid fusion of their previously fused segments, which was confirmed intraoperatively. All patients had previously undergone multiple operations for a variety of indications; 4 patients were smokers, and 3 patients had osteoporosis. Spontaneous fracture of the fusion mass occurred in all patients and was not due to trauma. These fractures occurred 4 months to 4 years after hardware removal. All patients had significant sagittal imbalance of 13-15 cm. The fracture level was L-5 in 6 of the 7 patients, which was the first uninstrumented level caudal to the newly placed hardware in all 6 of these patients. Six patients underwent surgery due to this fracture. The authors present a case series of 7 patients.

  2. Smart Home Hardware-in-the-Loop Testing

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Annabelle

    2017-07-12

    This presentation provides a high-level overview of NREL's smart home hardware-in-the-loop testing. It was presented at the Fourth International Workshop on Grid Simulator Testing of Energy Systems and Wind Turbine Powertrains, held April 25-26, 2017, hosted by NREL and Clemson University at the Energy Systems Integration Facility in Golden, Colorado.

  3. Solar cooling in the hardware-in-the-loop test; Solare Kuehlung im Hardware-in-the-Loop-Test

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Sandra; Radosavljevic, Rada; Goebel, Johannes; Gottschald, Jonas; Adam, Mario [Fachhochschule Duesseldorf (Germany). Erneuerbare Energien und Energieeffizienz E2

    2012-07-01

The first part of the BMBF-funded research project 'Solar cooling in the hardware-in-the-loop test' (SoCool HIL) deals with the simulation of a solar refrigeration system using the simulation environment Matlab/Simulink with the toolboxes Stateflow and Carnot. Dynamic annual simulations and DoE-supported parameter variations were used to select meaningful system configurations, control strategies and dimensioning of components. The second part of the project deals with hardware-in-the-loop tests using the 17.5 kW absorption chiller of the company Yazaki Europe Limited (Hertfordshire, United Kingdom). For this, the chiller is operated on a test bench in order to emulate the behavior of the other system components (solar circuit with heat storage, recooling, buildings and cooling distribution/transfer). The chiller is controlled by a simulation of the system using Matlab/Simulink/Carnot. Based on the knowledge of the real dynamic performance of the chiller, the simulation model of the chiller can then be validated. Further tests are used to optimize the control of the chiller for the current cooling load. In addition, some changes in system configuration (for example cold backup) are tested with the real machine. The results of these tests and the findings on the dynamic performance of the chiller are presented.

  4. Interfacing Hardware Accelerators to a Time-Division Multiplexing Network-on-Chip

    DEFF Research Database (Denmark)

    Pezzarossa, Luca; Sørensen, Rasmus Bo; Schoeberl, Martin

    2015-01-01

    This paper addresses the integration of stateless hardware accelerators into time-predictable multi-core platforms based on time-division multiplexing networks-on-chip. Stateless hardware accelerators, like floating-point units, are typically attached as co-processors to individual processors in ...... implementation. The design evaluation is carried out using the open source T-CREST multi-core platform implemented on an Altera Cyclone IV FPGA. The size of the proposed design, including a floating-point accelerator, is about two-thirds of a processor....

  5. Hardware Descriptive Languages: An Efficient Approach to Device ...

    African Journals Online (AJOL)

Contemporarily, owing to astronomical advancements in the very large scale integration (VLSI) market segments, hardware engineers are now focusing on how to develop their new digital system designs in programmable languages like very high speed integrated circuit hardware description language (VHDL) and Verilog ...

  6. Another way of doing RSA cryptography in hardware

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Honary, B.

    2001-01-01

In this paper we describe an efficient and secure hardware implementation of the RSA cryptosystem. Modular exponentiation is based on Montgomery's method without any modular reduction, achieving the optimal bound. The presented systolic array architecture is scalable in several parameters which makes

  7. Hardware-Oblivious Parallelism for In-Memory Column-Stores

    NARCIS (Netherlands)

    M. Heimel; M. Saecker; H. Pirk (Holger); S. Manegold (Stefan); V. Markl

    2013-01-01

The multi-core architectures of today's computer systems make parallelism a necessity for performance critical applications. Writing such applications in a generic, hardware-oblivious manner is a challenging problem: Current database systems thus rely on labor-intensive and error-prone

  8. Generalized Distance Transforms and Skeletons in Graphics Hardware

    NARCIS (Netherlands)

    Strzodka, R.; Telea, A.

    2004-01-01

    We present a framework for computing generalized distance transforms and skeletons of two-dimensional objects using graphics hardware. Our method is based on the concept of footprint splatting. Combining different splats produces weighted distance transforms for different metrics, as well as the

  9. 34 CFR 464.42 - What limit applies to purchasing computer hardware and software?

    Science.gov (United States)

    2010-07-01

    ... software? 464.42 Section 464.42 Education Regulations of the Offices of the Department of Education... computer hardware and software? Not more than ten percent of funds received under any grant under this part may be used to purchase computer hardware or software. (Authority: 20 U.S.C. 1208aa(f)) ...

  10. Discomforts of tasks video users: The case of workers of ENEA area in Bologna (Italy)

    International Nuclear Information System (INIS)

    Cenni, P.; Varasano, R.; Cavallaro, E.; Arduini, R.; Mattioli, S.; Tuozzi, G.

    1996-08-01

The aim of this study is to investigate the subjective musculoskeletal discomfort in a sample of 334 workers (204 males and 130 females) of the ENEA area in Bologna, using video display units (VDU) for different periods and tasks. On a preliminary self-report questionnaire the workers reported: hours at VDU per day, type of task, type of hardware, physical-ergonomic environmental conditions, and equipment and layout of the work station. Then, on a case-history form, the researcher recorded for each subject: anamnesis, fatigue symptoms, and discomfort and pain perceived in the back, shoulder, hand and arm regions. The results have shown that, in this sample, the main cause of potential musculoskeletal problems is related to aging, whereas the women mainly engaged in repetitive jobs seem more likely to be subject to these troubles. Regarding the correlation between type of task and musculoskeletal discomfort, the females report more complaints than the males, especially when the task is repetitive. The number of hours spent at VDU per day is also higher for the females. Finally, regarding physical-ergonomic environmental conditions, most users did not report discomforts related to lighting, noise, microclimate or to the work station (equipment and layout)

  11. {sup 18}F-FDG PET/CT evaluation of children and young adults with suspected spinal fusion hardware infection

    Energy Technology Data Exchange (ETDEWEB)

Bagrosky, Brian M. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States); Hayes, Kari L.; Fenton, Laura Z. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); Koo, Phillip J. [University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States)

    2013-08-15

Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact with standard imaging (CT and MRI) and difficult physical examination. Studies using {sup 18}F-FDG PET/CT combine the benefit of functional imaging with anatomical localization. To discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection. These patients underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information including laboratory values of inflammatory markers, postoperative notes and pathology results to evaluate the diagnostic accuracy of FDG PET/CT. An exempt status for this retrospective review was approved by the Institutional Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed. FDG PET/CT should be the first-line cross-sectional imaging study in

  12. Infected hardware after surgical stabilization of rib fractures: Outcomes and management experience.

    Science.gov (United States)

    Thiels, Cornelius A; Aho, Johnathon M; Naik, Nimesh D; Zielinski, Martin D; Schiller, Henry J; Morris, David S; Kim, Brian D

    2016-05-01

Surgical stabilization of rib fracture (SSRF) is increasingly used for treatment of rib fractures. There are few data on the incidence, risk factors, outcomes, and optimal management strategy for hardware infection in these patients. We aimed to develop and propose a management algorithm to help others treat this potentially morbid complication. We retrospectively searched a prospectively collected rib fracture database for the records of all patients who underwent SSRF from August 2009 through March 2014 at our institution. We then analyzed for the subsequent development of hardware infection among these patients. Standard descriptive analyses were performed. Among 122 patients who underwent SSRF, most (73%) were men; the mean (SD) age was 59.5 (16.4) years, and median (interquartile range [IQR]) Injury Severity Score was 17 (13-22). The median number of rib fractures was 7 (5-9) and 48% of the patients had flail chest. Mortality at 30 days was 0.8%. Five patients (4.1%) had a hardware infection on mean (SD) postoperative day 12.0 (6.6). Median Injury Severity Score (17 [range, 13-42]) and hospital length of stay (9 days [6-37 days]) in these patients were similar to the values for those without infection (17 [range, 13-22] and 9 days [6-12 days], respectively). Patients with infection underwent a median (IQR) of 2 (range, 2-3) additional operations, which included wound debridement (n = 5), negative-pressure wound therapy (n = 3), and antibiotic beads (n = 4). Hardware was removed in 3 patients at 140, 190, and 192 days after index operation. Cultures grew only gram-positive organisms. No patients required reintervention after hardware removal, and all achieved bony union and were taking no narcotics or antibiotics at the latest follow-up. Although uncommon, hardware infection after SSRF carries considerable morbidity. With the use of an aggressive multimodal management strategy, however, bony union and favorable long-term outcomes can be achieved

  13. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2013-01-01

The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Optimization techniques are featured throughout the text. It covers parallelism in depth with...

  14. Migration processes in SCO member states

    Directory of Open Access Journals (Sweden)

    Valentina Sergeevna Antonyuk

    2012-12-01

Full Text Available The article concerns the modern state and development of migration processes in SCO member states. Statistical analysis was applied as the main method of research. The article shows that migration streams between SCO member states are rather intensive, and the problem of labor migration is becoming more and more urgent. The countries consuming and supplying labour force are clearly differentiated in the region. For some countries, labor export is a key sector of the economy. At the same time, interstate relations between SCO member states are sometimes rather disputed. The most urgent factors causing the development of migration processes in the region were determined; among them, the factor of growing outflows from China is especially noted. It is noted that migration processes are discussed by SCO member states nowadays in terms of illegal migration and the international criminality connected with it, meaning that the question of labor migration is a real problem. It is indicated that the creation of a specific joint commission on migration policy affiliated with the Council of Foreign Ministers of SCO member states is a necessary condition of effective interaction on migration questions within the framework of the Shanghai Cooperation Organization.

  15. Co-verification of hardware and software for ARM SoC design

    CERN Document Server

    Andrews, Jason

    2004-01-01

Hardware/software co-verification is how to make sure that embedded system software works correctly with the hardware, and that the hardware has been properly designed to run the software successfully - before large sums are spent on prototypes or manufacturing. This is the first book to apply this verification technique to the rapidly growing field of embedded systems-on-a-chip (SoC). As traditional embedded system design evolves into single-chip design, embedded engineers must be armed with the necessary information to make educated decisions about which tools and methodology to deploy. SoC verification requires a mix of expertise from the disciplines of microprocessor and computer architecture, logic design and simulation, and C and Assembly language embedded software. Until now, the relevant information on how it all fits together has not been available. Andrews, a recognized expert, provides in-depth information about how co-verification really works, how to be successful using it, and pitfalls to avoid. H...

  16. Development of a Multi-Arm Mobile Robot for Nuclear Decommissioning Tasks

    Directory of Open Access Journals (Sweden)

    Mohamed J. Bakari

    2008-11-01

    Full Text Available This paper concerns the design of a two-arm mobile delivery platform for application within nuclear decommissioning tasks. The adoption of the human arm as a model of manoeuvrability, scale and dexterity is the starting point for operation of two seven-function arms within the context of nuclear decommissioning tasks, the selection of hardware and its integration, and the development of suitable control methods. The forward and inverse kinematics for the manipulators are derived and the proposed software architecture identified to control the movements of the arm joints and the performance of selected decommissioning tasks. We discuss the adoption of a BROKK demolition machine as a mobile platform and the integration with its hydraulic system to operate the two seven-function manipulators separately. The paper examines the modelling and development of a real-time control method using Proportional-Integral-Derivative (PID and Proportional-Integral-Plus (PIP control algorithms in the host computer with National Instruments functions and tools to control the manipulators and obtain feedback through wireless communication. Finally we consider the application of a third party device, such as a personal mobile phone, and its interface with LabVIEW software in order to operate the robot arms remotely.
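
The abstract mentions PID control of the manipulator joints. A minimal discrete-time PID loop can be sketched in Python (the paper's implementation uses LabVIEW and National Instruments tools; the gains, time step, and single-integrator joint model below are invented for illustration, not taken from the BROKK platform):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy joint model toward a 1.0 rad setpoint. The plant is a pure
# integrator: commanded velocity u integrates into the joint angle.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(2000):  # 20 s of simulated time
    u = pid.update(1.0, angle)
    angle += u * 0.01  # plant: d(angle)/dt = u
```

After 20 simulated seconds the joint angle settles at the setpoint; the integral term removes any steady-state offset.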

  17. Digital Hardware Realization of Forward and Inverse Kinematics for a Five-Axis Articulated Robot Arm

    Directory of Open Access Journals (Sweden)

    Bui Thi Hai Linh

    2015-01-01

    Full Text Available When robot arm performs a motion control, it needs to calculate a complicated algorithm of forward and inverse kinematics which consumes much CPU time and certainty slows down the motion speed of robot arm. Therefore, to solve this issue, the development of a hardware realization of forward and inverse kinematics for an articulated robot arm is investigated. In this paper, the formulation of the forward and inverse kinematics for a five-axis articulated robot arm is derived firstly. Then, the computations algorithm and its hardware implementation are described. Further, very high speed integrated circuits hardware description language (VHDL is applied to describe the overall hardware behavior of forward and inverse kinematics. Additionally, finite state machine (FSM is applied for reducing the hardware resource usage. Finally, for verifying the correctness of forward and inverse kinematics for the five-axis articulated robot arm, a cosimulation work is constructed by ModelSim and Simulink. The hardware of the forward and inverse kinematics is run by ModelSim and a test bench which generates stimulus to ModelSim and displays the output response is taken in Simulink. Under this design, the forward and inverse kinematics algorithms can be completed within one microsecond.
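
To illustrate the kind of computation being moved into hardware here, the following sketch shows closed-form forward and inverse kinematics in Python rather than VHDL, and for a deliberately simplified planar two-link arm rather than the paper's five-axis arm (link lengths are arbitrary). It verifies that the inverse solution round-trips through the forward map:

```python
import math

def forward(theta1, theta2, l1=0.3, l2=0.2):
    """Planar 2-link forward kinematics: joint angles (rad) -> end effector (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y, l1=0.3, l2=0.2):
    """Closed-form inverse kinematics (elbow-down solution)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp for numerical safety
    theta1 = math.atan2(y, x) - math.atan2(
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2)
    )
    return theta1, theta2

# Round trip: IK of a reachable point should reproduce the joint angles.
x, y = forward(0.4, 0.9)
t1, t2 = inverse(x, y)
xr, yr = forward(t1, t2)
```

A full five-axis solution would chain Denavit-Hartenberg transforms per joint, but the FK/IK round-trip check above is the same validation idea as the ModelSim/Simulink co-simulation described in the abstract.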

  18. Using Innovative Techniques for Manufacturing Rocket Engine Hardware

    Science.gov (United States)

    Betts, Erin M.; Reynolds, David C.; Eddleman, David E.; Hardin, Andy

    2011-01-01

Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using the Workhorse Gas Generator (WHGG) test setup at MSFC's East Test Area test stand 116, the duct was subject to extreme J-2X gas generator environments and endured a total of 538 seconds of hot-fire time. The duct survived the testing and was inspected after the test. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  19. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    International Nuclear Information System (INIS)

    Arcidiacono, R.; Brigljevic, V.; Bruno, G.; Cano, E.; Cittolin, S.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Gulmini, M.; Gutleber, J.; Jacobs, C.; Kreuzer, P.; Lo Presti, G.; Magrans, I.; Marinelli, N.; Maron, G.; Meijers, F.; Meschi, E.; Murray, S.; Nafria, M.; Oh, A.; Orsini, L.; Pieri, M.; Pollet, L.; Racz, A.; Rosinsky, P.; Schwick, C.; Sphicas, P.; Varela, J.

    2005-01-01

A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. This is based on a number of standalone applications for different hardware modules, and on the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface
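
A toy version of such an XML-driven device description might look like the following. The element names, register fields and values here are invented for illustration; they are not the CMS schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical description of one DAQ module: a set of named registers
# with addresses and default values, as a control program might consume it.
CONFIG = """
<module name="adc0" type="ADC">
  <register addr="0x00" name="threshold" value="128"/>
  <register addr="0x04" name="gain" value="2"/>
</module>
"""

def parse_module(xml_text):
    """Turn a <module> description into a dict the control code can apply."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "type": root.get("type"),
        "registers": {
            reg.get("name"): int(reg.get("value"))
            for reg in root.findall("register")
        },
    }

cfg = parse_module(CONFIG)
```

The appeal of the approach in the abstract is that the same XML document can drive configuration, control and test code for heterogeneous modules behind one interface.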

  20. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    Energy Technology Data Exchange (ETDEWEB)

Arcidiacono, R. [Massachusetts Institute of Technology, Cambridge, MA (United States); Brigljevic, V. [CERN, Geneva (Switzerland); Rudjer Boskovic Institute, Zagreb (Croatia); Bruno, G. [CERN, Geneva (Switzerland); Cano, E. [CERN, Geneva (Switzerland); Cittolin, S. [CERN, Geneva (Switzerland); Erhan, S. [University of California, Los Angeles, Los Angeles, CA (United States); Gigi, D. [CERN, Geneva (Switzerland); Glege, F. [CERN, Geneva (Switzerland); Gomez-Reino, R. [CERN, Geneva (Switzerland); Gulmini, M. [INFN-Laboratori Nazionali di Legnaro, Legnaro (Italy); CERN, Geneva (Switzerland); Gutleber, J. [CERN, Geneva (Switzerland); Jacobs, C. [CERN, Geneva (Switzerland); Kreuzer, P. [University of Athens, Athens (Greece); Lo Presti, G. [CERN, Geneva (Switzerland); Magrans, I. [CERN, Geneva (Switzerland) and Electronic Engineering Department, Universidad Autonoma de Barcelona, Barcelona (Spain)]. E-mail: ildefons.magrans@cern.ch; Marinelli, N. [Institute of Accelerating Systems and Applications, Athens (Greece); Maron, G. [INFN-Laboratori Nazionali di Legnaro, Legnaro (Italy); Meijers, F. [CERN, Geneva (Switzerland); Meschi, E. [CERN, Geneva (Switzerland); Murray, S. [CERN, Geneva (Switzerland); Nafria, M. [Electronic Engineering Department, Universidad Autonoma de Barcelona, Barcelona (Spain); Oh, A. [CERN, Geneva (Switzerland); Orsini, L. [CERN, Geneva (Switzerland); Pieri, M. [University of California, San Diego, San Diego, CA (United States); Pollet, L. [CERN, Geneva (Switzerland); Racz, A. [CERN, Geneva (Switzerland); Rosinsky, P. [CERN, Geneva (Switzerland); Schwick, C. [CERN, Geneva (Switzerland); Sphicas, P. [University of Athens, Athens (Greece); CERN, Geneva (Switzerland); Varela, J. [LIP, Lisbon (Portugal); CERN, Geneva (Switzerland)

    2005-07-01

A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. This is based on a number of standalone applications for different hardware modules, and on the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.

  1. Pre-migration persecution, post-migration stressors and resources, and post-migration mental health: A study of severely traumatized U.S. Arab immigrant women

    Science.gov (United States)

    Norris, Anne E.; Aroian, Karen J.; Nickerson, David

    2015-01-01

    Background Competing theories exist regarding the importance of pre-migration trauma as compared to post-migration stressors and resources with respect to the risk to immigrant mental health. Objective To examine how type of pre-migration trauma, post-migration stressors, and post-migration resources differentially predict PTSD and MDD symptomatology in Arab immigrant women who have been exposed to pre-migration trauma. Design Descriptive; using multinomial logistic regression to explain membership in one of four groups: (a) PTSD only (n = 14); (b) major depressive disorder (MDD) (n = 162), (c) Co-Morbid PTSD-MDD (n = 148), (d) Subclinical Symptoms (n = 209). Results Post-immigration related stressors (as measured by the Demands of Immigration (DI)) had the strongest effect: Parameter estimates indicated that a unit increase in DI scores was associated with a nearly 17 fold increase in the likelihood of being in the Co-morbid relative to the Subclinical group, and a nearly 2.5 increase in the likelihood of being in the Co-Morbid relative to the MDD only group (p Arab immigrant women for depression and PTSD is important given high levels observed in this community based sample. PMID:21835819

  2. Migrations of theory, method and practice: a reflection on themes in migration studies (Review article)

    OpenAIRE

    Palmary, Ingrid

    2009-01-01

    In this review article, I offer some reflection on three themes in migration research, namely, the categorisation and quantification of migration, the role of trauma and distress in such categorisation, and the feminisation of migration. I was prompted to explore these three themes after reading a recent publication on migration in southern Africa (edited by Kok, Gelderblom, Oucho and Van Zyl, 2006). In this paper I raise these as three areas that appear to be determining the boundaries of th...

  3. College Student Migration.

    Science.gov (United States)

    Fenske, Robert H.; And Others

    This study examines the background characteristics of two large national samples of first-time enrolled freshmen who (a) attended college within their state of residence but away from their home community, (b) migrated to a college in an adjacent state, (c) migrated to a college in a distant state, and (d) attended college in their home community.…

  4. Combining hardware and simulation for datacenter scaling studies

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Pilimon, Artur; Thrane, Jakob

    2017-01-01

    and simulation to illustrate the scalability and performance of datacenter networks. We simulate a Datacenter network and interconnect it with real world traffic generation hardware. Analysis of the introduced packet conversion and virtual queueing delays shows that the conversion efficiency is at the order...

  5. Diseño hardware de una tarjeta de control y comunicaciones

    OpenAIRE

    Sánchez Salvador, David

    2017-01-01

Today we live alongside countless electronic systems, from small wearables such as smart wristbands to large installations such as radars, including devices without any programmed logic, such as a radio. These electronic devices are generally developed around software, setting aside differences in complexity by application, but all of them are built on top of hardware. That hardware may be a simple PCB with a few components or a set of ...

  6. Performance Estimation for Hardware/Software codesign using Hierarchical Colored Petri Nets

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Madsen, Jan; Jerraya, Ahmed-Amine

    1998-01-01

    This paper presents an approach for abstract modeling of the functional behavior of hardware architectures using Hierarchical Colored Petri Nets (HCPNs). Using HCPNs as architectural models has several advantages such as higher estimation accuracy, higher flexibility, and the need for only one...... estimation tool. This makes the approach very useful for designing component models used for performance estimation in Hardware/Software Codesign frameworks such as the LYCOS system. The paper presents the methodology and rules for designing component models using HCPNs. Two examples of architectural models...
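
The core mechanism behind any Petri-net-based architectural model is token firing. The sketch below shows a flat, uncolored place/transition net in Python, far simpler than the hierarchical colored nets the paper uses (no token colors, no hierarchy; the example net and its names are invented for illustration):

```python
class PetriNet:
    """Minimal place/transition Petri net: places hold token counts,
    transitions consume input tokens and produce output tokens."""

    def __init__(self, marking):
        self.marking = dict(marking)  # place name -> token count
        self.transitions = {}         # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy model of an execution unit: tokens in 'ready' move to 'done'
# each time the 'exec' transition fires.
net = PetriNet({"ready": 2, "done": 0})
net.add_transition("exec", inputs={"ready": 1}, outputs={"done": 1})
net.fire("exec")
net.fire("exec")
```

In a performance-estimation setting, transitions would additionally carry timing annotations so that firing sequences yield latency estimates for the modeled component.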

  7. Trustworthy reconfigurable systems enhancing the security capabilities of reconfigurable hardware architectures

    CERN Document Server

    Feller, Thomas

    2014-01-01

Thomas Feller sheds some light on trust anchor architectures for trustworthy reconfigurable systems. He presents novel concepts enhancing the security capabilities of reconfigurable hardware. Almost invisible to the user, many computer systems are embedded into everyday artifacts, such as cars, ATMs, and pacemakers. The significant growth of this market segment within the recent years enforced a rethinking with respect to the security properties and the trustworthiness of these systems. The trustworthiness of a system in general equates to the integrity of its system components. Hardware-b

  8. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second based on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.
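
The abstract does not spell out GBSA's grouping criterion, so the sketch below only shows the general idea behind group-based classification: partition the rule set so that a packet is matched against its own group rather than the full list. The toy rules, fields, and grouping key (protocol) are invented for illustration:

```python
from collections import defaultdict

# Toy rule set: (protocol, destination-port range, action), in priority order.
RULES = [
    ("tcp", (80, 80), "allow-http"),
    ("tcp", (443, 443), "allow-https"),
    ("udp", (53, 53), "allow-dns"),
    ("tcp", (0, 1023), "log-low-ports"),
]

def build_groups(rules):
    """Preprocessing step: bucket rules by protocol so lookups only
    scan the relevant group."""
    groups = defaultdict(list)
    for proto, port_range, action in rules:
        groups[proto].append((port_range, action))
    return groups

def classify(groups, proto, dst_port, default="drop"):
    """Return the first matching rule's action within the packet's group."""
    for (lo, hi), action in groups.get(proto, []):
        if lo <= dst_port <= hi:
            return action
    return default

groups = build_groups(RULES)
```

Grouping shrinks the linear search per packet; hardware implementations can then match the rules of a group in parallel.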

  9. Pipe rupture hardware minimization in pressurized water reactor system

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Szyslowski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.; Quinones, D.; Server, W.

    1987-01-01

    For much of the high energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but the overall safety and integrity of the plant are improved since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in (152 mm) nominal pipe size that have passed a screening criteria designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in (76 mm) diameter (outside containment) can qualify for pipe rupture hardware elimination.
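The BVPS-2 size thresholds quoted above can be condensed into a rough screening check. This is an illustrative distillation of the abstract only; a real leak-before-break qualification rests on the full fracture mechanics analyses (fatigue crack propagation, leakage, and stability), not a size lookup, and the function name and parameters here are invented.

```python
# Illustrative-only screen based on the size thresholds in the abstract.
# Not a substitute for the actual LBB fracture mechanics evaluation.
def may_omit_rupture_hardware(material, location, nps_in, passed_screening=True):
    """Rough BVPS-2 thresholds: stainless lines inside containment >= 6 in
    nominal pipe size, some ferritic lines outside containment >= 3 in."""
    if not passed_screening:  # systems like feedwater are screened out
        return False
    if material == "stainless" and location == "inside" and nps_in >= 6:
        return True
    if material == "ferritic" and location == "outside" and nps_in >= 3:
        return True
    return False

print(may_omit_rupture_hardware("stainless", "inside", 6))   # True
print(may_omit_rupture_hardware("ferritic", "outside", 2))   # False
```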

  10. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The presented design was shown to be capable of exploiting hardware averaging to improve noise in neural recordings with cuff electrodes, and can accommodate the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
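The 1/√N noise reduction the abstract cites is easy to verify numerically: averaging N channels that see the same signal but carry independent noise reduces the noise standard deviation by roughly √N. The simulation below is a generic check of that statistical rule, not a model of the authors' amplifier circuit.

```python
# Numerical check of the 1/sqrt(N) averaging rule: N channels with
# independent unit-variance Gaussian noise, averaged per sample.
import math
import random

def noise_std(n_channels, n_samples=20000, seed=1):
    """Estimate the std of the per-sample average of n_channels noise sources."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        avg = sum(rng.gauss(0.0, 1.0) for _ in range(n_channels)) / n_channels
        total += avg * avg
    return math.sqrt(total / n_samples)

single = noise_std(1)
averaged = noise_std(8)
# The ratio should land near sqrt(8) ~ 2.83.
print(round(single / averaged, 1))
```

In the paper the measured reduction is "1/√N or less" because real channels share a common source resistance, which correlates part of the noise; the idealized simulation above assumes fully independent channels.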

  11. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    Directory of Open Access Journals (Sweden)

    D. Chitra Devi

    2016-01-01

    Full Text Available Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence they have to be assigned to the most appropriate VMs at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, which may execute the independent tasks on multiple VMs or on multiple cores of the same VM. Also, the jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make the cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods.
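A weighted placement policy in the spirit of the abstract can be sketched in a few lines: each incoming nonpreemptive task is placed on the VM whose load, relative to its capacity weight, would stay smallest. This is a hedged simplification; the paper's improved weighted round robin additionally accounts for task interdependencies, and the function and data shapes below are invented for illustration.

```python
# Simplified weighted placement: pick the VM minimizing weighted load
# (load / capacity weight) after the task is added. Nonpreemptive tasks
# are never migrated after this initial placement.
def assign(tasks, vm_weights):
    """tasks: list of (task_id, length); vm_weights: VM capacity weights."""
    load = [0.0] * len(vm_weights)
    placement = {}
    for task_id, length in tasks:
        vm = min(range(len(vm_weights)),
                 key=lambda i: (load[i] + length) / vm_weights[i])
        load[vm] += length
        placement[task_id] = vm
    return placement, load

# VM 0 has twice the capacity of VM 1, so it should absorb the heavy tasks.
tasks = [("t1", 40), ("t2", 10), ("t3", 30), ("t4", 20)]
placement, load = assign(tasks, vm_weights=[2.0, 1.0])
print(placement)  # t1 and t3 land on VM 0; t2 and t4 on VM 1
```

Note how the weights steer long tasks toward the more capable VM, which is the intuition behind weighting a round robin by VM capability rather than rotating uniformly.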

  12. National and international graduate migration flows.

    Science.gov (United States)

    Mosca, Irene; Wright, Robert E

    2010-01-01

    This article examines the nature of national and international graduate migration flows in the UK. Migration equations are estimated with microdata from a matched dataset of Students and Destinations of Leavers from Higher Education, information collected by the Higher Education Statistical Agency. The probability of migrating is related to a set of observable characteristics using multinomial logit regression. The analysis suggests that migration is a selective process with graduates with certain characteristics having considerably higher probabilities of migrating, both to other regions of the UK and abroad.
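The multinomial logit mentioned above relates each migration outcome's probability to a softmax over linear scores of the observed characteristics. The minimal sketch below shows only that functional form; the features, outcome categories, and coefficient values are invented for illustration and are not the article's estimates.

```python
# Multinomial logit probabilities: softmax over per-outcome linear scores.
# Outcomes here (stay, move within the UK, move abroad) mirror the setting
# described in the abstract; the coefficients are made up.
import math

def multinomial_logit(x, coefs):
    """x: feature vector; coefs: one coefficient vector per outcome."""
    scores = [sum(b * xi for b, xi in zip(beta, x)) for beta in coefs]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Features: [intercept, degree_class, age]; base outcome has zero coefficients.
coefs = [[0.0, 0.0, 0.0],      # stay (reference category)
         [-1.0, 0.8, -0.02],   # move within the UK
         [-2.5, 1.2, -0.05]]   # move abroad
probs = multinomial_logit([1.0, 2.0, 25.0], coefs)
print([round(p, 3) for p in probs])  # three probabilities summing to 1
```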

  13. Nuss bar migrations: occurrence and classification

    International Nuclear Information System (INIS)

    Binkovitz, Lauren E.; Binkovitz, Larry A.; Zendejas, Benjamin; Moir, Christopher R.

    2016-01-01

    Pectus excavatum results from dorsal deviation of the sternum causing narrowing of the anterior-posterior diameter of the chest. It can result in significant cosmetic deformities and cardiopulmonary compromise if severe. The Nuss procedure is a minimally invasive technique that involves placing a thin horizontally oriented metal bar below the dorsal sternal apex for correction of the pectus deformity. To identify the frequency and types of Nuss bar migrations, to present a new categorization of bar migrations, and to present examples of true migrations and pseudomigrations. We retrospectively reviewed the electronic medical records and all pertinent radiologic studies of 311 pediatric patients who underwent a Nuss procedure. We evaluated the frequency and type of bar migrations. Bar migration was demonstrated in 23 of 311 patients (7%) and occurred within a mean period of 26 days after surgery. Bar migrations were subjectively defined as deviation of the bar from the position demonstrated on the immediate postoperative radiographs and categorized as superior, inferior, rotation, lateral or flipped using a new classification system. Sixteen of the 23 migrations required re-operation. Nuss bar migration can be diagnosed with careful evaluation of serial radiographs. Nuss bar migration has a wide variety of appearances and requires exclusion of pseudomigration resulting from changes in patient positioning between radiologic examinations. (orig.)

  14. Nuss bar migrations: occurrence and classification

    Energy Technology Data Exchange (ETDEWEB)

    Binkovitz, Lauren E.; Binkovitz, Larry A. [Mayo Clinic, Department of Radiology, Rochester, MN (United States); Zendejas, Benjamin; Moir, Christopher R. [Mayo Clinic, Department of Surgery, Rochester, MN (United States)

    2016-12-15

    Pectus excavatum results from dorsal deviation of the sternum causing narrowing of the anterior-posterior diameter of the chest. It can result in significant cosmetic deformities and cardiopulmonary compromise if severe. The Nuss procedure is a minimally invasive technique that involves placing a thin horizontally oriented metal bar below the dorsal sternal apex for correction of the pectus deformity. To identify the frequency and types of Nuss bar migrations, to present a new categorization of bar migrations, and to present examples of true migrations and pseudomigrations. We retrospectively reviewed the electronic medical records and all pertinent radiologic studies of 311 pediatric patients who underwent a Nuss procedure. We evaluated the frequency and type of bar migrations. Bar migration was demonstrated in 23 of 311 patients (7%) and occurred within a mean period of 26 days after surgery. Bar migrations were subjectively defined as deviation of the bar from the position demonstrated on the immediate postoperative radiographs and categorized as superior, inferior, rotation, lateral or flipped using a new classification system. Sixteen of the 23 migrations required re-operation. Nuss bar migration can be diagnosed with careful evaluation of serial radiographs. Nuss bar migration has a wide variety of appearances and requires exclusion of pseudomigration resulting from changes in patient positioning between radiologic examinations. (orig.)

  15. Hardware Evaluation of the Horizontal Exercise Fixture with Weight Stack

    Science.gov (United States)

    Newby, Nate; Leach, Mark; Fincke, Renita; Sharp, Carwyn

    2009-01-01

    HEF with weight stack seems to be a very sturdy and reliable exercise device that should function well in a bed rest training setting. A few improvements should be made to both the hardware and software to improve usage efficiency, but largely, this evaluation has demonstrated HEF's robustness. The hardware offers loading to muscles, bones, and joints, potentially sufficient to mitigate the loss of muscle mass and bone mineral density during long-duration bed rest campaigns. With some minor modifications, the HEF with weight stack equipment provides the best currently available means of performing squat, heel raise, prone row, bench press, and hip flexion/extension exercise in a supine orientation.

  16. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system is described that will be used for on-line filter and second stage trigger applications. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on hardware aspects, in particular the modularity, processor communication and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  17. Climate change-related migration and infectious disease.

    Science.gov (United States)

    McMichael, Celia

    2015-01-01

    Anthropogenic climate change will have significant impacts on both human migration and population health, including infectious disease. It will amplify and alter migration pathways, and will contribute to the changing ecology and transmission dynamics of infectious disease. However, there has been limited consideration of the intersections between migration and health in the context of a changing climate. This article argues that climate change-related migration - in conjunction with other drivers of migration - will contribute to changing profiles of infectious disease. It considers infectious disease risks for different climate-related migration pathways, including: forced displacement, slow-onset migration particularly to urban-poor areas, planned resettlement, and labor migration associated with climate change adaptation initiatives. Migration can reduce vulnerability to climate change, but it is critical to better understand and respond to health impacts - including infectious diseases - for migrant populations and host communities.

  18. Influence of visual feedback on human task performance in ITER remote handling

    Energy Technology Data Exchange (ETDEWEB)

    Schropp, Gwendolijn Y.R., E-mail: g.schropp@heemskerk-innovative.nl [Utrecht University, Utrecht (Netherlands); Heemskerk Innovative Technology, Noordwijk (Netherlands); Heemskerk, Cock J.M. [Heemskerk Innovative Technology, Noordwijk (Netherlands); Kappers, Astrid M.L.; Tiest, Wouter M. Bergmann [Helmholtz Institute-Utrecht University, Utrecht (Netherlands); Elzendoorn, Ben S.Q. [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Cluster and ITER-NL, PO Box 1207, 3430 BE Nieuwegein (Netherlands); Bult, David [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Cluster and ITER-NL, PO Box 1207, 3430 BE Nieuwegein (Netherlands)

    2012-08-15

    Highlights: ► The performance of human operators in an ITER-like test facility for remote handling. ► Different sources of visual feedback influence how fast one can complete a maintenance task. ► Insights learned could be used in design of operator work environment or training procedures. - Abstract: In ITER, maintenance operations will be largely performed by remote handling (RH). Before ITER can be put into operation, safety regulations and licensing authorities require proof of maintainability for critical components. Part of the proof will come from using standard components and procedures. Additional verification and validation is based on simulation and hardware tests in 1:1 scale mockups. The Master Slave manipulator system (MS2) Benchmark Product was designed to implement a reference set of maintenance tasks representative for ITER remote handling. Experiments were performed with two versions of the Benchmark Product. In both experiments, the quality of visual feedback varied by exchanging direct view with indirect view (using video cameras) in order to measure and analyze its impact on human task performance. The first experiment showed that both experienced and novice RH operators perform a simple task significantly better with direct visual feedback than with camera feedback. A more complex task showed a large variation in results and could not be completed by many novice operators. Experienced operators commented on both the mechanical design and visual feedback. In a second experiment, a more elaborate task was tested on an improved Benchmark product. Again, the task was performed significantly faster with direct visual feedback than with camera feedback. In post-test interviews, operators indicated that they regarded the lack of 3D perception as the primary factor hindering their performance.

  19. Influence of visual feedback on human task performance in ITER remote handling

    International Nuclear Information System (INIS)

    Schropp, Gwendolijn Y.R.; Heemskerk, Cock J.M.; Kappers, Astrid M.L.; Tiest, Wouter M. Bergmann; Elzendoorn, Ben S.Q.; Bult, David

    2012-01-01

    Highlights: ► The performance of human operators in an ITER-like test facility for remote handling. ► Different sources of visual feedback influence how fast one can complete a maintenance task. ► Insights learned could be used in design of operator work environment or training procedures. - Abstract: In ITER, maintenance operations will be largely performed by remote handling (RH). Before ITER can be put into operation, safety regulations and licensing authorities require proof of maintainability for critical components. Part of the proof will come from using standard components and procedures. Additional verification and validation is based on simulation and hardware tests in 1:1 scale mockups. The Master Slave manipulator system (MS2) Benchmark Product was designed to implement a reference set of maintenance tasks representative for ITER remote handling. Experiments were performed with two versions of the Benchmark Product. In both experiments, the quality of visual feedback varied by exchanging direct view with indirect view (using video cameras) in order to measure and analyze its impact on human task performance. The first experiment showed that both experienced and novice RH operators perform a simple task significantly better with direct visual feedback than with camera feedback. A more complex task showed a large variation in results and could not be completed by many novice operators. Experienced operators commented on both the mechanical design and visual feedback. In a second experiment, a more elaborate task was tested on an improved Benchmark product. Again, the task was performed significantly faster with direct visual feedback than with camera feedback. In post-test interviews, operators indicated that they regarded the lack of 3D perception as the primary factor hindering their performance.

  20. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Directory of Open Access Journals (Sweden)

    Carvalho Paulo F.

    2018-01-01

    Full Text Available Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (tokamak). These ionized particles, heavier isotopes of hydrogen, are the main constituents of the plasma, which is kept at high temperatures (millions of degrees Celsius). Due to the high temperatures and magnetic confinement, the plasma is exposed to several sources of instability, which require a set of procedures from the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunications Computing Architecture (AdvancedTCA®) standard introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications, which require transporting large amounts of data (TB) at high transfer rates (Gb/s) and ensuring high availability, including features such as reliability, serviceability, and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process them, store them for later analysis, make critical decisions in real time, and provide status reports on either the experiment itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, and notify the system operator of events that have occurred, decisions taken, and changes implemented. Therefore, for everything to work in compliance with specifications, the instrumentation must include hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and under maintenance, store collected information, update firmware and installed software modules, and configure and handle alarms to detect possible system failures and prevent emergency
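The sensor-reading and alarm-handling duties described above reduce to a simple monitoring loop: poll sensor values, compare them against limits, raise alarms on violations, and log every check as an event. The sketch below illustrates only that pattern; the sensor names, limits, and function signature are invented and do not correspond to any ATCA shelf manager or IPMI API.

```python
# Hedged sketch of a monitor-and-alarm pass: readings outside their
# [low, high] limits become alarms; every check is logged as an event.
def check_sensors(readings, limits):
    """Return (alarms, events) for one polling pass over the sensors."""
    alarms, events = [], []
    for name, value in readings.items():
        low, high = limits[name]
        if not (low <= value <= high):
            alarms.append(name)
            events.append(f"ALARM {name}={value} outside [{low}, {high}]")
        else:
            events.append(f"OK {name}={value}")
    return alarms, events

# Illustrative sensors on an AdvancedTCA-style board.
limits = {"board_temp_C": (0, 70), "fan_rpm": (2000, 12000)}
alarms, events = check_sensors({"board_temp_C": 82, "fan_rpm": 5400}, limits)
print(alarms)  # ['board_temp_C']
```

In a real AdvancedTCA shelf this role is played by the shelf manager polling IPMI sensors, with events forwarded to the operator and archived for later analysis, as the record describes.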